guide. They don't honor scan delay and may violate congestion control.
Both of these things should be fixed. I was going to do it by having
get_next_target_probe just return the same probe multiple times, and
then either extend struct probespec to include a source address or have
sendIPScanProbe keep track of the decoy index and fill in source
addresses. But I was stopped by timing pings. Those should certainly be
decoyed, but in the code they are just sent as they are needed, and
don't have a dispatching function to modify. What would be good is a
global queue of probes waiting to be sent you could just insert all your
spoofed probes into, and then let the rest of the code take care of
scheduling them.
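
A rough sketch of what such a global send queue might look like (not actual
Nmap code; QueuedProbe, send_queue, and enqueue_with_decoys are invented
names, and probespec/HostScanStats are only referenced through pointers to
the real scan_engine.h types):

    #include <queue>
    #include <netinet/in.h>

    struct probespec;        /* defined in scan_engine.h */
    class HostScanStats;     /* defined in scan_engine.h */

    struct QueuedProbe {
      const struct probespec *pspec;  /* what to send */
      struct in_addr src;             /* real or decoy source address */
      HostScanStats *hss;             /* target the probe belongs to */
    };

    /* Every probe, including each decoy copy and timing pings, would be
     * pushed here; the scheduler pops entries only when scan delay and
     * congestion control permit another transmission. */
    static std::queue<QueuedProbe> send_queue;

    static void enqueue_with_decoys(HostScanStats *hss,
                                    const struct probespec *pspec,
                                    const struct in_addr *decoys, int ndecoys) {
      for (int i = 0; i < ndecoys; i++) {
        QueuedProbe qp = { pspec, decoys[i], hss };
        send_queue.push(qp);
      }
    }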
This change keeps a list of probes awaiting retransmit so that
doAnyOutstandingRetransmits doesn't have to search for them. At high
scan rates this function could take 100 ms or more. Now I have measured
it to take 2 ms or less.
The variable num_probes_waiting_retransmit has been renamed
num_probes_timed_out to better describe its purpose. The new list of probes
that can be retransmitted immediately is called probes_waiting_retransmits,
but not all timed-out probes can be retransmitted immediately. I've done my
best to explain the distinction in comments.
I thought long and hard about how to address this issue, and this is
what I decided on. But of course, every little optimization brings some
complexity and the chance of making a mistake. I'd appreciate someone
taking a look at this change.
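
A minimal sketch of the bookkeeping described above, with made-up member
names standing in for the real HostScanStats fields: timed-out probes are
counted, and only the ones that are currently allowed to be resent go on the
probes_waiting_retransmits list that doAnyOutstandingRetransmits walks.

    #include <list>

    class UltraProbe;   /* defined in scan_engine.h */

    struct host_retransmit_sketch {
      /* Probes whose timeout has expired.  Not all of them may be resent
       * right away (the retransmission cap or congestion window may forbid
       * it). */
      unsigned int num_probes_timed_out;

      /* The subset that can be retransmitted immediately; kept as its own
       * list so the retransmit pass doesn't have to search every
       * outstanding probe. */
      std::list<UltraProbe *> probes_waiting_retransmits;

      void probeTimedOut(UltraProbe *probe, bool may_resend_now) {
        num_probes_timed_out++;
        if (may_resend_now)
          probes_waiting_retransmits.push_back(probe);
      }
    };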
found that five files can be open on Mac OS X: stdin, stdout, stderr, /dev/tty,
and /private/var/run/utmpx. This could cause a non-root scan at a high scan
rate to fail with the message "Too many open files". I was able to cause this
with "nmap --min-rate 5000 localhost -p-".
That command still fails with the same error message, but for an entirely
different reason. After a while, one of the connect calls fails with an errno of
22 = EINVAL (Invalid argument). Whatever this means, the socket doesn't get
closed; Nmap just reports a "Strange error from connect". The socket is still
open but Nmap doesn't include it in its count of open sockets, so it's off by
one (or more, conceivably). This allows it to try to open one too many sockets
and bomb with an error message.
Note that running as non-root is important both because it uses a connect scan
and because non-root users have a lower limit on open files.
I've tried just closing the socket when EINVAL is returned, and that fixes the
problem. But that behavior is likely to differ between systems. Plus I don't know why
EINVAL is returned; maybe it's an OS bug. This only affects localhost scans and
only at high scan rates, so I'm leaving it alone.
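
The workaround described above (closing the socket when connect() fails with
an unexpected errno such as EINVAL), shown as a hedged sketch; the function
and counter names are illustrative, not Nmap's connect-scan code:

    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <unistd.h>
    #include <sys/socket.h>

    static int start_connect_probe(int sd, const struct sockaddr *ss,
                                   socklen_t sslen, int *num_open_sockets) {
      if (connect(sd, ss, sslen) == -1 && errno != EINPROGRESS) {
        fprintf(stderr, "Strange error from connect (%d): %s\n",
                errno, strerror(errno));
        /* Without this close the descriptor leaks, the open-socket count
         * drifts low by one, and a later socket() call can hit the fd
         * limit. */
        close(sd);
        return -1;
      }
      (*num_open_sockets)++;
      return 0;
    }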
enough for host discovery, and 100 doesn't give much benefit because the probe
timeouts increase to slow the scan down. While it's faster in some cases, it
also increases the variance in scan times. For more analysis see
http://www.bamsoftware.com/wiki/Nmap/PerformanceGraphs#timeouts.
scales per-host congestion control increments in the same way those for the
group already are. This speeds scanning in some cases (particularly with few
hosts, when the group congestion control is not the limiting factor). I'm going
to experiment with raising the increment cap to allow this to have more of an
effect.
Scale host congestion control variables similarly to the way group congestion
control is scaled. For the rationale see
http://www.bamsoftware.com/wiki/Nmap/PerformanceGraphs#host-scaled.
Host cc_scale should use (numprobes_sent + numpings_sent), not (numprobes_sent + numprobes_sent).
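
A hedged sketch of what the corrected host cc_scale computation might look
like; the member names are assumptions, not Nmap's actual fields. The point
of the fix is that the "probes that should have drawn a response" term is
numprobes_sent + numpings_sent, not numprobes_sent counted twice.

    struct host_cc_sketch {
      unsigned long numprobes_sent;
      unsigned long numpings_sent;         /* timing pings also draw responses */
      unsigned long numresponses_received;

      /* Inverse of this host's packet receipt ratio, capped so one bad
       * stretch cannot blow up the congestion window increment. */
      double cc_scale(double cap) const {
        if (numresponses_received == 0)
          return cap;
        double ratio = (double) (numprobes_sent + numpings_sent)
                       / numresponses_received;
        return ratio < cap ? ratio : cap;
      }
    };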
Remove special-purpose log functions for graphing congestion control and other
things. There's enough information provided by -d3.
Update the congestion control graph program and add a program for graphing
probes and drops.
Increase the initial ccthresh from 50 to 75.
Change how much the congestion threshold drops on packet drops.
Print group timing stats with -d2 and individual host timing stats with -d3.
Bump up the cc-graph.sh y axis limit to 80.
Put graphs in the same directory as their log file.
Go ahead and adjust timing for ICMP destination unreachables. I'm going to
commit an experimental change to the congestion control that doesn't rely on
this any more.
Scale group congestion control increments by the inverse of the packet
receipt ratio. This gives great performance without ignoring ICMP
destination unreachable drops. This may be the breakthrough we've been
looking for.
I'll probably send a message about this later today. For information and
graphs right now, see
http://www.bamsoftware.com/wiki/Nmap/ResponseRateScaledCongestionControl.
Sorry it's only in my nmap-massping-migration branch for now, but please
give it a try.
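
A sketch, under invented names, of the response-rate scaled increase described
above: each response grows the group congestion window by the usual increment
multiplied by the inverse of the packet receipt ratio (probes sent divided by
responses received), so each response counts for more when most probes go
unanswered and the window can still grow during sparse host discovery.

    struct group_cc_sketch {
      double cwnd;                    /* group congestion window */
      double ccthresh;                /* slow start / congestion avoidance cutoff */
      unsigned long probes_sent;
      unsigned long responses_received;

      void ack(double scale_cap) {
        responses_received++;
        double scale = (double) probes_sent / responses_received;
        if (scale > scale_cap)        /* see the later cap of 50 on this factor */
          scale = scale_cap;
        if (cwnd < ccthresh)
          cwnd += scale;              /* slow start: about one unit per response */
        else
          cwnd += scale / cwnd;       /* congestion avoidance: about one per window */
      }
    };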
Only -d2 is now needed for cc-graph.sh.
Put a cap of 50 on the cwnd scaling factor.
Fix up the order of things in the packet_ratio debugging output.
Move the packet_ratio debugging output to printAnyStats and rearrange the order
in which things are printed.
Put a header with the scan args at the top of the probes-graph.sh data files.
Add a function pcap_print_stats that shows the number of received and dropped
packets for a descriptor.
Call pcap_print_stats after a run of ultra_scan.
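
A small sketch of what such a helper could look like, built on the real
libpcap pcap_stats() call; the function name matches the description above but
the signature and output format here are only illustrative.

    #include <pcap.h>
    #include <stdio.h>

    static void pcap_print_stats(FILE *out, pcap_t *pd) {
      struct pcap_stat stats;

      if (pcap_stats(pd, &stats) != 0) {
        fprintf(out, "pcap_stats failed: %s\n", pcap_geterr(pd));
        return;
      }
      fprintf(out, "pcap stats: %u packets received, %u dropped by kernel, "
              "%u dropped by interface\n",
              stats.ps_recv, stats.ps_drop, stats.ps_ifdrop);
    }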
Increase the congestion window less aggressively than before with -T4 and -T5
(still more aggressively than with lesser timing values).
until now that Visual C++ made a bunch of whitespace changes in an otherwise
small diff. I'll re-commit the changes in a moment without the whitespace
changes.
Print group timing stats with -d2 and individual host timing stats with -d3.
Change how much the congestion threshold drops on packet drops.
Increase the initial ccthresh from 50 to 75.
made such a big UI change with no discussion. Anyway, the message should have
gone within the ((hss->target->flags & HOST_UP) == 0) block so that the message
is printed only once per target.
Always update srtt, rttvar, and timeout for every response, even if we don't adjust congestion control or send delay variables.
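
A hedged sketch of the standard Van Jacobson style RTT bookkeeping the commit
message refers to; the variable names follow the message, while the gains and
the timeout formula are the textbook ones, not necessarily Nmap's exact
constants or clamping.

    #include <cstdlib>

    struct timeout_sketch {
      long srtt;      /* smoothed RTT estimate, microseconds */
      long rttvar;    /* RTT deviation estimate, microseconds */
      long timeout;   /* probe timeout derived from the two above */

      /* Run for every response, even when congestion control and send delay
       * are left untouched. */
      void update(long measured_rtt) {
        long delta = measured_rtt - srtt;
        srtt += delta / 8;                     /* 1/8 gain on the mean */
        rttvar += (labs(delta) - rttvar) / 4;  /* 1/4 gain on the deviation */
        timeout = srtt + 4 * rttvar;
      }
    };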
Be more careful about checking gstats->sendOK when sending retransmits.
Previously, it was only checked once per traversal of the incomplete
hosts list, which meant that enough probes could be sent in a round to
exceed the congestion window. Explanatory pictures are at
http://www.bamsoftware.com/wiki/Nmap/PerformanceGraphs#retransmit-sendOK.
This needs some more testing to see what effect it has on scan times. My
instinct says it will slow them down, because retransmits will be sent
no faster than before, and retransmits will be more likely to be
responded to, leading to more drops. On the other hand, correctly
detecting a drop and marking a host up is better than blasting
retransmits faster than they can be responded to.
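
A sketch of the loop structure the change describes, with invented names: the
send-permission check moves inside the per-probe loop, so the group congestion
window is honored before every retransmission instead of once per pass over
the incomplete-hosts list.

    #include <list>

    struct ProbeSketch { int id; };

    struct HostSketch {
      std::list<ProbeSketch *> probes_waiting_retransmits;
    };

    struct GroupSketch {
      int probes_outstanding;
      int cwnd;
      bool sendOK() const { return probes_outstanding < cwnd; }
    };

    static void retransmit_probe(GroupSketch *g, HostSketch *, ProbeSketch *) {
      g->probes_outstanding++;   /* placeholder for actually resending */
    }

    static void do_outstanding_retransmits(GroupSketch *gstats,
                                           std::list<HostSketch *> &incomplete_hosts) {
      for (HostSketch *hss : incomplete_hosts) {
        for (ProbeSketch *probe : hss->probes_waiting_retransmits) {
          /* Previously this was consulted only once per traversal of the
           * incomplete-hosts list; checking it before every probe keeps the
           * number of probes in flight within the congestion window. */
          if (!gstats->sendOK())
            return;
          retransmit_probe(gstats, hss, probe);
        }
      }
    }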
This is the merging of the code that was previously in
/nmap-exp/david/nmap-massping-migration. These are all the big changes
that get rid of massping in favor of doing host discovery using
ultra_scan.
For now, there is a toggle that turns these new changes off. Undefine
NEW_MASSPING in targets.cc to go back to the old code. All of that will
be deleted eventually.
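
A sketch of the kind of compile-time toggle described above; the surrounding
code in targets.cc is not shown, and this function is only illustrative.
Comment out the #define to fall back to the old massping-based code.

    #define NEW_MASSPING 1

    static void do_host_discovery(void) {
    #ifdef NEW_MASSPING
      /* new path: host discovery through ultra_scan */
    #else
      /* old path: legacy massping code, slated for deletion */
    #endif
    }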
There are likely a few more changes that will be made to this system in
the near future. Those will be made in
/nmap-exp/david/nmap-massping-migration and merged back.
Don't release this just yet, because I'm going to make a few more
commits real quick to remove some debugging stuff.
(Note to self: this merge back was from r5693 in
/nmap-exp/david/nmap-massping-migration.)