The partial checksum for the TCP/UDP pseudo-header is calculated and then
added to the checksum for the rest of the packet. I started to write
functions for such incremental checksum calculation, but then saw that they
are already implemented in libdnet.
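For illustration, here is a minimal sketch of the incremental-sum idea in
generic C (the helper names are hypothetical, not libdnet's actual API): sum
the pseudo-header into a 32-bit accumulator, keep adding the rest of the
segment, then fold the carries once at the end.

    #include <stddef.h>
    #include <stdint.h>

    /* Add a buffer's 16-bit words into a running 32-bit sum; chain calls
     * for the pseudo-header and then the TCP/UDP segment. */
    static uint32_t cksum_add(const void *buf, size_t len, uint32_t sum)
    {
        const uint16_t *w = buf;

        while (len > 1) {
            sum += *w++;
            len -= 2;
        }
        if (len == 1)                   /* odd trailing byte */
            sum += *(const uint8_t *)w;
        return sum;
    }

    /* Fold the carries and complement to produce the final checksum. */
    static uint16_t cksum_fold(uint32_t sum)
    {
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }

Usage is then cksum_fold(cksum_add(tcp, tcplen, cksum_add(&pseudo,
sizeof(pseudo), 0))).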
packet is OK from the get-go rather than running basic checks of its own.
In a nutshell this patch checks to make sure:
1) there is enough room for an IP header in the amount of bytes read
2) the IP version number is correct
3) the IP length fields are at least as big as the standard header
4) the IP packet received isn't a fragment, or is the initial fragment
5) that next level headers seem reasonable
For TCP, this checks that there is enough room for the header in the number
of bytes read, and that any option lengths are correct. The options checked
are MSS, WScale, SackOK, Sack, and Timestamp.
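In sketch form, the IP-level checks (1) through (4) look roughly like this
(the function name and return convention are illustrative, not necessarily
what the patch uses):

    #include <netinet/in.h>
    #include <netinet/ip.h>

    static int ip_looks_valid(const struct ip *ip, unsigned int bytes_read)
    {
        unsigned int hlen;

        if (bytes_read < sizeof(struct ip))   /* 1) room for a header */
            return 0;
        if (ip->ip_v != 4)                    /* 2) version */
            return 0;
        hlen = ip->ip_hl * 4;
        if (hlen < sizeof(struct ip) ||       /* 3) length fields sane */
            ntohs(ip->ip_len) < hlen)
            return 0;
        if (ntohs(ip->ip_off) & IP_OFFMASK)   /* 4) non-initial fragment */
            return 0;
        return 1;   /* 5) next-level headers are checked separately */
    }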
This also fixes a bug I discovered while testing. Since the Ethernet CRC
(and other datalink-layer data) could be read and counted, the amount of IP
packet reported could be larger than what was actually there. This didn't
cause a buffer overrun or anything; it just meant that garbage data could
easily have been read instead of real packet data. Now, if validity checking
is requested and the total number of bytes read is larger than the IP
datagram's length, the length is clamped to the IP header's total length
field.
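In sketch form, the fix amounts to a clamp (variable names hypothetical):

    /* Trailing datalink bytes such as the Ethernet CRC can make the
     * capture longer than the IP datagram; trust the header's field. */
    unsigned int iplen = ntohs(ip->ip_len);
    if (bytes_read > iplen)
        bytes_read = iplen;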
This seems to work great after doing what testing I could. It's been out on
nmap-dev for a couple of weeks without any bad reports (none at all for that
matter). I reviewed this patch again before committing and it looks good as
well.
not used before because of how the logic for o.spoofsource and o.device is
handled in nmap.cc.) Its basic purpose remains in the function ipaddr2devname.
has been messed up for a while and I was having trouble reading it. I changed
it to use the mix of 8-wide tabs and spaces used by most of the rest of the
file.
only code left in Nmap that still uses rand() is in the Lua math
library. Perhaps at some point we'll need to expose high-quality random
numbers to Lua via our custom nmap library.
was always falling back to the system ARP cache. Of course this
raises the question of whether NmapArpCache is needed in the first
place. [Daniel Roethlisberger]
Remove special-purpose log functions for graphing congestion control and
other things. There's enough information provided by -d3.
Update the congestion control graph program and add a program for graphing
probes and drops.
Increase the initial ccthresh from 50 to 75.
Change how much the congestion threshold drops on packet drops.
Print group timing stats with -d2 and individual host timing stats with -d3.
Bump up the cc-graph.sh y axis limit to 80.
Put graphs in the same directory as their log file.
Go ahead and adjust timing for ICMP destination unreachables. I'm going to
commit an experimental change to the congestion control that doesn't rely on
this anymore.
Scale group congestion control increments by the inverse of the packet
receipt ratio. This gives great performance without ignoring ICMP
destination unreachable drops. This may be the breakthrough we've been
looking for.
I'll probably send a message about this later today. For information and
graphs right now, see
http://www.bamsoftware.com/wiki/Nmap/ResponseRateScaledCongestionControl.
Sorry it's only in my nmap-massping-migration branch for now, but please
give it a try.
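In sketch form, the scaling idea looks roughly like this (the names, the
exact formula, and the cap of 50 from a later entry below are illustrative,
not the committed code):

    /* Scale the group cwnd increment by the inverse of the probe
     * response ratio, so targets whose unreachables count as drops
     * still open the window at a reasonable pace. */
    static double scaled_increment(double increment, int responses,
                                   int probes_sent)
    {
        double ratio = (responses > 0)
                     ? (double)responses / probes_sent : 1.0;
        double scale = 1.0 / ratio;

        if (scale > 50.0)   /* cap on the scaling factor */
            scale = 50.0;
        return increment * scale;
    }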
Only -d2 is now needed for cc-graph.sh.
Put a cap of 50 on the cwnd scaling factor.
Fix up the order of things in the packet_ratio debugging output.
Move the packet_ratio debugging output to printAnyStats and rearrange the order
in which things are printed.
Put a header with the scan args at the top of the probes-graph.sh data files.
Add a function pcap_print_stats that shows the number of received and
dropped packets for a descriptor.
Call pcap_print_stats after a run of ultra_scan.
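A sketch of what such a helper looks like on top of libpcap's pcap_stats()
(the output formatting here is illustrative):

    #include <stdio.h>
    #include <pcap.h>

    static void pcap_print_stats(pcap_t *pd)
    {
        struct pcap_stat stats;

        if (pcap_stats(pd, &stats) == -1) {
            fprintf(stderr, "pcap_stats: %s\n", pcap_geterr(pd));
            return;
        }
        printf("pcap stats: %u packets received, %u dropped by the kernel\n",
               stats.ps_recv, stats.ps_drop);
    }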
Increase the congestion window less aggressively than before with -T4 and
-T5 (still more aggressively than with lesser timing values).
This is the merging of the code that was previously in
/nmap-exp/david/nmap-massping-migration. These are all the big changes
that get rid of massping in favor of doing host discovery using
ultra_scan.
For now, there is a toggle that turns these new changes off. Undefine
NEW_MASSPING in targets.cc to go back to the old code. All of that will
be deleted eventually.
There are likely a few more changes that will be made to this system in
the near future. Those will be made in
/nmap-exp/david/nmap-massping-migration and merged back.
Don't release this just yet, because I'm going to make a few more
commits real quick to remove some debugging stuff.
(Note to self: this merge back was from r5693 in
/nmap-exp/david/nmap-massping-migration.)