Shoot -- I forgot a question that I meant to ask earlier -- but now you've brought it up!
RWIN should be an "EVEN multiple" of MSS for better speed and not simply a "multiple" of MSS -- correct????
And don't worry, I'm already doing a great job of boring people to death -- anyone want some more binary number calculations?
[This message has been edited by rmrucker (edited 10-21-2000).]
Well, a RWIN of 2^30 is ~1 Gigabyte; there would be no benefit to reserving a 1GB RAM buffer for such a thing...
It's true, RWIN should be an "even multiple" of MSS. You can also look at it as an even number that is also a multiple of MSS. The only case where you'd have to consider it is if you are using an odd value for MSS and multiplying it by an odd number to get RWIN. As long as MSS is even, you can make RWIN any multiple of it.
372300 is a valid number as well, since it is an even multiple of MSS.
Besides, Microsoft doesn't make up the rules for the Internet anyway -- I believe it is the Internet Engineering Task Force (the guys who publish the RFCs). Microsoft should NOT be your only source of information, please!
I agree 372300 is a valid "even multiple of MSS", but I don't think it meets the RFC 1323 Window Scaling criteria (as above).
I have also been taught that the "multiple of MSS" rule is a 'myth'; however, I remain skeptical...
I think people can make a pretty good argument that RWIN should be an "EVEN multiple" of MSS, and not just an "even number, that's also multiple of MSS".
[therealcableguy, et al -- on the outside chance that you are a) still reading this, and b) not already completely bored stiff, stop reading NOW!!]
Here is an UN-authorized excerpt from EHS Company's Reading Rooms (www.ehsco.com):
"It's important to note the bits-in-flight value alone is not the optimal window size. You must also consider the maximum segment size (MSS) in use on the connection, and multiply that value by even numbers until you exceed the value derived for the maximum bits-in-flight. This is due to the way in which TCP's Delayed Acknowledgment algorithm refrains from sending acknowledgments until two fully sized TCP segments have been received. If the sender does not transmit enough data in even multiples, the recipient will not return acknowledgments quickly, and the exchange will be jerky.
For example, if the segment size is 1,460 bytes (common for Ethernet and PPP connections), the window must be an even multiple of 1,460 bytes. If the TCP window is an odd multiple of the MSS, eventually the recipient will not receive two full-sized segments, thereby holding off the acknowledgment until the delayed acknowledgment timer expires.
Actually, the default window value should be at least four times the MSS because a smaller receive window would not foster a steady data flow. If the receive window were only twice the MSS, the sender would transmit two segments and then stop to wait for an acknowledgment while the segments worked their way through the network. Once received, the recipient's acknowledgment would also have to return to the sender before more data could be sent, causing further delay. Having a default window size four times the size of the MSS would at least allow the sender to transmit four segments, with the last segment being sent just as the acknowledgment for the first two segments was being returned."
You're right; according to RFC 1323, the scale factor should be a power of 2 so it can be implemented with binary shift operations. That's why I agreed 373760 makes much more sense and changed it in the patches. I'm not sure how I managed to omit that earlier.
There are a couple of things that bother me in this excerpt you posted: "If the TCP window is an odd multiple of the MSS, eventually the recipient will not receive two full-sized segments, thereby holding off the acknowledgment until the delayed acknowledgment timer expires."
That is not entirely true for the maximum receive window, since the current receive window is a smaller buffer that "slides" within the max receive window, and it can't reach such a condition, IMHO.
The final window size is a multiple (odd in this case) of the lower of the two MSS values. It appears a multiple of the lower MSS is used that will create a window size not less than your initial window size. In this case, 1322 * 31 = 40982. I checked this behavior with a number of other sites and it held true.
Navas Cable Modem-DSL Tuning Guide says that for latency between 100 ms and 200 ms (normal latency--normal dsl, "good" cable), you should be using a 32k rwin for maximum performance! for latencies above 200 ms, 64k is good (for poor dsl or cable connections with higher latencies). anything above 64k is really only for cable users with ****TY latency readings and BIG pipes. check it out at http://cable-dsl.home.att.net/
Thanks TRS!! So "odd" multiples of MSS are "OK". But, does that mean they are as fast as even multiples?? Hmmm... if I had the time I could test it myself... But instead I spend my time answering other posts... Oh well, I know someone will address this issue.
[This message has been edited by rmrucker (edited 10-22-2000).]
If you follow Navas' ideas (sorry, I couldn't resist), then it is an urban myth.
However, if you actually do your research, you'll find that it's best and it makes a lot of sense to use a multiple of MSS.
BTW, if you search the Microsoft Knowledge Base, there are a number of references where it is clearly stated that RWIN should be a multiple of MSS for best results. Besides, if you read some RFCs, they all calculate RWIN based on MSS, and it is common sense to use a number that will accommodate whole segments. Here is a recent example from Microsoft: "the value should be obtained by rounding up the TCPWindowSize to a multiple of the Ethernet Maximum Segment Size (MSS), which is 1460 for Ethernet" <-- http://support.microsoft.com/support.../Q263/0/88.ASP
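The "rounding up to a multiple of MSS" step the KB article describes is just ceiling division. Here's a quick sketch (the function name `round_up_to_mss` is mine, for illustration only):

```python
def round_up_to_mss(window: int, mss: int = 1460) -> int:
    """Round a TCP window size up to the next whole multiple of MSS,
    as the quoted Microsoft KB article suggests (1460 = Ethernet MSS)."""
    return -(-window // mss) * mss  # ceiling division, then scale back up

# The classic 16-bit maximum, rounded up to whole Ethernet segments:
print(round_up_to_mss(65535))  # 45 * 1460 = 65700
```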
[This message has been edited by Philip (edited 10-21-2000).]
Philip, not meaning to antagonize, but instead trying to understand... (btw, I am back from my party and I had a great time! Assume I had too much wine, and ignore all typos!!).
"Well, a RWIN of 2^30 is ~1Gigabyte, there would be no benefit of reserving a 1GB RAM buffer for such a thing..."
OK, that misses my point. IF you believe that there is no such thing as "too high an RWIN", why in the world would you ever recommend any number less than the maximum? If Windows is going to scale down any large number, then you should give the largest number possible, and let it scale down until it fits your system/connection.
For example, why stop at 372,300 or even 373,760?? If bigger is better, why not 747,520? or twice that big: 1,495,040?
OR, if you take it to the ULTIMATE extreme: 1,073,741,824??? If Windows scales down to whatever fits, then your recommended RWIN is too damn low!! Max it out to 1 billion and let it scale itself down to whatever number you need!
If you cannot specify "too high an RWIN", then you are a sucker for NOT specifying a tremendously huge RWIN!!! What's the point? Make it as big as possible -- it doesn't matter!
However, if, on the contrary, there is such a thing as "too big an RWIN", then it behooves everyone NOT to choose a humongous RWIN.
The SetTcpWindowSize WMI class method is used to set the maximum TCP Receive Window size offered by the system. The receive window specifies the number of bytes a sender can transmit without receiving an acknowledgment. In general, larger receive windows improve performance over high delay and high bandwidth networks. For efficiency, the receive window should be an even multiple of the TCP Maximum Segment Size (MSS). This topic uses Visual Basic syntax.
Thanks dannjr. This seems to add support for the "EVEN multiple" of MSS theory...
I know this is probably a dumb thing to ask at this point, but I am still intrigued at how the number 372300 came about. It seems to be a specific design decision to not continue to double the numbers (186,880 is 2*93440). Was there a reason behind this, or was the decision completely random?
(I know, not real important, but more 'human interest' kinda stuff).
[This message has been edited by rmrucker (edited 10-22-2000).]
1) When the scaling option is disabled, the actual receive window will be a multiple of the lower of the two MSS values sent during the SYN and SYN/ACK. The actual receive window will be no less than the size of the desired receive window (the value you specified for the receive window in the registry).
2) If you are not going to use scaling but want to use the largest receive window size possible, you should choose a window size such that
WindowSize = [integer [65535 / MTU] - 1] * MTU
Ex. MTU = 1452
WindowSize = [integer [65535 / 1452] - 1] * 1452
WindowSize = [integer [45.1343] - 1] * 1452
WindowSize = [45 - 1] * 1452 = 44 * 1452 = 63888
This is important because if one had chosen 45 * 1452 (65340), the actual receive window would not be able to be at least as large as your desired receive window size unless both MTUs during handshaking were 1452.
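The formula from point 2) is easy to verify in Python (the function name `max_unscaled_window` is mine, just for illustration):

```python
def max_unscaled_window(mtu: int) -> int:
    """Largest no-scaling window per the formula above:
    WindowSize = (integer[65535 / mtu] - 1) * mtu.

    Backing off by one MTU leaves headroom so the stack can still
    round up to a whole multiple of a possibly smaller peer MTU
    without exceeding the 16-bit limit of 65535.
    """
    return (65535 // mtu - 1) * mtu

# Reproduce the worked example: MTU = 1452
print(max_unscaled_window(1452))  # 44 * 1452 = 63888
```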
3) When scaling is turned off, the multiple is not guaranteed to be an even multiple of the lower MTU. You will not have any control over this, as every host you handshake with could have a different size MTU.
4) When scaling is turned on, the receive window size present during the SYN has absolutely no bearing on the actual window size that will be used. Nor will the MTU values, as they do when scaling is turned off.
The actual window size is determined by the window size sent with the ACK packet in combination with the scaling factor sent during the SYN packet. The window size sent in the ACK segment is a right shift of the true window size by [scaling factor] bits.
Now Microsoft claims the actual size would be 65535 * 2^4 = 1048560. Internally to Windows this may be true. The actual bytes allocated may be 1048560, but the RFC (RFC 1323) makes no statements regarding 65535 as the initial window size value to use when the scaling factor is 1 - 14. The sending host would have no way of knowing this. The sending host would only know the initial value of 32767 as sent in each segment header from the receiver.
So what this all boils down to is that when scaling is used, the multiple-of-MSS issues go out the window. You will always lose the least significant [scaling factor] bits of whatever window size you enter into the registry.
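The bit-loss claim above is easy to demonstrate: the receiver advertises the window right-shifted by the scale factor in the 16-bit header field, and the sender reconstructs it by left-shifting, so any set bits below the scale factor vanish. A quick sketch (the name `scaled_window` is mine, for illustration):

```python
def scaled_window(desired: int, scale: int) -> tuple[int, int]:
    """Show what window scaling does to a desired window size.

    The receiver advertises desired >> scale in the 16-bit window
    field; the sender reconstructs advertised << scale, so the low
    `scale` bits of the desired value are simply lost.
    """
    advertised = desired >> scale
    effective = advertised << scale
    return advertised, effective

# A registry value of 100001 with scale factor 1: the odd low bit
# is dropped, and the effective window comes back as 100000.
print(scaled_window(100001, 1))  # (50000, 100000)
```

Whatever MSS alignment you chose in the registry can therefore be destroyed by the truncation, which is exactly the point made above.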
Ok. I've been silently reading and trying to learn something here, but I just have to ask. If my MTU is 1492 and my MSS is 1452, should my RWIN be an even multiple of my MSS, e.g. 1452*46=66792, or should I set my RWIN at 65535? My service is ADSL 1000/128. Thanx.
I've been double checking your numbers, and a lot of what you say makes sense. Could you teach me how to "packet-sniff" this information? I know there are packet-sniffing programs out there, but which are you using?
I am continuing to investigate this further. I think Tim is on to something.
Walk me through this...
If we now re-look at 372300, in Hex this would be 5AE4C. [I gather the appropriate notation of this is 0x5AE4C.]
The SYN packet advertises the window as the lower 16-bits, or 0xAE4C => 44620, and the scale factor in this case would be 3??
Because 372,300/65,535 = 5.6809... This rounds up to 8 (the next power of 2), or 2^3 (scale factor 3).
Now, would DSLR tweak test show the RWIN as 356960??? (44620 with a scale factor of 3?) (I'll check that out later).
Then the ACK packet would show 372300 right shifted three places, or 0xB5C9 = 46,537.
The true window size would then be 372296 (46,537*2^3).
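The arithmetic in this walk-through can be double-checked with a few lines of Python (variable names are mine, purely for illustration):

```python
# Reproduce the walk-through for RWIN = 372300 (0x5AE4C).
rwin = 372300

# Scale factor: smallest s such that (rwin >> s) fits in 16 bits.
scale = 0
while (rwin >> scale) > 0xFFFF:
    scale += 1

syn_window = rwin & 0xFFFF         # lower 16 bits shown in the SYN
ack_window = rwin >> scale         # value carried in later segments
true_window = ack_window << scale  # what the sender reconstructs

# 0x5ae4c, scale 3, SYN 44620, ACK 46537, true window 372296
print(hex(rwin), scale, syn_window, ack_window, true_window)
```

This confirms the numbers above: scale factor 3, 0xAE4C = 44620 in the SYN, 0xB5C9 = 46537 after the 3-bit right shift, and a true window of 372296 (the low 3 bits of 372300 are lost).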
Did I follow you through any of this, or am I just as lost as I ever was?
[This message has been edited by rmrucker (edited 10-23-2000).]