Increasing TCP Window Size
I have some doubts about increasing the TCP window size in an application. In my C++ software application, we send data packets of around 1 KB from client to server over a blocking TCP/IP socket. Recently I came across the concept of TCP window size, so I tried increasing the value to 64 KB for both SO_SNDBUF and SO_RCVBUF using setsockopt(). After increasing these values, I see some improvement in performance on WAN connections but not on LAN connections.
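For reference, the buffer-size change described above might look roughly like the sketch below (the descriptor name sock and the error handling are assumptions, not taken from the original code):

    #include <sys/socket.h>
    #include <cstdio>

    // Ask the stack to reserve 64 KB for the socket's send and receive
    // buffers. Note: this resizes the kernel's per-socket buffers, not
    // the TCP window itself (see the answer below).
    bool raise_socket_buffers(int sock, int bytes = 64 * 1024)
    {
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) != 0) {
            std::perror("setsockopt(SO_SNDBUF)");
            return false;
        }
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) != 0) {
            std::perror("setsockopt(SO_RCVBUF)");
            return false;
        }
        return true;
    }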
As I understand TCP window size: the client sends data packets to the server, and once a full window is outstanding it waits for an ACK for the earliest unacknowledged packet in the window before sending more. On a WAN connection, ACKs from the server are delayed by the RTT latency of around 100 ms, so increasing the TCP window size compensates for the ACK wait time and thereby improves performance.
I want to understand how the performance improves in my application.
In my application, even though the TCP window size (both send and receive buffers) is increased using setsockopt at the socket level, we still keep the same packet size of 1 KB (i.e., the number of bytes we send from client to server in a single socket send). We also disabled the Nagle algorithm (the built-in option that consolidates small packets into a larger one, thereby avoiding frequent socket calls).
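Disabling Nagle is normally done with the TCP_NODELAY option; a minimal sketch (again, the descriptor name is an assumption):

    #include <sys/socket.h>
    #include <netinet/in.h>   // IPPROTO_TCP
    #include <netinet/tcp.h>  // TCP_NODELAY

    // Turn off Nagle's algorithm so each 1 KB write is sent immediately
    // instead of being coalesced with later writes.
    void disable_nagle(int sock)
    {
        int flag = 1;
        setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
    }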
My doubts are as follows:
1. Since I am using a blocking socket, each 1 KB send should block if an ACK has not yet come back from the server. How, then, does performance improve after increasing the TCP window size on the WAN connection alone? If I have misunderstood the concept of TCP window size, please correct me.
2. To send 64 KB of data, I believe I still need to call the socket send function 64 times (since I send 1 KB per call through the blocking socket), even though I increased the TCP window size to 64 KB. Please confirm this.
3. What is the maximum TCP window size with window scaling enabled, as specified by RFC 1323?
My English is not very good; if anything above is unclear, please let me know.
Answer
First of all, there is a big misconception evident from your question: that the TCP window size is what is controlled by SO_SNDBUF and SO_RCVBUF. This is not true.
In a nutshell, the TCP window size determines how much follow-up data (packets) your network stack is willing to put on the wire before receiving acknowledgement for the earliest packet that has not been acknowledged yet.
The TCP stack has to live with and account for the fact that once a packet has been determined to be lost or mangled during transmission, every packet sent, from that one onwards, has to be re-sent since packets may only be acknowledged in order by the receiver. Therefore, allowing too many unacknowledged packets to exist at the same time consumes the connection's bandwidth speculatively: there is no guarantee that the bandwidth used will actually produce anything useful.
On the other hand, not allowing multiple unacknowledged packets at the same time would simply kill the bandwidth of connections that have a high bandwidth-delay product. Therefore, the TCP stack has to strike a balance between using up bandwidth for no benefit and not driving the pipe aggressively enough (and thus allowing some of its capacity to go unused).
The TCP window size determines where this balance is struck.
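To make the bandwidth-delay product concrete with the question's 100 ms RTT (the link speed here is an assumed figure, purely for illustration): a 10 Mbit/s path with a 100 ms RTT can hold 10,000,000 bit/s × 0.1 s = 1,000,000 bits ≈ 122 KB in flight, so the sender needs roughly 122 KB of unacknowledged data outstanding to keep the pipe full, and a window smaller than that leaves capacity unused.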
SO_SNDBUF and SO_RCVBUF, by contrast, control the amount of buffer space that the network stack reserves for servicing your socket. These buffers serve, respectively, to accumulate outgoing data that the stack has not yet been able to put on the wire, and data that has been received from the wire but not yet read by your application.
If one of these buffers is full you won't be able to send or receive more data until some space is freed. Note that these buffers only affect how the network stack handles data on the "near" side of the network interface (before they have been sent or after they have arrived), while the TCP window affects how the stack manages data on the "far" side of the interface (i.e. on the wire).
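If you want to verify what was actually reserved, you can read the values back with getsockopt(); a sketch (note that Linux, for instance, reports back double the requested value because it accounts for bookkeeping overhead):

    #include <sys/socket.h>
    #include <cstdio>

    // Print the buffer sizes the kernel actually granted for this socket.
    void print_buffer_sizes(int sock)
    {
        int snd = 0, rcv = 0;
        socklen_t len = sizeof(snd);
        getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &snd, &len);
        len = sizeof(rcv);
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcv, &len);
        std::printf("SO_SNDBUF=%d SO_RCVBUF=%d\n", snd, rcv);
    }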
As for your first doubt: no. If that were the case, you would incur a round-trip delay for each packet sent, which would totally destroy the bandwidth of high-latency connections. The stack keeps sending as long as the window (and send buffer) permits; it does not wait for an ACK after each 1 KB send.
As for your second doubt: yes, but that has nothing to do with either the TCP window size or the size of the buffers allocated to that socket.
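To illustrate, the 1 KB-per-call pattern from the question might look like the sketch below (names and error handling are assumptions). Each send() returns as soon as the data has been copied into the kernel's send buffer; it blocks only while that buffer is full, never while waiting for a per-packet ACK:

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstddef>

    // Send `len` bytes in 1 KB chunks: 64 calls for 64 KB of data.
    bool send_in_1k_chunks(int sock, const char* data, size_t len)
    {
        const size_t kChunk = 1024;
        size_t off = 0;
        while (off < len) {
            size_t n = (len - off < kChunk) ? (len - off) : kChunk;
            ssize_t sent = send(sock, data + off, n, 0);
            if (sent <= 0)
                return false;  // real code would retry on EINTR, check errno
            off += static_cast<size_t>(sent);
        }
        return true;
    }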
As for the third doubt: according to all the sources I have been able to find, window scaling allows the window to reach a maximum of 1 GB. RFC 1323 caps the scale factor at 14, so the largest advertisable window is 65,535 × 2^14 bytes, which is just under 1 GB.