#2130 closed enhancement (fixed)
TCP_CORK
| Reported by: | Antoine Martin | Owned by: | Smo |
| --- | --- | --- | --- |
| Priority: | major | Milestone: | 2.5 |
| Component: | network | Version: | 2.4.x |
| Keywords: | | Cc: | |
Description
Related to #619 and ticket:2121#comment:5.
We know when there are more chunks to be written to the socket, so we can use TCP_CORK.
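The idea can be sketched as follows (a minimal illustration, not the actual xpra implementation; the payloads and loopback socket setup are made up). The socket is corked while the known chunks are written, then uncorked so the kernel flushes everything as full-sized frames. `TCP_CORK` is Linux-only:

```python
import socket

# TCP_CORK is Linux-specific; fall back to its Linux value (3) if the
# constant is missing from this Python build's socket module.
TCP_CORK = getattr(socket, "TCP_CORK", 3)

def set_cork(sock, corked):
    # While corked, the kernel holds back partial frames and only sends
    # full MSS-sized segments; uncorking flushes whatever remains.
    sock.setsockopt(socket.IPPROTO_TCP, TCP_CORK, 1 if corked else 0)

# Toy loopback connection to demonstrate coalesced sends:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

set_cork(client, True)            # we know more chunks are coming
client.sendall(b"packet-header")  # e.g. xpra packet metadata
client.sendall(b"pixel-data")     # e.g. encoded pixels
set_cork(client, False)           # flush both chunks together

data = b""
while len(data) < len(b"packet-headerpixel-data"):
    data += conn.recv(1024)
client.close(); conn.close(); server.close()
```

Without the cork, the two `sendall` calls could each trigger a separate (possibly tiny) TCP frame; with it, the kernel is free to pack them into one.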
Attachments (1)
Change History (11)
comment:1 Changed 2 years ago by
Owner: changed from Antoine Martin to J. Max Mena
comment:2 Changed 2 years ago by
Owner: changed from J. Max Mena to Jonathan Anthony
comment:3 Changed 21 months ago by
Owner: changed from Jonathan Anthony to Smo
comment:4 Changed 20 months ago by
Owner: changed from Smo to Antoine Martin
What do you think of this as a baseline for testing with bandwidth constraints?
I think I may need longer tests for these; I was only testing with the rgb and auto encodings.
comment:5 Changed 20 months ago by
Owner: changed from Antoine Martin to Smo
I don't see the raw test data; what was the bandwidth constraint used?
Looks like the gtkperf test failed with CORK=0 (no data).
The only surprise so far is that the max-damage-latency is quite a bit higher and the min-quality is lower. It could be that using TCP_CORK makes the network layer push back more aggressively, which is not a bad thing (this fits with the lower max-batch-delay, which is more coarse-grained than damage-latency).
We would need to get the round-trip latency figures to verify that.
As per ticket:619#comment:27, it would be useful to combine CORK with NODELAY.
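On Linux the two options can be combined: NODELAY disables Nagle's algorithm so that uncorking flushes immediately, while CORK still coalesces writes for as long as it is held. A hypothetical sketch of the combination (the helper names here are made up, not xpra's API):

```python
import socket

TCP_CORK = getattr(socket, "TCP_CORK", 3)  # Linux-only; 3 is the Linux value

def configure(sock):
    # NODELAY ensures nothing lingers in the kernel once the cork
    # is released:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

def send_burst(sock, chunks):
    # Cork around a burst of related writes, uncork to flush them
    # all at once:
    sock.setsockopt(socket.IPPROTO_TCP, TCP_CORK, 1)
    try:
        for chunk in chunks:
            sock.sendall(chunk)
    finally:
        sock.setsockopt(socket.IPPROTO_TCP, TCP_CORK, 0)

# Toy loopback connection to exercise the helpers:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()
configure(client)
send_burst(client, [b"meta", b"data"])
received = b""
while len(received) < len(b"metadata"):
    received += conn.recv(64)
nodelay = client.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
client.close(); conn.close(); server.close()
```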
comment:6 Changed 20 months ago by
Resolution: → fixed
Status: new → closed
Done in r21517. (Note: no support for BSD OS variants - patches welcome.)
This results in aggregated TCP packet chunks for large packets (ie: "draw"), including on websockets and SSL.
As per the example in ticket:2121#comment:5, the PNG pixel data and the xpra packet metadata no longer require an extra TCP frame.
As per ticket:619#comment:29, this is better, even without measuring the effect on end-to-end latency, which should also be improved.
@maxmylin: Just like #619, this should result in slightly lower bandwidth / better bandwidth utilization, and lower latency. This is most noticeable on low-bandwidth setups.
comment:8 Changed 20 months ago by
The charts are now available here: https://xpra.org/stats/nodelay-cork/.
comment:9 Changed 19 months ago by
This option can now be enabled on a per-socket basis: ticket:2424#comment:1.
comment:10 Changed 3 months ago by
This ticket has been moved to: https://github.com/Xpra-org/xpra/issues/2130
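The cork/uncork pattern around a metadata + pixel-data burst discussed in this ticket can also be wrapped in a small reusable helper. A hypothetical Python sketch, not xpra's actual code (Linux-only, as with the rest of this ticket):

```python
import socket
from contextlib import contextmanager

TCP_CORK = getattr(socket, "TCP_CORK", 3)  # Linux-only; 3 is the Linux value

@contextmanager
def corked(sock):
    # Hold the cork for a burst of related writes (e.g. packet metadata
    # followed by pixel data), then flush them together on exit.
    sock.setsockopt(socket.IPPROTO_TCP, TCP_CORK, 1)
    try:
        yield sock
    finally:
        sock.setsockopt(socket.IPPROTO_TCP, TCP_CORK, 0)

# Usage on a toy loopback connection:
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()
with corked(client):
    client.sendall(b"metadata")    # xpra packet metadata (example payload)
    client.sendall(b"png-pixels")  # PNG pixel data (example payload)
received = b""
while len(received) < len(b"metadatapng-pixels"):
    received += conn.recv(1024)
client.close(); conn.close(); server.close()
```

The `finally` clause matters: the socket must be uncorked even if a write fails, otherwise later small packets could be delayed by up to 200ms by the kernel's cork timeout.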