#24 closed enhancement (fixed)
network read loop is highly inefficient
Reported by: | Antoine Martin | Owned by: | Antoine Martin |
---|---|---|---|
Priority: | major | Milestone: | 0.0.7.x |
Component: | client | Version: | 0.0.7.26 |
Keywords: | | Cc: |
Description
Whenever we read a bit of data from the socket (sometimes as little as a single character!) we schedule the main loop to call _handle_read. When the callback actually fires there may be only one real packet waiting (sometimes less than one), yet it will fire as many times as the socket read loop originally scheduled it.
We want to ensure we don't schedule it again if a call is already pending; this should save a lot of context switches and reduce load significantly. Using an atomic loop counter should probably be enough to achieve this.
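A minimal sketch of the idea, not xpra's actual code: the names ReadScheduler and data_arrived are made up for illustration, a lock-protected counter stands in for an atomic one, and a GLib-style idle_add is assumed as the main-loop scheduler.

```python
import threading

class ReadScheduler:
    def __init__(self, schedule_on_main_loop, handle_read):
        # schedule_on_main_loop: e.g. gobject.idle_add / GLib.idle_add
        self._schedule = schedule_on_main_loop
        self._handle_read = handle_read
        self._pending = 0
        self._lock = threading.Lock()

    def data_arrived(self):
        # Called from the socket read loop, possibly many times for tiny chunks.
        with self._lock:
            if self._pending:
                # a callback is already queued - no need for another one
                return
            self._pending += 1
        self._schedule(self._run_handle_read)

    def _run_handle_read(self):
        # Runs on the main loop: clear the pending flag, then drain the data.
        with self._lock:
            self._pending = 0
        self._handle_read()
        return False  # do not reschedule (idle_add convention)
```

However many times data_arrived fires between two main-loop iterations, only one _handle_read call is queued.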
Change History (3)
comment:1 Changed 9 years ago by
Resolution: | → fixed |
---|---|
Status: | new → closed |
comment:2 Changed 9 years ago by
Version: | → 0.0.7.26 |
---|
comment:3 Changed 6 weeks ago by
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/24
r167 improves this by not scheduling unless necessary (i.e. not if _handle_read has already been scheduled and not run yet). This uses optimistic locking and may still fire unnecessarily in rare cases (as before, so no harm done).
Python lacks atomic counters (what an omission!) so I added a simple CachedCounter class. The end result is better as we avoid many thread context switches (note how the "Queueing main thread call" happens once rather than for every read); however, the data is still passed to _handle_read in dribs and drabs, which means that each of these characters ends up as a single element on the read queue. A waste of space.
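For illustration only, a counter of the kind described; the actual CachedCounter added in r167 may well differ. This sketch just wraps an integer in a lock so that increments are thread-safe:

```python
import threading

class CachedCounter:
    """Sketch of a thread-safe counter (the real CachedCounter in r167 may differ)."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def increase(self):
        # increment atomically and return the new value
        with self._lock:
            self._value += 1
            return self._value

    def decrease(self):
        with self._lock:
            self._value -= 1
            return self._value

    def get(self):
        with self._lock:
            return self._value
```

Used for the optimistic scheduling above, the read loop would only queue a main-loop call when the counter shows no call is already pending.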
The easiest way to change that would be to give the protocol layer clear delimiters for the data, so that it could be buffered until ready for consumption (this is why WinSwitch uses a line protocol where each command is separated by a carriage return...). This would mean breaking compatibility with existing clients though, so it is unlikely to happen at present, unless it can be enabled selectively for clients that support it via the "capabilities" dict. Anyway, most packets are actually much bigger (e.g. damage requests) so this should not be a huge problem. Except maybe for pointer motion events?
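To make the buffering idea concrete, here is a hypothetical sketch of a WinSwitch-style delimiter buffer (none of these names come from the actual codebase): raw chunks are accumulated and only complete, delimiter-terminated commands are handed on, so tiny reads no longer become tiny queue entries.

```python
class LineBuffer:
    """Hypothetical delimiter-based receive buffer, not actual xpra/WinSwitch code."""

    def __init__(self, delimiter=b"\r"):
        # WinSwitch is described as separating commands with a carriage return
        self._delimiter = delimiter
        self._buf = b""

    def feed(self, data):
        """Append a raw chunk from the socket, return the complete commands found."""
        self._buf += data
        parts = self._buf.split(self._delimiter)
        # the last element is always the (possibly empty) incomplete remainder
        self._buf = parts.pop()
        return parts

# Example: three tiny reads only yield commands once their delimiters arrive
buf = LineBuffer()
assert buf.feed(b"move 10") == []
assert buf.feed(b" 20\rresi") == [b"move 10 20"]
assert buf.feed(b"ze 5\r") == [b"resize 5"]
```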
Closing as this is probably enough for now.