#132 closed defect (fixed)
idle_add is a latency bottleneck in damage_to_data
| Reported by: | Antoine Martin | Owned by: | Antoine Martin |
|---|---|---|---|
| Priority: | major | Milestone: | 0.4 |
| Component: | server | Version: | 0.3.x |
| Keywords: | | Cc: | |
Description (last modified)
When we get many small areas in damage requests, we may end up scheduling hundreds of calls to idle_add from damage_to_data, which causes the latency to shoot right up.
Solutions:
- Run the whole damage_to_data in one idle_add call - or even run it directly from damage (via damage_now or send_delayed). This would reduce the number of idle_add calls at the cost of making one longer idle_add call... (see the sketch at the end of this description)
- Keep track of how many idle_add calls are pending and use this measure to update the batching delay. (but this may still not allow the many small regions to be coalesced...)
- Prefer full frame refreshes over many small regions by increasing packet_cost and/or decreasing pixels_threshold

Or a combination of the above? Or??
Once done, we can deal with #135
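A minimal sketch of the first option, assuming a gobject main loop; the DamageCoalescer class and its callback names are illustrative only, not the actual xpra code:

```python
import gobject

class DamageCoalescer(object):

    def __init__(self, process_regions):
        self.process_regions = process_regions   # called with the list of pending regions
        self.pending = []                        # accumulated (x, y, w, h) tuples
        self.scheduled = False

    def damage(self, x, y, w, h):
        self.pending.append((x, y, w, h))
        if not self.scheduled:
            # only one idle_add is ever outstanding, however many regions arrive
            self.scheduled = True
            gobject.idle_add(self.flush)

    def flush(self):
        regions, self.pending = self.pending, []
        self.scheduled = False
        self.process_regions(regions)
        return False                             # one-shot: do not re-schedule
```

The trade-off noted above still applies: the single idle_add call now does all the work at once, so it may block the main loop for longer.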
Attachments (3)
Change History (10)
comment:1 Changed 9 years ago by
| Description: | modified (diff) |
|---|---|
| Status: | new → accepted |
comment:2 Changed 9 years ago by
Using this simple patch:
And running the xterm performance test, we can clearly see the idle_add latency shooting right up:
Then suddenly dropping back down... now need to figure out why!
comment:3 Changed 9 years ago by
A simple time.sleep(0) added in r884 ensures we don't queue too many idle_add calls before giving them a chance to run.
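For illustration only (this is not the actual r884 diff): since damage_to_data runs in its own thread, a time.sleep(0) between idle_add calls yields the GIL so the main loop thread can drain the callbacks that are already queued. The function and argument names below are hypothetical:

```python
import time
import gobject

def damage_to_data_worker(regions, send_region):
    # runs in a worker thread, not in the gobject main loop
    for region in regions:
        gobject.idle_add(send_region, region)   # send_region must return False (one-shot)
        time.sleep(0)   # yield the GIL so the main loop thread can service the callback
```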
A better solution is probably to remove the threading from damage_to_data and to keep track of the idle_add latency as a measure of how smoothly the system is running, and batch more when it goes too high?
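A minimal sketch of that idea, assuming a gobject main loop; the class name, the 50ms threshold and the delay bounds are illustrative, not the actual implementation: record when each callback is scheduled, compare with when it actually runs, and grow the batching delay when the gap gets too large.

```python
import time
import gobject

class DamageBatcher(object):

    def __init__(self):
        self.batch_delay = 5          # ms; grows when the main loop is congested
        self.idle_add_latency = 0.0   # last measured scheduling delay, in seconds

    def schedule(self, callback, *args):
        scheduled_at = time.time()
        def wrapper():
            # how long the main loop took to get around to running us
            self.idle_add_latency = time.time() - scheduled_at
            if self.idle_add_latency > 0.05:
                # loop is congested: batch more aggressively
                self.batch_delay = min(100, self.batch_delay * 2)
            else:
                self.batch_delay = max(5, self.batch_delay - 1)
            callback(*args)
            return False              # one-shot callback
        gobject.idle_add(wrapper)
```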
Changed 9 years ago by
| Attachment: | xpra-unthreadeddamage.patch added |
|---|---|
removes threading from damage codepath
Changed 9 years ago by
| Attachment: | xpra-unthreadeddamage-addlatencymeasurements.patch added |
|---|---|
updated patch which dumps the damage latency at various points in the pipeline
comment:4 Changed 9 years ago by
With this code in _get_rgb_rawdata:

    pixbuf = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, width, height)
    log.info("get_rgb_rawdata(..) creating pixbuf object took %s ms", int(1000*(time.time()-start)))
    pixbuf.get_from_drawable(pixmap, pixmap.get_colormap(), x, y, 0, 0, width, height)
    log.info("get_rgb_rawdata(..) pixbuf.get_from_drawable took %s ms", int(1000*(time.time()-start)))
    raw_data = pixbuf.get_pixels()
    log.info("get_rgb_rawdata(..) pixbuf.get_pixels took %s ms", int(1000*(time.time()-start)))
    rowstride = pixbuf.get_rowstride()
    log.info("get_rgb_rawdata(..) took %s ms", int(1000*(time.time()-start)))
We can see:

    get_rgb_rawdata(..) creating pixbuf object took 0 ms
    get_rgb_rawdata(..) pixbuf.get_from_drawable took 182 ms
    get_rgb_rawdata(..) pixbuf.get_pixels took 183 ms
    get_rgb_rawdata(..) took 184 ms
So the expensive call is pixbuf.get_from_drawable() in gtk.
Solutions:
- use the X11 pixmap directly/unwrapped (tricky? see the ctypes sketch after this list)
- ???
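A rough ctypes sketch of the first option; XGetImage is a real Xlib call, but the glue around it (how the Display pointer and pixmap XID are obtained) is assumed here, and a real implementation would probably want XShmGetImage for performance:

```python
import ctypes

xlib = ctypes.CDLL("libX11.so.6")

# XImage *XGetImage(Display*, Drawable, int x, int y,
#                   unsigned int w, unsigned int h,
#                   unsigned long plane_mask, int format)
xlib.XGetImage.argtypes = [ctypes.c_void_p, ctypes.c_ulong,
                           ctypes.c_int, ctypes.c_int,
                           ctypes.c_uint, ctypes.c_uint,
                           ctypes.c_ulong, ctypes.c_int]
xlib.XGetImage.restype = ctypes.c_void_p

ZPixmap = 2
ALL_PLANES = 0xFFFFFFFF   # plane mask covering depths up to 32

def get_raw_image(display_ptr, pixmap_xid, x, y, width, height):
    # returns an XImage* whose 'data' member holds the raw pixels;
    # the caller must free it with XDestroyImage()
    return xlib.XGetImage(display_ptr, pixmap_xid, x, y, width, height,
                          ALL_PLANES, ZPixmap)
```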
Changed 9 years ago by
| Attachment: | xpra-protocol-addlatencymeasurements2.patch added |
|---|---|
patch for protocol.py so that the reported latency is measured when the packet has actually been sent by the network layer
comment:5 Changed 9 years ago by
r886 commits the unthreaded damage code and adds the latency measurement via "xpra info", see changeset commit message for details
comment:6 Changed 9 years ago by
| Resolution: | → fixed |
|---|---|
| Status: | accepted → closed |
good enough in trunk with the latency minimization code from #153
comment:7 Changed 6 weeks ago by
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/132