Xpra: Ticket #1700: faster damage processing - bandwidth constraints handling
Follow-up from #999. These classes are becoming complicated and slow.
TODO:
- run profiling again
- merge video source? (we never use window source on its own anyway)
- support multiple video regions?
- cythonize, use strongly typed and faster deque/ring buffers in Python/NumPy
- pre-calculate more values, like an ECU "engine map"
- more gradual refresh when under bandwidth constraints and at low quality: the jump from lossy to lossless can use up too much bandwidth, so maybe refresh at 80% quality first before doing a true lossless refresh
- use more bandwidth? (the macOS client could use higher quality?)
- slowly updating windows should be penalized less
- don't queue more frames for encoding after a congestion event (is this handled already?)
- maybe keep track of the refresh compressed size?
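The "strongly typed and faster ring buffers" item above could look something like the following minimal sketch. This is illustrative only, not xpra's actual code: the `RingBuffer` class name and its API are hypothetical, showing how a fixed-size NumPy-backed buffer can replace a `deque` for numeric samples (e.g. damage latency history).

```python
import numpy as np

class RingBuffer:
    """Fixed-size ring buffer backed by a NumPy array.

    Hypothetical sketch of the "faster deque" TODO item above;
    not part of xpra.
    """

    def __init__(self, capacity: int, dtype=np.float64):
        self.data = np.zeros(capacity, dtype=dtype)
        self.capacity = capacity
        self.index = 0
        self.filled = False

    def append(self, value) -> None:
        # overwrite the oldest slot once the buffer has wrapped around
        self.data[self.index] = value
        self.index = (self.index + 1) % self.capacity
        if self.index == 0:
            self.filled = True

    def values(self) -> np.ndarray:
        # return the stored samples, oldest first
        if self.filled:
            return np.concatenate((self.data[self.index:],
                                   self.data[:self.index]))
        return self.data[:self.index]

# example: keep the last 4 latency samples and average them
rb = RingBuffer(4)
for sample in (1.0, 2.0, 3.0, 4.0, 5.0):
    rb.append(sample)
print(rb.values())         # → [2. 3. 4. 5.]
print(rb.values().mean())  # → 3.5
```

The fixed dtype and preallocated storage avoid per-item Python object overhead, which is the point of swapping out `deque` here.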
See also #920: some things could be made faster on the GPU.
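The "more gradual refresh" TODO item above could be sketched as a quality ladder: instead of jumping straight from a lossy frame to a lossless refresh, schedule intermediate refreshes at increasing quality. The helper name and step size are hypothetical, not xpra's implementation.

```python
def refresh_ladder(quality: int, step: int = 20) -> list:
    """Return the sequence of refresh qualities to schedule,
    ending in a true lossless pass (100).

    Hypothetical sketch of the gradual-refresh idea; not xpra code.
    """
    steps = []
    q = quality + step
    while q < 100:
        steps.append(q)
        q += step
    # always finish with the lossless refresh
    steps.append(100)
    return steps

# a frame sent at quality 40 could be refreshed at 80% first,
# as the ticket description suggests, then go lossless:
print(refresh_ladder(40, 40))  # → [80, 100]
print(refresh_ladder(30))      # → [50, 70, 90, 100]
```

Under bandwidth constraints, each intermediate refresh spreads the cost that a single lossy-to-lossless jump would incur all at once.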
Sun, 26 Nov 2017 10:37:58 GMT - Antoine Martin: status, description, summary changed
- status
changed from new to assigned
- description
modified (diff)
- summary
changed from faster damage processing to faster damage processing - bandwidth constraints handling
Sun, 26 Nov 2017 12:02:41 GMT - Antoine Martin: description changed
- description
modified (diff)
Sun, 26 Nov 2017 14:14:55 GMT - Antoine Martin: description changed
- description
modified (diff)
See also #1761
Sat, 17 Feb 2018 07:18:12 GMT - Antoine Martin:
See also ticket:1769#comment:1: maybe we should round up all screen updates to ensure we can always use color subsampling and video encoders? Or only past a certain size, to limit the cost?
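Rounding up updates here refers to region dimensions: 4:2:0 chroma subsampling needs even width and height. A minimal sketch of such rounding (the helper name is hypothetical, not xpra's):

```python
def round_up_region(x: int, y: int, w: int, h: int, align: int = 2):
    """Expand a damage region so its origin and size are multiples
    of `align`, growing it rather than shrinking it.

    Illustrative helper for the comment above; not xpra code.
    Video encoders with 4:2:0 subsampling need even dimensions.
    """
    # align the origin downwards, then the far edge upwards
    x2 = x - (x % align)
    y2 = y - (y % align)
    w2 = ((x + w + align - 1) // align) * align - x2
    h2 = ((y + h + align - 1) // align) * align - y2
    return x2, y2, w2, h2

print(round_up_region(3, 5, 7, 9))  # → (2, 4, 8, 10)
```

Growing the region (never shrinking) guarantees the original damage area stays covered; the extra pixels re-encoded are the "cost" the comment proposes limiting by only doing this past a certain size.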
Thu, 08 Mar 2018 17:59:55 GMT - Antoine Martin: attachment set
- attachment
set to encoding-selection.png
profiling encoding selection
Thu, 08 Mar 2018 18:01:58 GMT - Antoine Martin:
After much profiling, it turns out that encoding selection is actually pretty fast already (see the encoding-selection.png attachment). So we're better off spending extra time choosing the correct encoding instead of trying to save time there: r18669.
Other micro-optimizations: r18667, r18668.
See also ticket:1299#comment:6: we seem to be processing the damage events fast enough (~0.25ms for do_damage), but maybe we're scheduling things too slowly when we get those damage storms?
Sat, 10 Mar 2018 09:37:37 GMT - Antoine Martin: milestone changed
- milestone
changed from 2.3 to 3.0
For the record, I've used this command to generate the call graphs:
python2 ./tests/scripts/pycallgraph -i damage -- start --start-child="xterm -ls" --no-daemon
Minor related fix: r18685.
Re-scheduling, as the profiling has shown that this is not a huge overhead after all.
Wed, 28 Mar 2018 05:15:46 GMT - Antoine Martin: milestone changed
- milestone
changed from 3.0 to 3.1
Wed, 20 Mar 2019 05:06:15 GMT - Antoine Martin: milestone changed
- milestone
changed from 3.1 to 4.0
Milestone renamed
Wed, 12 Feb 2020 05:16:38 GMT - Antoine Martin: milestone changed
- milestone
changed from 4.0 to 5.0
Sat, 23 Jan 2021 05:31:26 GMT - migration script:
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1700