

Changes between Initial Version and Version 1 of Ticket #520, comment 2


Timestamp:
02/18/14 09:50:50
Author:
Antoine Martin
Comment:

  • Ticket #520, comment 2

    * #517 regressions: I believe this load balancing code should be safe from leaks, but I cannot be certain. The easiest test is to resize a fast-updating window (and, if possible, do that with two contexts that live on different cards), which should cause many encoder re-inits: destroying and creating new NVENC contexts for the new window sizes. The GPU's free memory should remain relatively constant throughout.
    * utilization: can we get close to 100% of encoding contexts used? (32 contexts per card... this will take a lot of clients and windows)
    * start multiple servers or use proxy encoding (#504): does the code still manage to allocate encoding contexts properly? (and fall back / retry as needed)
    * initial connection delay: having to initialize the CUDA context *after* the client connects means there will be an extra delay, even more so when there are multiple cards to probe. Is it bearable?
    etc..
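The allocation and fallback behaviour being exercised above can be sketched roughly as follows. This is a minimal illustration, not xpra's actual code: the `ContextPool` class and its method names are hypothetical, and only the per-card limit (32 contexts) comes from the ticket.

```python
# Hypothetical sketch of per-card NVENC context accounting with simple
# load balancing; all names are illustrative, not xpra's real API.
CONTEXTS_PER_CARD = 32  # per-card session limit mentioned in the ticket

class ContextPool:
    def __init__(self, num_cards):
        # free context slots remaining on each card
        self.free = {card: CONTEXTS_PER_CARD for card in range(num_cards)}

    def allocate(self):
        # pick the card with the most free contexts (balance the load)
        card = max(self.free, key=self.free.get)
        if self.free[card] == 0:
            # every card is exhausted: the caller must fall back
            # (e.g. to a software encoder) or retry later
            return None
        self.free[card] -= 1
        return card

    def release(self, card):
        # called when an encoder context is destroyed (e.g. window resize)
        self.free[card] += 1
```

With two cards, 64 allocations should succeed and spread across both cards, the 65th should fail (triggering the fallback path), and releasing a context should make its slot reusable; a resize maps to a `release` followed by an `allocate`, so free counts should return to their starting values afterwards, mirroring the "free memory stays constant" check above.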