Split from #909.
The best explanation of the changes required can be found in https://www.nvidia.com/docs/IO/40049/TB-04701-001_v02_new.pdf, see 30-Bit Visual on Linux.
We'll need to tell the server we want 10-bit colour, maybe advertise a new YUV or RGB upload mode.
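For reference, the NVIDIA document boils down to setting the default depth to 30 in the X server configuration; a minimal sketch (the section identifiers are just placeholders):

    Section "Screen"
        Identifier   "Screen0"
        Device       "Device0"
        DefaultDepth 30
    EndSection

The X server then runs with a 30-bit root visual, i.e. 10 bits per colour channel.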
gl_check.py output
With r15015, running xpra/client/gl/gl_check.py against a 30-bit display I get attachment/ticket/1309/gl_check.txt, which shows:
* blue-size : 10
* red-size : 10
* green-size : 10
* depth : 30
So we can detect support for 30-bit colour, i.e. 10 bits per channel.
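For reference, a minimal PyOpenGL sketch of this kind of detection, assuming an OpenGL context has already been made current on the display being probed:

    from OpenGL.GL import glGetIntegerv, GL_RED_BITS, GL_GREEN_BITS, GL_BLUE_BITS

    # query the bit sizes of the current framebuffer (legacy-profile queries)
    red = glGetIntegerv(GL_RED_BITS)
    green = glGetIntegerv(GL_GREEN_BITS)
    blue = glGetIntegerv(GL_BLUE_BITS)
    # a 30-bit visual reports 10 for each colour channel
    print("red-size=%s green-size=%s blue-size=%s" % (red, green, blue))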
And r15018 handles 30-bit modes with native 30-bit upload: "r210" == "GL_UNSIGNED_INT_2_10_10_10_REV".
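As an illustration only (not the actual xpra upload code), packed 10-bit pixel data can be handed to OpenGL like this - the GL_RGB10_A2 internal format and BGRA component order are assumptions here:

    from OpenGL.GL import (glTexImage2D, GL_TEXTURE_2D, GL_RGB10_A2,
                           GL_BGRA, GL_UNSIGNED_INT_2_10_10_10_REV)

    def upload_r210(width, height, pixel_data):
        # pixel_data: width*height 32-bit words, 10 bits per colour channel
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, width, height, 0,
                     GL_BGRA, GL_UNSIGNED_INT_2_10_10_10_REV, pixel_data)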
r15019 fixes swapped red and blue channels (oops). r15026 allows us to prefer the high bit depth "r210" plain rgb encoding when the client is using 10-bit depth rendering (jpeg and video encodings will still be used for lossy packets).
r15027 shows the bit depth on session info (normal bit depth is 24):
We could probably handle "R210" the same way (as "GL_UNSIGNED_INT_2_10_10_10"), but since I don't have hardware to test with, this is not supported.
@afarr: FYI, we can handle high color depth displays (only tested on Linux).
PS: r15094 fixes opengl rendering, which broke because our hacked pygtkglext library is missing the "get_depth" method. OSX clients will not support high bit depths until this is fixed: #1443
See new wiki page: wiki/ImageDepth
Realistically, we won't be able to test this until we get proper hardware for it. And even then, I have no idea what said proper hardware will be.
@antoine - some input as to what we should be testing with would be nice, but I wouldn't hold my breath on us actually getting said equipment if it involves asking for new hardware.
You may already have all you need:
More nvidia info here: 10-bit per color support on NVIDIA Geforce GPUs
Actually verifying that you are rendering at 10 bits per colour is a bit harder: run the client with --encodings=rgb and verify that paint packets come through with the "r210" rgb pixel format.
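For example (host and display number are placeholders), running the server with compression debugging and attaching an rgb-only client should show 'r210' as the rgb_format in the compress log entries:

    xpra start :100 -d compress
    xpra attach ssh:user@host:100 --encodings=rgb --opengl=yes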
Edit: AMD’s 10-bit Video Output Technology seems to indicate that 10-bit color requires a "firepro" workstation card
Updates and fixes:
* XPRA_FORCE_HIGH_BIT_DEPTH=1 env var
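Presumably this forces the deep colour code path even when the display does not advertise it (what exactly it overrides is an assumption here); usage would look something like this, with the rest of the command line just an example:

    XPRA_FORCE_HIGH_BIT_DEPTH=1 xpra attach ssh:user@host:100 --opengl=yes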
The test application is ready in #1553, but it's not really easy to use because it requires opengl... virtualgl can't handle the "r210" pixel format, and the software gl renderer doesn't support it either.
So in order to test, I had to run the xpra server against my main desktop with the nvidia driver configured at 10 bpc.
Then connect a client... and the only client I had available for testing was a Windows 7 system, and MS Windows doesn't do 10 bpc with consumer cards, so I had to swap cards. Then the monitor it was connected to didn't handle 10 bpc, so I had to swap that. Then the cables were too short. Then I had to make fixes (see this ticket and many other fixes yesterday - bugs you only hit with --use-display, for example...).
TLDR: hard to test!
r16303: the "pixel-depth" option can now be used to force the opengl client to use deep color (use any value higher than 30) - even if the display doesn't claim to render deep color.
ie: running the server with --pixel-depth=30 -d compress, and a linux opengl client with --pixel-depth=30 --opengl=yes, I see:
compress: 0.1ms for 499x316 pixels at 0,0 for wid=1 using rgb24 with ratio 1.6% (615KB to 9KB), sequence 5, client_options={'lz4': 1, 'rgb_format': 'r210'}
Note the "r210" rgb format. Same result if the client is running on a 30-bit display with --pixel-depth=0 (the default).
Whereas if the client runs on a 24-bit display, or if we force disable deep color with --pixel-depth=24, then we see:
compress: 1.4ms for 499x316 pixels at 0,0 for wid=1 using rgb24 with ratio 1.3% (615KB to 7KB), sequence 3, client_options={'lz4': 1, 'rgb_format': 'RGB'}
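For reference, "r210" packs each pixel into a single 32-bit word, 10 bits per colour channel plus 2 spare bits; the exact channel order below (blue in the low bits, matching GL_UNSIGNED_INT_2_10_10_10_REV with a BGRA component order) is assumed for illustration:

    def unpack_r210(word):
        # assumed layout: bits 0-9 blue, 10-19 green, 20-29 red, 30-31 unused/alpha
        blue  = word         & 0x3ff
        green = (word >> 10) & 0x3ff
        red   = (word >> 20) & 0x3ff
        return red, green, blue

    # example: a full-intensity red pixel (10-bit value 1023)
    assert unpack_r210(1023 << 20) == (1023, 0, 0)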
Remaining issues:
Updates:
With these changes, it is now much easier to:
For macos, see also #1443
Tested on win32 (no luck) and Linux (OK) as part of #1553, for macos testing: #1443. Closing.
opengl applications running through virtualgl currently require this patch: ticket:1577#comment:2
NVIDIA @ SIGGRAPH 2019: NV to Enable 30-bit OpenGL Support on GeForce/Titan Cards: At long last, NVIDIA is dropping the requirement to use a Quadro card to get 30-bit (10bpc) color support on OpenGL applications; the company will finally be extending that feature to GeForce and Titan cards as well.
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1309