Why?:
Attached is a test patch (it does not even compile) which shows how to get started. I don't have time to take this further right now.
work in progress yuv codec
I think the "co" (encoder) should also be split from the "dec" (decoder).
Splitting the encoders from the decoders is done (that part was very easy) in r3265.
Better patch: no longer using a yuvlib, all in Cython.
The plan is: separate the video backends into modules -
csc_* for colorspace conversion (csc_swscale; csc_nvcuda coming one day)
enc_* for encoding (enc_x264)
dec_* for decoding (dec_avcodec)
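A minimal sketch of how such a module split could be wired up at runtime. The loader below is hypothetical: load_backends, BACKEND_PREFIXES and the exact package path are assumptions for illustration, not the actual xpra API.

```python
import importlib

# Hypothetical loader sketch (not the actual xpra code): it just
# illustrates the csc_* / enc_* / dec_* module naming convention.
BACKEND_PREFIXES = ("csc_", "enc_", "dec_")

def load_backends(names, package="xpra.codecs"):
    """Try to import each backend module, skipping unavailable ones."""
    loaded = {}
    for name in names:
        if not name.startswith(BACKEND_PREFIXES):
            raise ValueError("unknown backend type: %s" % name)
        try:
            loaded[name] = importlib.import_module("%s.%s" % (package, name))
        except ImportError:
            # optional backends (e.g. csc_nvcuda) may simply be missing
            pass
    return loaded
```

The point of the convention is that optional backends can fail to import without taking the rest of the codec layer down with them.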
The patches on the C parts are done, with an improved API, and kept at https://github.com/ahuillet/video_backends.
Now we need to re-do the .pyx interfaces to make this work again.
Quirks:
Memory management:
We also need to deal with reads beyond the size of the buffer caused by swscale reading 8 (or more?) bytes at a time - valgrind output:
==8159== Thread 6:
==8159== Invalid read of size 8
==8159==    at 0xF7580E1: ??? (in /usr/lib64/libswscale.so.2.1.101)
==8159==    by 0xF750038: ??? (in /usr/lib64/libswscale.so.2.1.101)
==8159==    by 0xF756465: sws_scale (in /usr/lib64/libswscale.so.2.1.101)
==8159==    by 0xFFFC441: csc_image_rgb2yuv (x264lib.c:502)
==8159==    by 0xFFF71B5: __pyx_pw_4xpra_6codecs_4x264_7encoder_7Encoder_25compress_image (encoder.c:3033)
==8159==    by 0x35B1EDD280: PyEval_EvalFrameEx (ceval.c:4098)
==8159==    by 0x35B1EDCEF0: PyEval_EvalFrameEx (ceval.c:4184)
==8159==    by 0x35B1EDDCBE: PyEval_EvalCodeEx (ceval.c:3330)
==8159==    by 0x35B1E6D925: function_call (funcobject.c:526)
==8159==    by 0x35B1E49C0D: PyObject_Call (abstract.c:2529)
==8159==    by 0x35B1EDA08A: PyEval_EvalFrameEx (ceval.c:4411)
==8159==    by 0x35B1EDDCBE: PyEval_EvalCodeEx (ceval.c:3330)
==8159== Address 0x1acea03c is 630,732 bytes inside a block of size 630,736 alloc'd
==8159==    at 0x4A0887C: malloc (vg_replace_malloc.c:270)
==8159==    by 0x35B3A287BB: XGetImage (GetImage.c:82)
==8159==    by 0x158E52C6: __pyx_pw_4xpra_3x11_7gtk_x11_12gdk_bindings_13PixmapWrapper_3get_image (gdk_bindings.c:6555)
==8159==    by 0x35B1E49C0D: PyObject_Call (abstract.c:2529)
==8159==    by 0x35B1EDA08A: PyEval_EvalFrameEx (ceval.c:4411)
==8159==    by 0x35B1EDCEF0: PyEval_EvalFrameEx (ceval.c:4184)
==8159==    by 0x35B1EDDCBE: PyEval_EvalCodeEx (ceval.c:3330)
==8159==    by 0x35B1EDC2B2: PyEval_EvalFrameEx (ceval.c:4194)
==8159==    by 0x35B1EDDCBE: PyEval_EvalCodeEx (ceval.c:3330)
==8159==    by 0x35B1EDC2B2: PyEval_EvalFrameEx (ceval.c:4194)
==8159==    by 0x35B1EDDCBE: PyEval_EvalCodeEx (ceval.c:3330)
==8159==    by 0x35B1EDC2B2: PyEval_EvalFrameEx (ceval.c:4194)
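One common workaround for this class of over-read is to over-allocate each buffer so that wide SIMD loads past the nominal end still land inside the allocation. A minimal sketch, assuming a 16-byte alignment and 16 bytes of safety padding (the valgrind trace above shows an 8-byte read ending 4 bytes past the nominal data, so these values are deliberately generous assumptions):

```python
# Sketch only, not the actual xpra allocation code.
ALIGN = 16      # assumed alignment granularity for SIMD loads
PADDING = 16    # assumed worst-case over-read (valgrind showed 8 bytes)

def padded_size(rowstride, height):
    """Size to allocate for a plane of rowstride*height bytes."""
    size = rowstride * height
    # round up to the alignment boundary, then add the safety padding
    aligned = (size + ALIGN - 1) & ~(ALIGN - 1)
    return aligned + PADDING

def alloc_plane(rowstride, height):
    return bytearray(padded_size(rowstride, height))
```

This also dovetails with the question below about using separate memaligned buffers per plane: each plane would get its own padded allocation.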
New backend infrastructure WIP at https://github.com/ahuillet/xpra/commits/
Things to do:
latest work in progress
Please pull the latest changes I've sent. Updated list of remaining issues (help needed - although not all of these issues need to be dealt with before we merge into trunk, most of them will need doing at some point):
move ImageWrapper to a global location (no longer a server-only class)
add a free() method, so no real need for the currently very ugly code: merge with the XImageWrapper code, which does do a free()
Also, maybe we want/have to use separate memaligned buffers for each plane?
use FAST_BILINEAR if quality=0 and speed=100; do the re-init in set_quality/set_speed, which could be combined as: configure(quality, speed)
dec_avcodec should be able to handle vpx: make the decoder export the list of codec formats it handles (x264, vpx, etc)
rename rgb_format to pixel_format
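The set_quality/set_speed merge suggested in the list above could look like the sketch below. The SWS_* constants mirror libswscale's flag values, but the thresholds and the class layout are invented for illustration only:

```python
# libswscale scaler flags (values from swscale.h)
SWS_FAST_BILINEAR = 1
SWS_BILINEAR = 2
SWS_BICUBLIN = 0x40
SWS_ACCURATE_RND = 0x40000

def select_sws_flags(quality, speed):
    # quality=0 and speed=100: pick the cheapest scaler
    if quality == 0 and speed == 100:
        return SWS_FAST_BILINEAR
    # illustrative thresholds, not xpra's actual policy:
    flags = SWS_BICUBLIN if quality >= 50 else SWS_BILINEAR
    if quality >= 80:
        flags |= SWS_ACCURATE_RND
    return flags

class ColorspaceConverterSketch:
    """Hypothetical csc object showing the combined configure() call."""
    def __init__(self):
        self.flags = None

    def configure(self, quality, speed):
        new_flags = select_sws_flags(quality, speed)
        if new_flags != self.flags:
            self.flags = new_flags
            # ...this is where the swscale context would be re-created...
```

The advantage of a single configure(quality, speed) entry point is that the (potentially expensive) swscale re-init happens at most once per settings change instead of once per setter.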
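Making each decoder export the list of formats it handles (as suggested for dec_avcodec above) could be sketched as follows. The registry helper and the encoding lists are assumptions for illustration, not the real xpra API:

```python
# Sketch: each dec_* module would advertise what it can decode, so a
# generic registry can route streams without hardcoding module names.
def make_registry(decoders):
    """Map each advertised encoding to the first backend claiming it."""
    registry = {}
    for name, encodings in decoders.items():
        for encoding in encodings:
            registry.setdefault(encoding, name)
    return registry

# Illustrative data: avcodec can handle vpx streams as well as h264.
DECODERS = {
    "dec_avcodec": ["h264", "vp8"],
    "dec_vpx": ["vp8"],
}
```

With this shape, adding a new decoder is just adding a module that exports its encoding list; nothing else needs to change.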
Pull - from where?
Mostly done in:
Not closing this ticket yet; what is left to do:
ImageWrapper, and maybe refactor it with the XImageWrapper (vpx_img_wrap for vpx)
And maybe also:
More improvements:
and lots of cleanups in between.
Which only leaves needing fixing for this ticket:
OpenGL backing (ideally, having the ability to use RGBP pixel data directly; handling scaling would also be nice)
Other things we can deal with later (see also lists in previous comments):
make sure the codec_spec is more or less accurate (finding out later, when we try to use it, is more expensive)
And... scaling brought back an old bug:
non-existing PPS 0 referenced
After spending a lot of time debugging and inspecting the server-side code (which looks fine), I managed to reproduce it with client side debugging switched on and found:
paint_with_video_decoder: window dimensions have changed from (421, 374) to (420, 374)
paint_with_video_decoder: new <type 'xpra.codecs.dec_avcodec.decoder.Decoder'>(420,374,YUV420P)
paint_with_video_decoder: info={'width': 420, 'colorspace': 'YUV420P', 'type': 'x264', 'height': 374}
[h264 @ 0x7f2b8836af00] non-existing PPS 0 referenced
[h264 @ 0x7f2b8836af00] decode_slice_header error
[h264 @ 0x7f2b8836af00] no frame!
Error while decoding frame
2013-06-10 19:04:51,472 draw error
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/xpra/client/ui_client_base.py", line 1004, in _do_draw
    window.draw_region(x, y, width, height, coding, data, rowstride, packet_sequence, options, [record_decode_time])
  File "/usr/lib64/python2.7/site-packages/xpra/client/client_window_base.py", line 258, in draw_region
    self._backing.draw_region(x, y, width, height, coding, img_data, rowstride, options, callbacks)
  File "/usr/lib64/python2.7/site-packages/xpra/client/window_backing_base.py", line 348, in draw_region
    self.paint_with_video_decoder(x264_Decoder, "x264", img_data, x, y, width, height, options, callbacks)
  File "/usr/lib64/python2.7/site-packages/xpra/client/window_backing_base.py", line 258, in paint_with_video_decoder
    coding, len(img_data), width, height, options))
Exception: paint_with_video_decoder: x264 decompression error on 1044 bytes of picture data for 420x374 pixels, \
  options={'quality': 29, 'frame': 2, 'speed': 100, 'csc': 'YUV420P'}
For the record, what is happening here: we were sending the raw dimensions of the XImageWrapper we encode from as the dimensions the client can paint with. These new dimensions could get clipped by x264's size restrictions (even dimensions only) to the same size as the current encoder pipeline (i.e. when resizing a window to a size one pixel shorter), which means the current encoding pipeline was still valid. But the client-side code erroneously concluded that it needed a new decoder pipeline to deal with the packet, tore down the old pipeline, and so the new video packet referenced a frame that had been discarded.
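The failure mode described above can be reduced to a few lines: x264 clamps odd dimensions down to even ones, so two different window sizes can map to the same encoder size, and validity must be checked against the clamped size. Names here are illustrative only, not the actual xpra code:

```python
def clamp_to_encoder(w, h):
    # x264 with 4:2:0 subsampling requires even dimensions:
    # clear the low bit to round each dimension down
    return w & ~1, h & ~1

def pipeline_still_valid(pipeline_size, new_window_size):
    """Compare against the clamped size, not the raw window size.

    Resizing 421x374 -> 420x374 clamps to the same 420x374, so the
    existing encoder/decoder pipeline stays valid and must NOT be
    torn down (tearing it down is what caused "non-existing PPS 0").
    """
    return pipeline_size == clamp_to_encoder(*new_window_size)
```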
A somewhat ugly solution is to pass the real dimensions via client_options; see the attached patch. A better solution will be committed.
use the correct video size when decoding (hackish solution)
The correct solution for the client decoding error was merged in r3607 (note: this was not caused by scaling after all)
Also merged a workaround for GL window's lack of RGB support in r3612 (+ r3617 for OR windows)
As for finding the encoder limits, we'll deal with that when we find problems. (note: this needs to go in the codec_spec rather than finding out too late when we try to instantiate a new pipeline)
This is enough for now, will follow up in #350, etc
Note: all the csc/encoder info is now exported via xpra info:
window[2].csc=swscale
window[2].csc.dst_format=YUV444P
window[2].csc.dst_height=710
window[2].csc.dst_width=1364
window[2].csc.flags=BICUBLIN | SWS_ACCURATE_RND
window[2].csc.frames=6
window[2].csc.pixels_per_second=70503532
window[2].csc.src_format=BGRX
window[2].csc.src_height=710
window[2].csc.src_width=1364
window[2].csc.total_time_ms=82
window[2].encoder=x264
window[2].encoder.frames=6
window[2].encoder.height=710
window[2].encoder.pixels_per_second=23144142
window[2].encoder.preset=medium
window[2].encoder.profile=high444
window[2].encoder.src_format=YUV444P
window[2].encoder.total_time_ms=251
window[2].encoder.width=1364
And we also have the scores for figuring out what other encoding/csc options were available and how they ranked:
window[3].pipeline_option[0].csc=codec_spec(xpra.codecs.csc_swscale.colorspace_converter.ColorspaceConverter)
window[3].pipeline_option[0].encoder=codec_spec(xpra.codecs.enc_x264.encoder.Encoder)
window[3].pipeline_option[0].format=YUV444P
window[3].pipeline_option[0].score=48
window[3].pipeline_option[1].csc=codec_spec(xpra.codecs.csc_swscale.colorspace_converter.ColorspaceConverter)
window[3].pipeline_option[1].encoder=codec_spec(xpra.codecs.enc_x264.encoder.Encoder)
window[3].pipeline_option[1].format=YUV422P
window[3].pipeline_option[1].score=44
window[3].pipeline_option[2].csc=codec_spec(xpra.codecs.csc_swscale.colorspace_converter.ColorspaceConverter)
window[3].pipeline_option[2].encoder=codec_spec(xpra.codecs.enc_x264.encoder.Encoder)
window[3].pipeline_option[2].format=YUV420P
window[3].pipeline_option[2].score=26
How they are ranked depends on the current quality and speed settings, which are also available:
window[2].quality.cur=62
window[2].speed.cur=34
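A toy sketch of how such a ranking could depend on the quality/speed settings. The weights below are invented and far simpler than xpra's real scoring; they merely illustrate why richer subsampling (YUV444P) wins when quality outweighs speed, matching the ordering in the output above:

```python
def score_option(fmt, quality, speed):
    """Invented scoring: richer subsampling helps quality, costs speed."""
    quality_bonus = {"YUV420P": 0, "YUV422P": 40, "YUV444P": 80}[fmt]
    speed_penalty = {"YUV420P": 0, "YUV422P": 20, "YUV444P": 40}[fmt]
    # weight the bonus by the quality setting, the penalty by speed
    return quality * quality_bonus // 100 - speed * speed_penalty // 100

def rank_options(formats, quality, speed):
    """Return the candidate formats best-first for these settings."""
    return sorted(formats,
                  key=lambda f: score_option(f, quality, speed),
                  reverse=True)
```

With quality=62 and speed=34 (as in the info output above), this toy model also ranks YUV444P first; with quality=0 and speed=100 the ordering flips and YUV420P wins.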
(setting the correct milestone the work was completed in)
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/328