Using xprax264 and some Cython glue code, it shouldn't be too hard to get x264-encoded bytes from an rgb24 screen grab.
Code to follow.
Adds an x264 encoder/decoder library and Cython glue, and makes distutils build it.
Updated the library: https://github.com/ahuillet/xprax264/
We need to create one encoder per window, i.e. one encoder context per window.
init_encoder(width, height) will now return a pointer to an opaque structure (the encoder context). Create one for each window.
Same goes for init_decoder on the client side.
Then, compression is done by passing the context as the first argument to compress_image. Same goes for decompression.
Note that you cannot compress a subpart of the image: the full window must be encoded.
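The context-per-window lifecycle above can be sketched in pure Python. This is only an illustration of the bookkeeping: `_FakeContext` stands in for the opaque pointer the real `init_encoder(width, height)` returns, and the real `compress_image` goes through the Cython glue to x264; all names here are stand-ins, not the actual bindings.

```python
class _FakeContext:
    """Stand-in for the opaque C encoder context returned by init_encoder()."""
    def __init__(self, width, height):
        self.width = width
        self.height = height

def init_encoder(width, height):
    # the real code returns a pointer to an opaque x264 encoder context
    return _FakeContext(width, height)

encoders = {}   # window id -> encoder context (one per window)

def get_encoder(wid, width, height):
    # create the context lazily, on the first frame for this window
    ctx = encoders.get(wid)
    if ctx is None:
        ctx = init_encoder(width, height)
        encoders[wid] = ctx
    return ctx

def compress_image(ctx, rgb24_data):
    # the context is always the first argument; and since sub-rectangles
    # cannot be compressed, the caller must supply the full window frame
    expected = ctx.width * ctx.height * 3
    assert len(rgb24_data) == expected, "the full window must be encoded"
    return b"h264-bytes"   # placeholder for the encoded output
```

Decompression on the client side mirrors this: one decoder context per window, passed as the first argument to the decompress call.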
v2 of the patch with support for up to 32 contexts
The patch is a bit ugly: I couldn't find a way of using dicts (or maps) in Cython, so I used Python dicts to map to the index of a Cython/C array. Also, I had to use (void *) instead of (x264lib_ctx *) to avoid compilation errors. Apart from that, it seems to work.
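The workaround described above - a Python dict mapping window ids to slots in a fixed-size C array of contexts - looks roughly like this sketch. `MAX_CONTEXTS`, `allocate_slot` and `free_slot` are invented names for illustration, not the patch's actual identifiers:

```python
MAX_CONTEXTS = 32
contexts = [None] * MAX_CONTEXTS   # stands in for the C array of (void *)
slot_for_wid = {}                  # Python dict: window id -> array index

def allocate_slot(wid):
    """Return the context slot for this window, allocating one if needed."""
    if wid in slot_for_wid:
        return slot_for_wid[wid]
    for i in range(MAX_CONTEXTS):
        if contexts[i] is None:
            contexts[i] = object()   # the real code stores the (void *) context
            slot_for_wid[wid] = i
            return i
    raise RuntimeError("all %d encoder contexts are in use" % MAX_CONTEXTS)

def free_slot(wid):
    """Release the slot when the window goes away."""
    i = slot_for_wid.pop(wid)
    contexts[i] = None   # the real code would free the C context first
```

This explains the 32-context limit in v2 of the patch: the array is statically sized, so running out of slots is a hard error.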
Note: please update to latest trunk, which has moved the actual window drawing code to xpra/window_backing.py - you may just implement it for PixmapBacking (gtk2) if you like.
Then add code like this to the encoding section of xpra/scripts/main.py:
try:
    from xpra.x264 import codec
    ENCODINGS.append("x264")
except:
    pass
Either to the non-is_gtk3() codepath, or to the common case if drawing is implemented for both.
patch with all the changes to client/server/main
now using proper cython classes to simplify code
many fixes, this almost works
fix for rowstride, very close..
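For context on the rowstride fix mentioned above: each row of an rgb24 buffer may be padded (commonly to a 4-byte boundary), so row y starts at `y * rowstride`, not `y * width * 3`. Ignoring this shears the image. A hypothetical helper (not xpra's actual code) that strips the padding before handing pixels to the encoder:

```python
def strip_row_padding(data, width, height, rowstride, bytes_per_pixel=3):
    """Repack a padded pixel buffer into tightly packed rows."""
    row_bytes = width * bytes_per_pixel
    if rowstride == row_bytes:
        return data                      # already tightly packed
    rows = []
    for y in range(height):
        start = y * rowstride            # row origin honours the stride
        rows.append(data[start:start + row_bytes])
    return b"".join(rows)
```

The alternative (which avoids the copy) is to tell the encoder's scaler the stride directly, e.g. via the stride arguments of swscale.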
committed in r642, remaining issues: init(w,h)? or separate call? etc.
Todo: try to tell py2exe to invoke Cython on the x264 bindings.
r660 re-enables x264 for OR windows - still seems to work here
For reference, this is what one needs to do to build on OSX (from a jhbuild shell); we add Cython, yasm, ffmpeg and x264:
# Cython:
curl -O http://cython.org/release/Cython-0.15.1.tar.gz
tar -zxvf Cython-0.15.1.tar.gz
cd Cython-0.15.1
python ./setup.py install
# yasm:
curl -O http://www.tortall.net/projects/yasm/releases/yasm-1.2.0.tar.gz
tar -zxvf yasm-1.2.0.tar.gz
cd yasm-1.2.0
./configure --prefix=${JHBUILD_PREFIX} --libdir=${JHBUILD_PREFIX}/lib --build=i386-darwin
make && make install
# ffmpeg:
curl -O http://ffmpeg.org/releases/ffmpeg-0.10.2.tar.bz2
tar -jxf ffmpeg-0.10.2.tar.bz2
cd ffmpeg-0.10.2
./configure --libdir=${JHBUILD_PREFIX}/lib --prefix=${JHBUILD_PREFIX} --enable-shared --disable-static
make && make install
# x264:
curl -O ftp://ftp.videolan.org/pub/x264/snapshots/last_x264.tar.bz2
tar -jxf last_x264.tar.bz2
cd x264-snapshot-*
./configure --libdir=/Users/MacAdmin/gtk/inst/lib --prefix=/Users/MacAdmin/gtk/inst --enable-shared --disable-static --enable-pic
make && make install
Then build xpra as usual:
python setup.py install
For win32, you probably need this patch to Cython's Cython/Distutils/extension.py:
@@ -16,7 +16,6 @@
 except ImportError:
     warnings = None

 class Extension(_Extension.Extension):
-    _Extension.Extension.__doc__ + \
     """pyrex_include_dirs : [string]
         list of directories to search for Pyrex header files (.pxd)
         (in Unix form for portability)
to avoid this error:
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' in setup.py
Then some magic incantations to tell Visual Studio where to find the libraries and headers, otherwise you get the very unhelpful error:
error: command '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2
The actual error messages are in fact logged to src/py2exe-xpra.log.
They are a bunch of syntax errors in x264lib.h because the include paths are incorrect.
Latest patch attached to this ticket makes py2exe succeed. The paths to ffmpeg are hardcoded however.
This patch enables x264 building under win32.
Rebased patch on latest rev. It builds fine under win32 but "xpra attach --encoding x264" refuses the x264 argument.
r675 fixes the build for win32.
One troubling issue though: I can connect using x264 from win32 to a Linux server but on the second connection the server will crash hard.. memleak?
A memleak won't trigger a crash, and I'm surprised that it is the *server* crashing. The client crashing would be less surprising: can you confirm that the encoder context is *per window* and not *per window per client*?
Naturally it would be easier in the second case.
It crashes on avcodec_close(), which is called from clean_decoder() in x264lib.c, itself called from Encoder.clean().
We only clean when the client disconnects (via self._on_close) or when the dimensions change. Is it possible somehow that we end up doing both? del encoders[wid] is supposed to remove the reference to the encoder.
(gdb) bt
#0  0x000000300a664604 in avcodec_close () from /usr/lib64/libavcodec.so.53
#1  0x00007fdff357490b in clean_decoder (ctx=0x7fdfdc0017e0) at xpra/x264/x264lib.c:106
#2  0x00007fdff35715da in __pyx_pf_4xpra_4x264_5codec_6xcoder_2clean (__pyx_v_self=<xpra.x264.codec.Encoder at remote 0x1754e50>, unused=0x0) at xpra/x264/codec.c:719
#3  0x00000032da6dfb13 in call_function (oparg=<optimized out>, pp_stack=0x7fff65856388) at /usr/src/debug/Python-2.7.2/Python/ceval.c:4074
#4  PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at /usr/src/debug/Python-2.7.2/Python/ceval.c:2740
#5  0x00000032da6e15a5 in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=0, kws=0x1a30438, kwcount=0, defs=0x0, defcount=0, closure=(<cell at remote 0x195d440>, <cell at remote 0x195dda8>, <cell at remote 0x195dc58>)) at /usr/src/debug/Python-2.7.2/Python/ceval.c:3330
#6  0x00000032da6dfadb in fast_function (nk=<optimized out>, na=0, n=<optimized out>, pp_stack=0x7fff65856578, func=<function at remote 0x195cd70>) at /usr/src/debug/Python-2.7.2/Python/ceval.c:4186
#7  call_function (oparg=<optimized out>, pp_stack=0x7fff65856578) at /usr/src/debug/Python-2.7.2/Python/ceval.c:4111
#8  PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at /usr/src/debug/Python-2.7.2/Python/ceval.c:2740
#9  0x00000032da6e0580 in fast_function (nk=<optimized out>, na=1, n=<optimized out>, pp_stack=0x7fff658566b8, func=<function at remote 0x1839f50>) at /usr/src/debug/Python-2.7.2/Python/ceval.c:4176
#10 call_function (oparg=<optimized out>, pp_stack=0x7fff658566b8) at /usr/src/debug/Python-2.7.2/Python/ceval.c:4111
#11 PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at /usr/src/debug/Python-2.7.2/Python/ceval.c:2740
#12 0x00000032da6e0580 in fast_function (nk=<optimized out>, na=3, n=<optimized out>, pp_stack=0x7fff658567f8, func=<function at remote 0x183dc08>) at /usr/src/debug/Python-2.7.2/Python/ceval.c:4176
#13 call_function (oparg=<optimized out>, pp_stack=0x7fff658567f8) at /usr/src/debug/Python-2.7.2/Python/ceval.c:4111
#14 PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at /usr/src/debug/Python-2.7.2/Python/ceval.c:2740
#15 0x00000032da6e0580 in fast_function (nk=<optimized out>, na=3, n=<optimized out>, pp_stack=0x7fff65856938, func=<function at remote 0x183dcf8>) at /usr/src/debug/Python-2.7.2/Python/ceval.c:4176
#16 call_function (oparg=<optimized out>, pp_stack=0x7fff65856938) at /usr/src/debug/Python-2.7.2/Python/ceval.c:4111
#17 PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at /usr/src/debug/Python-2.7.2/Python/ceval.c:2740
#18 0x00000032da6e15a5 in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=3, kws=0x0, kwcount=0, defs=0x7fdff48a1260, defcount=2, closure=0x0) at /usr/src/debug/Python-2.7.2/Python/ceval.c:3330
#19 0x00000032da66dc2c in function_call (func=<function at remote 0x7fdff48a2c08>, arg=(<Protocol(_read_queue=<Queue(unfinished_tasks=14, queue=<collections.deque at remote 0x18fb7c0>, maxsize=5, all_tasks_done=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x1754cd0>, acquire=<built-in method acquire of thread.lock object at remote 0x1754cd0>, _Condition__waiters=[], release=<built-in method release of thread.lock object at remote 0x1754cd0>) at remote 0x195b110>, mutex=<thread.lock at remote 0x1754cd0>, not_full=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x1754cd0>, acquire=<built-in method acquire of thread.lock object at remote 0x1754cd0>, _Condition__waiters=[], release=<built-in method release of thread.lock object at remote 0x1754cd0>) at remote 0x195b0d0>, not_empty=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x1754cd0>, acquire=<built-in method acquire of thread.lock object at remote 0x1754cd0>, _Condition__waiters=[], release=<built-in method release of thread.lock object at remote 0x1754c...(truncated), kw=0x0) at /usr/src/debug/Python-2.7.2/Objects/funcobject.c:526
that's fixed in r678
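The usual guard against the kind of double-free suspected above is to make clean() idempotent: null out the context after freeing it, so a second call is a harmless no-op. This is a sketch of the general technique under that assumption, not the actual r678 change:

```python
class Encoder:
    def __init__(self):
        self.ctx = object()   # stands in for the C x264/avcodec context

    def clean(self):
        if self.ctx is None:
            return            # already cleaned: second call is a no-op
        self._free(self.ctx)  # real code calls clean_encoder()/clean_decoder()
        self.ctx = None       # drop the pointer so it cannot be freed twice

    def _free(self, ctx):
        # simulated free: in C this would be avcodec_close() + free()
        assert ctx is not None, "double free!"
```

The same NULL-check belongs in the C side too (clean_decoder() checking ctx->codec_ctx before calling avcodec_close()), since both the disconnect path and the resize path can reach clean().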
Trying to run under Windows:
cannot load x264: cannot import name codec
You need to ensure that the paths in the build files are correct, if they are then py2exe will generate a loader for "codec.pyd" and include all the required DLLs itself.
log from a successful py2exe build
The important parts from the log file above:
creating python loader for extension 'xpra.x264.codec' (E:\xpra\src\xpra\x264\codec.pyd -> xpra.x264.codec.pyd)
*** copy extensions ***
(..)
copying E:\xpra\src\xpra\x264\codec.pyd -> E:\xpra\src\dist\xpra.x264.codec.pyd
*** copy dlls ***
(..)
copying Z:\ffmpeg-win32-shared\bin\avcodec-54.dll -> E:\xpra\src\dist
(..)
copying Z:\ffmpeg-win32-shared\bin\swscale-2.dll -> E:\xpra\src\dist
(..)
copying Z:\ffmpeg-win32-shared\bin\avutil-51.dll -> E:\xpra\src\dist
Assuming you have those, the win32 build should have x264 support.
x264 under windows now works.
this is enough for the 0.2 release, the remaining issues are in #110
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/94