See also #451 (libva accelerated encoding)
Pointers:
Worth mentioning:
stub libva files to get going
More pointers (for libva - not nvenc):
stubs for implementing an nvenc encoder
Attached is a gdb command file for tracing NVENC calls made by the sample app from the SDK.
Use:
$ gdb --args ./nvEncoder -config=config.txt -outFile=bba.h264 inFile=../YUV/1080p/HeavyHandIdiot.3sec.yuv
(gdb) b NvEncodeAPICreateInstance
Function "NvEncodeAPICreateInstance" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (NvEncodeAPICreateInstance) pending.
(gdb) r
Starting program: /home/ahuillet/nvenc_3.0_sdk/Samples/nvEncodeApp/nvEncoder -config=config.txt -outFile=bba.h264 inFile=../YUV/1080p/HeavyHandIdiot.3sec.yuv
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
>> GetNumberEncoders() has detected 8 CUDA capable GPU device(s) <<
[ GPU #0 - < GRID K1 > has Compute SM 3.0, NVENC Available ]
[ GPU #1 - < GRID K1 > has Compute SM 3.0, NVENC Available ]
[ GPU #2 - < GRID K1 > has Compute SM 3.0, NVENC Available ]
[ GPU #3 - < GRID K1 > has Compute SM 3.0, NVENC Available ]
[ GPU #4 - < GRID K1 > has Compute SM 3.0, NVENC Available ]
[ GPU #5 - < GRID K1 > has Compute SM 3.0, NVENC Available ]
[ GPU #6 - < GRID K1 > has Compute SM 3.0, NVENC Available ]
[ GPU #7 - < GRID K1 > has Compute SM 3.0, NVENC Available ]
Breakpoint 1, 0x00007ffff6d179a0 in NvEncodeAPICreateInstance () from /lib64/libnvidia-encode.so.1
(gdb) n
(gdb) n
(gdb) source ~/trace_nvenc
Breakpoint 2 at 0x7ffff6d175f0
...
Then continue the execution. Results found in attachment/ticket/370/nvenc-trace.txt (crazy long ticket comment cleaned up by totaam)
After cleanup, this is the sequence of calls for setup:
nvEncOpenEncodeSessionEx
nvEncGetEncodeGUIDCount
nvEncGetEncodeGUIDs
nvEncGetEncodePresetCount
nvEncGetEncodePresetGUIDs
nvEncGetEncodePresetConfig
nvEncGetEncodeProfileGUIDCount
nvEncGetEncodeProfileGUIDs
nvEncGetInputFormatCount
nvEncGetInputFormats
nvEncInitializeEncoder
Then, create the input and output buffers:
nvEncCreateInputBuffer
nvEncCreateBitstreamBuffer
nvEncRegisterAsyncEvent
[repeat N times]
nvEncRegisterAsyncEvent
Then:
> NVENC Encoder[0] configuration parameters for configuration #0
> GPU Device ID = 0
> Frames = 0 frames
> ConfigFile = (null)
> Frame at which 0th configuration will happen = 0
> maxWidth,maxHeight = [1920,1080]
> Width,Height = [1920,1080]
> Video Output Codec = 4 - H.264 Codec
> Average Bitrate = 6000000 (bps/sec)
> Peak Bitrate = 0 (bps/sec)
> Rate Control Mode = 1 - VBR (Variable Bitrate)
> Frame Rate (Num/Denom) = (30000/1001) 29.9700 fps
> GOP Length = 30
> Set Initial RC QP = 0
> Initial RC QP (I,P,B) = I(0), P(0), B(0)
> Number of B Frames = 2
> Display Aspect Ratio X = 1920
> Display Aspect Ratio Y = 1080
> Video codec profile = 100
> Video codec Level = 0
> FieldEncoding = 0
> Number slices per Frame = 1
> Encoder Preset = 3 - High Quality (HQ) Preset
> NVENC API Interface = 2 - CUDA
Input Filesize: 230227968 bytes
[ Source Input File ] = "../YUV/1080p/HeavyHandIdiot.3sec.yuv
[ # of Input Frames ] = 74
** Start Encode <../YUV/1080p/HeavyHandIdiot.3sec.yuv>, Frames [0,74] **
Loading Frames [0,73] into system memory queue (74 frames)
nvEncReconfigureEncoder
Encoding Frames [0,73]

and the actual encoding process, probably with async trace:
nvEncCreateInputBuffer
nvEncCreateBitstreamBuffer
nvEncRegisterAsyncEvent
[repeat 6 times]
nvEncCreateInputBuffer
r4328 adds the stub encoder, which successfully initializes nvenc (and nothing else yet..) - more in r4329 + r4330
Lots more (too many changesets to list, see r4349 and earlier)
We now get valid h264 data out of it, and we have tests for the data going in and out.
What still needs to be done:
* GPUs
* NV12_TILED64x16 and YUV444_TILED64x16
* csc: preferably on the GPU to save CPU (see #384)
* separating the setup code (functionList and nvenc encoder context) and the encoder proper (may not even need to be cython?)
With correct padding, r4375 seems to work - though I have seen some errors like these:
[h264 @ 0x7fa4687d4e20] non-existing PPS 0 referenced
[h264 @ 0x7fa4687d4e20] decode_slice_header error
On the client side...
We instantiate the client-side decoder with the window size (rounded down to an even size), whereas the data we get from nvenc has dimensions rounded up to 32... Not sure if this is a problem, or if we can/should send the actual encoder size to the client. Thinking more about it: the vertical size must be rounded up to 32, otherwise nvenc does not find the U and V planes where it expects them... but I am not so sure about the horizontal size (it seemed to work before without padding).
The padding could be useful when resizing a window: we don't need (at least not server side..) a full encoder re-init unless the new size crosses one of the 32-padded boundaries.
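For illustration, a minimal sketch of that rounding logic (the names are mine, not xpra's): round dimensions up to 32, and only re-init when a padded boundary is crossed:

def roundup(n, align=32):
    return (n + align - 1) & ~(align - 1)

def needs_reinit(old_size, new_size):
    # a resize only requires a full encoder re-init when the
    # rounded-up (padded) dimensions actually change
    return (roundup(old_size[0]) != roundup(new_size[0])
            or roundup(old_size[1]) != roundup(new_size[1]))

assert roundup(1080) == 1088
assert not needs_reinit((1913, 1057), (1919, 1060))
assert needs_reinit((1913, 1057), (1921, 1057))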
Note: it looks like the buffer formats advertised as being supported come in lists of bitmasks (the docs claim it is a plain list) - we use NV12_PL.
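The check could then look something like this sketch - the enum value below is illustrative only, the real one comes from nvEncodeAPI.h:

NV_ENC_BUFFER_FORMAT_NV12_PL = 0x20    # hypothetical value, for illustration

def supports_format(advertised, wanted=NV_ENC_BUFFER_FORMAT_NV12_PL):
    # 'advertised' is what the API returns: a list of bitmasks,
    # not the plain list of enums the docs describe
    mask = 0
    for bits in advertised:
        mask |= bits
    return bool(mask & wanted)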
abandoned work on adding stride attributes to csc so nvenc can specify the padding to 32 generically
abandoned work on adding stride attributes to csc so nvenc can specify the padding to 32 generically (again with missing server file)
use two buffers CUDA side so we can use a kernel to copy (and convert) from one to the other
use pycuda to remove lots of code... except this does not work because we need a context pointer for nvenc :(
"working" pycuda version with an empty kernel
with kernel doing something - causes crashes..
With attachment/ticket/370/nvenc-pycuda-with-kernel2.patch, not too much work is left:
* The TLS issues still need resolving (SIGINT?)
Other things we may want to do:
* gpuGetMaxGflopsDeviceId?
* prepared_call to speed up kernel invocation? (meh - not much to save there; see the sketch below)
* use (py)cuda to copy from input buffer (already in NV12 format) to output buffer (nvenc buffer)
r4414 adds the pycuda code (simplifies things) and does the BGRA to NV12 CSC using a custom CUDA kernel. Total encoding time is way down.
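This is not the code from r4414, just a minimal pycuda sketch of the same technique: pitched buffers from mem_alloc_pitch, a BGRA to NV12 kernel compiled with SourceModule, and prepare/prepared_call to keep launch overhead down. Everything here (kernel name, block size, BT.601 limited-range coefficients, even dimensions) is an illustrative assumption:

import numpy
import pycuda.autoinit                  # creates a CUDA context on the default device
from pycuda import driver
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void BGRA_to_NV12(const unsigned char *src, int src_stride,
                             unsigned char *dstY, unsigned char *dstUV,
                             int dst_stride, int w, int h)
{
    // each thread converts one 2x2 block of pixels (w and h assumed even)
    const int x = (blockIdx.x * blockDim.x + threadIdx.x) * 2;
    const int y = (blockIdx.y * blockDim.y + threadIdx.y) * 2;
    if (x >= w || y >= h)
        return;
    float rs = 0.0f, gs = 0.0f, bs = 0.0f;
    for (int j = 0; j < 2; j++) {
        for (int i = 0; i < 2; i++) {
            const unsigned char *p = src + (y + j) * src_stride + (x + i) * 4;
            const float b = p[0], g = p[1], r = p[2];      // BGRA byte order
            dstY[(y + j) * dst_stride + x + i] =
                (unsigned char) (0.257f * r + 0.504f * g + 0.098f * b + 16.0f);
            rs += r; gs += g; bs += b;
        }
    }
    // one interleaved U,V pair per 2x2 block, from the averaged RGB:
    const float r = rs / 4.0f, g = gs / 4.0f, b = bs / 4.0f;
    unsigned char *uv = dstUV + (y / 2) * dst_stride + x;
    uv[0] = (unsigned char) (-0.148f * r - 0.291f * g + 0.439f * b + 128.0f);
    uv[1] = (unsigned char) ( 0.439f * r - 0.368f * g - 0.071f * b + 128.0f);
}
""")

w, h = 1920, 1080
bgra = numpy.random.randint(0, 256, (h, w * 4)).astype(numpy.uint8)

# pitched allocations give us the padded strides discussed above:
src_gpu, src_stride = driver.mem_alloc_pitch(w * 4, h, 4)
dst_gpu, dst_stride = driver.mem_alloc_pitch(w, h * 3 // 2, 4)

# upload the source pixels, row by row, into the pitched buffer:
copy = driver.Memcpy2D()
copy.set_src_host(bgra)
copy.set_dst_device(src_gpu)
copy.src_pitch = w * 4
copy.dst_pitch = src_stride
copy.width_in_bytes = w * 4
copy.height = h
copy(aligned=True)

fn = mod.get_function("BGRA_to_NV12")
fn.prepare("PiPPiii")                   # declare the argument types once...
block = (16, 8, 1)
grid = ((w // 2 + 15) // 16, (h // 2 + 7) // 8)
fn.prepared_call(grid, block,           # ...then each launch is cheap
                 int(src_gpu), src_stride,
                 int(dst_gpu),                        # Y plane
                 int(dst_gpu) + dst_stride * h,       # interleaved UV plane
                 dst_stride, w, h)

A real implementation would also have to keep the pycuda context pointer available for nvenc, which is exactly the problem mentioned above.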
New issues:
* max_block_sizes, max_grid_sizes and max_threads_per_block
* mem_alloc instead of mem_alloc_pitch - and maybe use that for smaller areas? (where the padding becomes expensive)
* mod = driver.module_from_file(filename) (see the sketch below)
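The last item - precompiling the kernel so that module_from_file() can load it without invoking nvcc at runtime - could look like this sketch (file names are illustrative):

import pycuda.autoinit
from pycuda import driver
from pycuda.compiler import compile

# compile once (e.g. at build time) and cache the binary:
cubin = compile(open("bgra_to_nv12.cu").read())
with open("bgra_to_nv12.cubin", "wb") as f:
    f.write(cubin)

# at runtime, load the pre-built module without invoking nvcc:
mod = driver.module_from_file("bgra_to_nv12.cubin")
fn = mod.get_function("BGRA_to_NV12")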
Here's how I run the server when testing:
PATH=$PATH:/usr/local/cuda-5.5/bin/ \
LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64 \
XPRA_NVENC_DEBUG=1 \
XPRA_DAMAGE_DEBUG=1 \
XPRA_VIDEOPIPELINE_DEBUG=1 \
XPRA_ENCODER_TYPE=nvenc \
xpra start :10
For building, here is the nvenc.pc pkgconfig file I've used on Fedora 19:
prefix=/opt/nvenc_3.0_sdk
exec_prefix=${prefix}
core_includedir=${prefix}/Samples/core/include
api_includedir=${prefix}/Samples/nvEncodeApp/inc
libdir=/usr/lib64/nvidia

Name: nvenc
Description: NVENC
Version: 1.0
Requires:
Conflicts:
Libs: -L${libdir} -lnvidia-encode
Cflags: -I${core_includedir} -I${api_includedir}
Note: this refers to unversioned libraries, which you may need to create, here for a 64-bit build:
cd /usr/lib64/nvidia/
ln -sf libnvidia-encode.so.1 libnvidia-encode.so
ln -sf libcuda.so.1 libcuda.so
#etc..
cd /usr/lib64
ln -sf nvidia/libcuda.so ./
(or you can add the version to the pkgconfig file)
trace from comment:3
Instructions for installing NVENC support from scratch on Fedora 19:
Install the NVIDIA drivers (check dmesg for warnings/errors)
sudo sh cuda_*.run -override-compiler
Do install CUDA; you can skip the rest, but you *must* tell it not to install the broken drivers it wants to install.
export PATH=/usr/bin:/bin:/usr/local/cuda/bin/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
Be very careful not to place cuda ahead of the regular LD_LIBRARY_PATH, as this can cause big problems with some libraries (ie: libopencl).
Install the NVENC SDK in /opt/, create the nvenc.pc and cuda.pc (see 384#comment:3) pkgconfig files, and add unversioned cuda and nvidia-encode libraries as per comment:10.
The cuda.pc file expects cuda in /opt, so I chose to symlink it:
ln -sf /usr/local/cuda /opt
Create the /dev/nvidia* devices:
rm -f /dev/nvidia*
# Count the number of NVIDIA controllers found.
N3D=`/sbin/lspci | grep -i NVIDIA | grep "3D controller" | wc -l`
NVGA=`/sbin/lspci | grep -i NVIDIA | grep "VGA compatible controller" | wc -l`
N=`expr $N3D + $NVGA - 1`
for i in `seq 0 $N`; do
  mknod -m 666 /dev/nvidia$i c 195 $i
done
mknod -m 666 /dev/nvidiactl c 195 255
Finally, you can test that xpra builds with cuda/nvenc support:
./setup.py --with-nvenc --with-csc_nvcuda build
And that you can run the cuda/nvenc tests:
mkdir tmp && cd tmp
cp -apr ../tests ./
PYTHONPATH=. ./tests/xpra/codecs/test_csc_nvcuda.py
PYTHONPATH=. ./tests/xpra/codecs/test_nvenc.py
Strangely enough, the test encoder fails on a GTX 760 and not with a graceful error:
$ gdb ./nvEncoder
(..)
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /opt/nvenc_3.0_sdk/Samples/nvEncodeApp/nvEncoder...(no debugging symbols found)...done.
(gdb) break OpenEncodeSession
Function "OpenEncodeSession" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (OpenEncodeSession) pending.
(gdb) run -configFile=HeavyHand_1080p.txt -outfile=HeavyHandIdiot.3sec.264
Starting program: /opt/nvenc_3.0_sdk/Samples/nvEncodeApp/./nvEncoder -configFile=HeavyHand_1080p.txt -outfile=HeavyHandIdiot.3sec.264
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
>> GetNumberEncoders() has detected 1 CUDA capable GPU device(s) <<
[ GPU #0 - < GeForce GTX 760 > has Compute SM 3.0, NVENC Available ]
>> InitCUDA() has detected 1 CUDA capable GPU device(s)<<
[ GPU #0 - < GeForce GTX 760 > has Compute SM 3.0, Available NVENC ]
>> Select GPU #0 - < GeForce GTX 760 > supports SM 3.0 and NVENC
[New Thread 0x7ffff5bce700 (LWP 16417)]

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x000000000040b12f in CNvEncoder::OpenEncodeSession(int, char const**, unsigned int) ()
#2  0x000000000040dcb2 in CNvEncoderH264::EncoderMain(EncoderGPUInfo, EncoderAppParams, int, char const**) ()
#3  0x0000000000401f7b in main ()
What's even more strange is that our test code fails even earlier:
Traceback (most recent call last):
  File "./tests/xpra/codecs/test_nvenc.py", line 136, in <module>
    main()
  File "./tests/xpra/codecs/test_nvenc.py", line 128, in main
    test_encode_one()
  File "./tests/xpra/codecs/test_nvenc.py", line 17, in test_encode_one
    test_encoder(encoder_module)
  File "/home/spikesdev/src/tmp/tests/xpra/codecs/test_encoder.py", line 62, in test_encoder
    e.init_context(actual_w, actual_h, src_format, encoding, 20, 0, options)
  File "encoder.pyx", line 1179, in xpra.codecs.nvenc.encoder.Encoder.init_context (xpra/codecs/nvenc/encoder.c:5739)
  File "encoder.pyx", line 1217, in xpra.codecs.nvenc.encoder.Encoder.init_cuda (xpra/codecs/nvenc/encoder.c:6686)
  File "encoder.pyx", line 1232, in xpra.codecs.nvenc.encoder.Encoder.init_nvenc (xpra/codecs/nvenc/encoder.c:6813)
  File "encoder.pyx", line 1649, in xpra.codecs.nvenc.encoder.Encoder.open_encode_session (xpra/codecs/nvenc/encoder.c:12790)
  File "encoder.pyx", line 1102, in xpra.codecs.nvenc.encoder.raiseNVENC (xpra/codecs/nvenc/encoder.c:4935)
Exception: getting API function list - returned 15: This indicates that an invalid struct version was used by the client.
I am pretty sure that when I tested on a GTX 450, I got past this point and it failed when creating the context instead (since that card does not support nvenc); that's why there is the XPRA_NVENC_FORCE flag in the code.
Edit: this is a problem with the newer drivers, which are incompatible with NVENC SDK v3 (SDK v2 works though!).
changes needed to build against the NVENC SDK version 2
example pkgconfig file for NVENC SDK version 2
updated (smaller) patch to apply on top of r4620
As of r4621 the code supports both SDK v2 and v3, use whichever works with your current driver version.
Took me a while to figure this out:
Looks like nvidia forgot to test backwards compatibility with their "upgrade".
r4651 makes V3 the default again - "newer is better", right?
Anyway, installing driver 319.49 on a system with kernel 3.10 or newer (ie: Fedora 19) is a PITA:
sudo yum remove xorg-x11-drv-nvidia xorg-x11-drv-nvidia-libs akmod-nvidia kmod-nvidia
sudo sh NVIDIA-Linux-x86_64-319.49.run
which will fail at the DKMS stage if building against a kernel version 3.11 or newer..
You then need to patch nv-linux.h in /var/lib/dkms/nvidia/319.49/source/. The quick and dirty way:
sed -i -e 's/#define NV_NUM_PHYSPAGES num_physpages/#define NV_NUM_PHYSPAGES get_num_physpages/g' nv-linux.h
sudo dkms install -m nvidia -v 319.49
sudo nvidia-xconfig
sudo service gdm restart
Important fix in r4652 which will need to be backported to v0.10.x
Remaining tasks for nvenc:
* max_block_sizes, max_grid_sizes and max_threads_per_block
* 4:4:4 mode (uint32_t separateColourPlaneFlag #[in]: Set to 1 to enable 4:4:4 separate colour planes)
* mod = driver.module_from_file(filename)
* gpuGetMaxGflopsDeviceId: max_gflops = device_properties.multiProcessorCount * device_properties.clockRate; (see the sketch below)
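A pycuda version of that heuristic (gpuGetMaxGflopsDeviceId comes from the CUDA SDK samples) might look like this sketch:

import pycuda.driver as driver

driver.init()

def max_gflops_device():
    # pick the device with the highest multiProcessorCount * clockRate,
    # like the SDK's gpuGetMaxGflopsDeviceId
    best, best_score = None, -1
    for i in range(driver.Device.count()):
        dev = driver.Device(i)
        attrs = dev.get_attributes()
        score = (attrs[driver.device_attribute.MULTIPROCESSOR_COUNT]
                 * attrs[driver.device_attribute.CLOCK_RATE])
        if score > best_score:
            best, best_score = dev, score
    return best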
At the moment, running out of contexts does this:
2013-11-05 14:40:58,590 setup_pipeline failed for (65, None, 'BGRX', codec_spec(nvenc))
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/xpra/server/window_video_source.py", line 605, in setup_pipeline
    self._video_encoder.init_context(enc_width, enc_height, enc_in_format, encoder_spec.encoding, quality, speed, self.encoding_options)
  File "encoder.pyx", line 1291, in xpra.codecs.nvenc.encoder.Encoder.init_context (xpra/codecs/nvenc/encoder.c:5883)
  File "encoder.pyx", line 1329, in xpra.codecs.nvenc.encoder.Encoder.init_cuda (xpra/codecs/nvenc/encoder.c:6830)
  File "encoder.pyx", line 1344, in xpra.codecs.nvenc.encoder.Encoder.init_nvenc (xpra/codecs/nvenc/encoder.c:6957)
  File "encoder.pyx", line 1828, in xpra.codecs.nvenc.encoder.Encoder.open_encode_session (xpra/codecs/nvenc/encoder.c:13775)
  File "encoder.pyx", line 1203, in xpra.codecs.nvenc.encoder.raiseNVENC (xpra/codecs/nvenc/encoder.c:5070)
Exception: opening session - returned 2: This indicates that devices pass by the client is not supported.
2013-11-05 14:40:58,593 error processing damage data: failed to setup a video pipeline for h264 encoding with source format BGRX
Everything about nvenc is now on the wiki here
Updates:
compilation took 4124.1ms
More details edited into comment:18; this is good enough for testing.
At this point the encoder should work and give us decent quality (we need YUV444P for best quality) with much lower latency; it also supports efficient scaling.
Please test it and try to break it (please read Using NVENC first): try different resolutions, types of clients, etc. Measure fps with and without it, server load, bandwidth, etc. Be aware that only newer clients can take advantage of nvenc at present (r4722 needs backporting). It may also take a few seconds for nvenc to beat x264 in our internal scoring system, which decides the combination of encoder and csc modules to use.
Things that will probably be addressed in a follow up ticket for the next milestone:
* max_block_sizes, max_grid_sizes and max_threads_per_block - doesn't seem to be causing problems yet
* YUV444P mode - needs docs (apparently not supported by the hardware??)
* (NV12 / YUV444P)
* nvEncReconfigureEncoder (with edge resistance if it causes a new IDR frame - see the sketch after this list)
* (inputBuffer)
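For the nvEncReconfigureEncoder item, the "edge resistance" could be as simple as this sketch (names and threshold are illustrative, not xpra code): only honour a new size once it has stopped changing for a while, so we don't pay for a new IDR frame on every step of an interactive resize:

import time

class ReconfigureGate(object):

    def __init__(self, delay=0.5):
        self.delay = delay      # seconds the new size must remain stable
        self.pending = None
        self.since = 0.0

    def should_reconfigure(self, size):
        now = time.time()
        if size != self.pending:
            # the size changed again: restart the clock
            self.pending, self.since = size, now
            return False
        return now - self.since >= self.delay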
Lower priority still:
gpuGetMaxGflopsDeviceId: max_gflops = device_properties.multiProcessorCount * device_properties.clockRate;
Those have been moved to #466
this crash occurred as I killed xpra with SIGINT.. hopefully rare and due to SIGINT
Tested and working with a Fedora 20 server.
Started with this command line:
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/cuda/lib64:/usr/lib64/nvidia \
xpra --bind-tcp=0.0.0.0:1400 --start-child="xterm -fg white -bg black" \
--no-daemon --encryption=AES --password-file=./passtest start :14
see #517
Note for those landing here: NVENC is not safe to use in versions older than 0.15 because of a context leak due to threading.
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/370