Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name (for which plugin or for which sample application) and the function description.)
The nvcompositor plugin is not compatible with DeepStream SDK plugins. We suggest using the DeepStream plugin nvmultistreamtiler, which composites a 2D tile from batched buffers. Here is a sample showing how to use nvmultistreamtiler.
Can you explain why it is not compatible with DeepStream? Is it a GStreamer plugin?
I tried to run the sample, but it failed.
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 \
filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_1 \
nvstreammux name=mux batch-size=2 width=1920 height=1080 nvbuf-memory-type=3 ! nvmultistreamtiler ! nveglglessink
Setting pipeline to PAUSED ...
Using winsys: x11
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4550: => Surface type not supported for transformation NVBUF_MEM_CUDA_UNIFIED
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4550: => Surface type not supported for transformation NVBUF_MEM_CUDA_UNIFIED
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Setting pipeline to NULL ...
Freeing pipeline ...
Looking at the documentation, how do you expect me to do the same thing?
Who decided it is not compatible anymore?
GStreamer version 1.0 includes the following proprietary NVIDIA plugins:
+---------------------------+------------------------------------------+
| NVIDIA proprietary plugin | Description                              |
+===========================+==========================================+
| nvarguscamerasrc          | Camera plugin for ARGUS API              |
+---------------------------+------------------------------------------+
| nvv4l2camerasrc           | Camera plugin for V4L2 API               |
+---------------------------+------------------------------------------+
| nvvidconv                 | Video format conversion and scaling      |
+---------------------------+------------------------------------------+
| nvcompositor              | Video compositor                         |
+---------------------------+------------------------------------------+
| nveglstreamsrc            | Acts as GStreamer source component;      |
|                           | accepts EGLStream from an EGLStream      |
|                           | producer                                 |
+---------------------------+------------------------------------------+
| nvvideosink               | Video sink component; accepts YUV-I420   |
|                           | format and produces EGLStream (RGBA)     |
+---------------------------+------------------------------------------+
| nvegltransform            | Video transform element for NVMM to      |
|                           | EGLImage (supported with nveglglessink   |
|                           | only)                                    |
+---------------------------+------------------------------------------+
Since nvcompositor is not a DeepStream plugin, it may not work with some DeepStream plugins; please refer to this topic. Hence we suggest using the DeepStream plugin nvmultistreamtiler.
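For reference, the command line above fails because nvbuf-memory-type=3 selects CUDA unified memory, which the transform path on Jetson rejects (hence the NVBUF_MEM_CUDA_UNIFIED error in the log). A hedged sketch that drops that property (falling back to the platform default memory type) and adds nvegltransform, which on Jetson is typically required in front of nveglglessink; treat it as a starting point rather than a verified pipeline:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 \
filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_1 \
nvstreammux name=mux batch-size=2 width=1920 height=1080 ! nvmultistreamtiler ! nvegltransform ! nveglglessink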
You might use this pipeline: “…->nvmultistreamtiler->nvvideoconvert->appsink”. nvmultistreamtiler is responsible for mixing (tiling) the videos, nvvideoconvert converts the video format, and then you can call the third-party display library from appsink for rendering.
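Here is a minimal C sketch of that approach; the pipeline string, the RGBA caps, and the commented-out my_display_draw() call are illustrative placeholders (assumptions, not a real API) for whatever your third-party display library actually expects:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Called for every tiled frame; hands the mapped RGBA data to the display lib. */
static GstFlowReturn on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  if (!sample)
    return GST_FLOW_ERROR;
  GstBuffer *buf = gst_sample_get_buffer (sample);
  GstMapInfo map;
  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* my_display_draw (map.data, map.size);  <- hypothetical third-party call */
    gst_buffer_unmap (buf, &map);
  }
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (
      "filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 "
      "! h264parse ! nvv4l2decoder ! mux.sink_0 "
      "nvstreammux name=mux batch-size=1 width=1280 height=720 "
      "! nvmultistreamtiler ! nvvideoconvert "
      "! video/x-raw,format=RGBA ! appsink name=sink emit-signals=true", &err);
  if (!pipeline) {
    g_printerr ("Parse error: %s\n", err->message);
    return 1;
  }
  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
  g_signal_connect (sink, "new-sample", G_CALLBACK (on_new_sample), NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}

Build with something like: gcc demo.c $(pkg-config --cflags --libs gstreamer-app-1.0)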
From your pipeline, the command line mixes two sources: one from v4l2src and the other from videotestsrc. Modifying my last command line can meet your requirement. To be specific, you can replace filesrc with the new source plugins; nvstreammux then forms a batch of frames from the multiple input sources; nvinfer, nvtracker, and other plugins may be inserted after nvstreammux; nvmultistreamtiler composites a 2D tile from the batched buffers; and finally you may use nv3dsink or appsink to display the buffers.
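For example, a hedged sketch with those two sources (assuming /dev/video0 delivers a format nvvideoconvert accepts; untested here):

gst-launch-1.0 v4l2src device=/dev/video0 ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! mux.sink_0 \
videotestsrc ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! mux.sink_1 \
nvstreammux name=mux batch-size=2 width=1920 height=1080 ! nvmultistreamtiler ! nv3dsink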
Thanks for sharing! nvmultistreamtiler currently only supports side-by-side compositing. nvcompositor supports overlay compositing, but without alpha blending.
Hi,
The nvcompositor plugin is not supported in the DeepStream SDK. If you have DeepStream plugins in your GStreamer pipeline, please use the nvmultistreamtiler plugin.
This is like apples and oranges; you cannot compare them, and I'm using it as a GStreamer element.
To illustrate, and to make sure this is not a communication barrier: this is the result of mixing graphics and processed video. How do I do the same with the tiler that you recommended?
As an application engineer, what would you recommend? nvcompositor does not work as expected and is not supported; you propose a solution based on the tiler element, which is in fact not a solution, and link to another issue:
So you are telling me that with a Jetson-type device we cannot overlay graphics and live video?
Hi,
As suggested, you can map an NVMM buffer to NvBufSurface and then map it to an EGLImage. This is supported. If you create the EGLImage first and would like to map it to an NVMM buffer, that is not supported.
In your use case, does it work to map the NVMM buffer to NvBufSurface, map it to an EGLImage, and use that EGLImage in the GUI app? If you must create the EGLImage in the GUI app, a memory copy is required.
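To make the supported direction concrete, here is a minimal C sketch (NVMM GstBuffer -> NvBufSurface -> EGLImage). It assumes the GstBuffer arrives in a pad probe or appsink callback with NVMM memory; the GUI-side use of the EGLImage is left as a comment:

#include <gst/gst.h>
#include "nvbufsurface.h"  /* DeepStream SDK header */

/* Map one NVMM GstBuffer to its NvBufSurface and expose it as an EGLImage. */
static void use_as_egl_image (GstBuffer *buf)
{
  GstMapInfo map;
  if (!gst_buffer_map (buf, &map, GST_MAP_READ))
    return;
  /* For NVMM memory, the mapped data is the NvBufSurface itself. */
  NvBufSurface *surf = (NvBufSurface *) map.data;
  if (NvBufSurfaceMapEglImage (surf, 0) == 0) {
    void *egl_image = surf->surfaceList[0].mappedAddr.eglImage;
    /* ... use egl_image in the GUI app here, e.g. bind it to a GL texture
       with glEGLImageTargetTexture2DOES(), while the mapping is alive ... */
    NvBufSurfaceUnMapEglImage (surf, 0);
  }
  gst_buffer_unmap (buf, &map);
}

Note that the EGLImage is only valid while the surface mapping (and the underlying buffer) is alive, which is why the GUI-side consumption happens before the unmap calls here.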