FileSink not working on RTMP Streams

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 7.1
**• JetPack Version (valid for Jetson only)**
**• TensorRT Version** 10.3
**• NVIDIA GPU Driver Version (valid for GPU only)** Driver Version: 572.16
**• Issue Type (questions, new requirements, bugs)**

  • The DeepStream pipeline works perfectly on file sources and RTSP live streams with the file sink attached, but it does not work on an RTMP stream URL when I link the filesink branch through the tee element.

Linking (a sketch of how the tee's recording branch is attached follows the list):

  • StreamMux → PGIE (Face Detection)

  • PGIE → Tracker (Tracking)

  • Tracker → SGIE Emotion (Emotion Detection)

  • SGIE Emotion → SGIE Gaze (Gaze Detection)

  • SGIE Gaze → Converter

  • Converter → OSD (Overlay text)

  • OSD → Tee (Split output)

  • Tee → Display Sink (Display on screen)

  • Tee → File Output (Record video)

  • Tee → RTSP Output (Stream video)
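
For context, here is a minimal sketch of attaching the recording branch (queue → nvvideoconvert → encoder → h264parse → mp4mux → filesink) to the tee. The element factory names and property choices follow the usual DeepStream/GStreamer pattern and are assumptions about this particular app, not its actual code; `gst_element_request_pad_simple` needs GStreamer 1.20+ (on older versions use `gst_element_get_request_pad`).

```c
#include <gst/gst.h>

/* Attach a recording branch  queue -> nvvideoconvert -> nvv4l2h264enc ->
 * h264parse -> mp4mux -> filesink  to an existing tee.  Each tee branch
 * needs its own queue so one branch cannot stall the others. */
static gboolean
attach_file_branch (GstBin *pipeline, GstElement *tee, const gchar *path)
{
  GstElement *queue = gst_element_factory_make ("queue",          "rec-queue");
  GstElement *conv  = gst_element_factory_make ("nvvideoconvert", "rec-conv");
  GstElement *enc   = gst_element_factory_make ("nvv4l2h264enc",  "rec-enc");
  GstElement *parse = gst_element_factory_make ("h264parse",      "rec-parse");
  GstElement *mux   = gst_element_factory_make ("mp4mux",         "rec-mux");
  GstElement *sink  = gst_element_factory_make ("filesink",       "rec-sink");

  if (!queue || !conv || !enc || !parse || !mux || !sink)
    return FALSE;

  /* sync=FALSE: file writing does not need to be clocked against playback */
  g_object_set (sink, "location", path, "sync", FALSE, NULL);

  gst_bin_add_many (pipeline, queue, conv, enc, parse, mux, sink, NULL);
  if (!gst_element_link_many (queue, conv, enc, parse, mux, sink, NULL))
    return FALSE;

  /* tee exposes request pads named "src_%u"; request one and link it to
   * the queue that heads this branch. */
  GstPad *tee_src = gst_element_request_pad_simple (tee, "src_%u");
  GstPad *q_sink  = gst_element_get_static_pad (queue, "sink");
  gboolean ok = (gst_pad_link (tee_src, q_sink) == GST_PAD_LINK_OK);
  gst_object_unref (q_sink);
  gst_object_unref (tee_src);
  return ok;
}
```

The display and RTSP branches hang off the same tee in the same way, each behind its own queue.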

  • The pipeline starts and then shuts down gracefully, but it only runs for about one second before stopping. These are the logs:

  • ERROR: gst-stream-error-quark: Could not multiplex stream. (10): …/gst/isomp4/gstqtmux.c(5402): gst_qt_mux_add_buffer (): /GstPipeline:deepstream-combined-pipeline/GstMP4Mux:mp4-mux:
    Buffer has no PTS.
    DEBUG: EOS sent, waiting for finalization…
    Debug: Object ID: 0, needs_infer: TRUE, classifier_async_mode: FALSE
    [Debug] m_ClassifierThreshold: 0.4
    [Debug] m_Labels structure:
    Index 0 (size 1): Angry
    Index 1 (size 1): Disgust
    Index 2 (size 1): Fear
    Index 3 (size 1): Happy
    Index 4 (size 1): Sad
    Index 5 (size 1): Surprise
    Index 6 (size 1): Neutral
    [Debug] Number of attributes (output layers): 1
    [Debug] Layer index = 0, numClasses = 7
    Class 0 => Probability = 0.0698886
    Class 1 => Probability = 0.00425511
    Class 2 => Probability = 0.48184
    [Debug] Updated maxProbability for Class 2 => 0.48184
    [Debug] After attrFound=true: maxProbability=0.48184, attr.attributeIndex=0, attr.attributeValue=2
    Class 3 => Probability = 0.00365377
    Class 4 => Probability = 0.0574322
    Class 5 => Probability = 0.00291371
    Class 6 => Probability = 0.380017
    [Debug] Attribute Label Found: Fear
    [Debug] Appended Attribute Label to attrString: Fear
    [Debug] Final Attribute String: Fear
    [attach_metadata_classifier] nvinfer->process_full_frame: 0
    [attach_metadata_classifier] Display Text: face 0 Fear
    [attach_metadata_classifier] Classifier metadata attached for object ID=0
    [DEBUG] Emotion SGIE Probe called.
    [Emotion SGIE Probe] Updated global emotion to: Fear
    [Tracker Probe] Object ID=0, class_id=0, left=850.1370849609375, top=155.57809448242188, width=411.9052734375, height=556.9366455078125

*** DeepStream: Launched RTSP Streaming at rtsp://172.30.5.232:8555/ds-test ***

nvstreammux: Successfully handled EOS for source_id=0
DEBUG: Pipeline stopped.
DEBUG: CSV log saved to gaze_emotion_logs_2025-03-12_13-30-29.csv
DEBUG: Video output saved to gaze_emotion_2025-03-12_13-30-29.mp4
[NvMultiObjectTracker] De-initialized

  • I don't see this error with the RTSP or file sources; it only happens when I run with the RTMP source. When I remove or unlink the file-sink branch from the pipeline, the RTMP source works fine,
    but I also want the output to be saved.

  • If RTMP can work here, how do I make it work together with the filesink?

  • If that is not feasible, kindly suggest another way to save the output video when the source is an RTMP stream (see the probe sketch below for one possible workaround).
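
One possible workaround for the "Buffer has no PTS" failure, a sketch rather than an official fix: attach a buffer probe just upstream of mp4mux (for example on the src pad of h264parse, assuming an H.264 recording branch as above) and copy the DTS into the PTS whenever the PTS is missing, which can happen with some RTMP/FLV sources.

```c
#include <gst/gst.h>

/* Buffer probe: if a buffer arrives without a PTS, fall back to the DTS so
 * mp4mux does not abort with "Buffer has no PTS".  Buffers with neither
 * timestamp are left untouched. */
static GstPadProbeReturn
fix_missing_pts (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  if (!GST_BUFFER_PTS_IS_VALID (buf) && GST_BUFFER_DTS_IS_VALID (buf)) {
    buf = gst_buffer_make_writable (buf);
    GST_BUFFER_PTS (buf) = GST_BUFFER_DTS (buf);
    GST_PAD_PROBE_INFO_DATA (info) = buf;
  }
  return GST_PAD_PROBE_OK;
}

/* Attach it on the parser that feeds mp4mux ("h264parse" is an assumption
 * about the recording branch): */
static void
add_pts_fixup (GstElement *h264parse)
{
  GstPad *srcpad = gst_element_get_static_pad (h264parse, "src");
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER,
                     fix_missing_pts, NULL, NULL);
  gst_object_unref (srcpad);
}
```

If even the DTS is missing on the RTMP path, the real fix lies further upstream, so this probe is mainly useful for narrowing the problem down.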

From the log, mp4mux reported the error "Buffer has no PTS". Could you simplify the code to narrow down this issue? For example, if you use "… → OSD → File Output (Record video)", does the app work well? If you use "… → nvstreammux → File Output (Record video)", does the app work well?
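
To take the inference elements out of the picture entirely, one quick isolation test is to record the RTMP source straight to MP4 with a small gst_parse_launch program. This is a sketch under the assumptions that the RTMP stream carries H.264 video and that the URL shown is a placeholder; num-buffers makes the source send EOS after a fixed number of buffers so mp4mux can finalize the file.

```c
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *err = NULL;
  /* rtmp://example.com/live/stream is a placeholder; num-buffers makes
   * rtmpsrc push EOS after ~1000 buffers so mp4mux can write its headers. */
  GstElement *pipeline = gst_parse_launch (
      "rtmpsrc location=rtmp://example.com/live/stream num-buffers=1000 ! "
      "flvdemux ! h264parse ! mp4mux ! filesink location=rtmp_test.mp4",
      &err);
  if (!pipeline) {
    g_printerr ("Failed to build pipeline: %s\n", err->message);
    g_clear_error (&err);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until EOS or an error, then shut down cleanly. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```

If this minimal recording already fails with the same PTS error, the problem is in how the RTMP source timestamps its buffers rather than in the inference branch.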


@fanzh Can you tell me the official, recommended way to save the annotated output video from a live RTSP or RTMP stream? I'll go with that.

What do you mean by "save the live RTSP or RTMP output annotated videos"? Do you want to save the live stream to a local file?

@fanzh Suppose inference is running on a live RTSP or RTMP stream. When I terminate the pipeline in the CLI with CTRL+C, the pipeline should stop gracefully and the output video should be saved properly, something like that.

I'm not sure about the right way to do it; I'm just describing the idea here.
So kindly tell me: what is the best way to save the video?

Please refer to the open-source /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-transfer-learning-app/deepstream_transfer_learning_app_main.cpp. To keep it simple, please use check_for_interrupt to monitor the Ctrl+C message, then call g_main_loop_quit.
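
A minimal sketch of that pattern (not the sample's exact code): a SIGINT handler sets a flag, a periodic GLib timeout playing the role of check_for_interrupt notices it and sends EOS into the pipeline so mp4mux can finalize the file, and the EOS bus message then quits the main loop. Apart from the GLib/GStreamer calls themselves, the names and wiring here are illustrative assumptions.

```c
#include <gst/gst.h>
#include <signal.h>

static volatile sig_atomic_t quit_requested = 0;
static GstElement *pipeline = NULL;
static GMainLoop *loop = NULL;

/* SIGINT handler: only set a flag; do no GStreamer work here. */
static void
intr_handler (int signum)
{
  quit_requested = 1;
}

/* Periodic check (the role check_for_interrupt plays in the sample app):
 * once Ctrl+C was pressed, push EOS into the pipeline so the muxer can
 * finalize the MP4; the main loop is quit later, on the EOS bus message. */
static gboolean
check_for_interrupt (gpointer user_data)
{
  if (quit_requested) {
    gst_element_send_event (pipeline, gst_event_new_eos ());
    return G_SOURCE_REMOVE;   /* stop polling; wait for EOS on the bus */
  }
  return G_SOURCE_CONTINUE;
}

static gboolean
bus_call (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_EOS:
    case GST_MESSAGE_ERROR:
      g_main_loop_quit (loop);
      break;
    default:
      break;
  }
  return TRUE;
}

/* Wiring (inside your existing setup code):
 *   signal (SIGINT, intr_handler);
 *   g_timeout_add (400, check_for_interrupt, NULL);
 *   gst_bus_add_watch (gst_element_get_bus (pipeline), bus_call, NULL);
 */
```

The key point for file saving is that EOS reaches mp4mux before the pipeline is set to NULL; quitting the main loop directly on Ctrl+C without EOS leaves the MP4 unfinalized.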

