Hi,
What is the recommended way to capture CSI camera frames on Jetson Orin Nano / Jetson devices using Python?
Thanks,
*** Please note that this reply is generated by LLM automatically ***
Capturing CSI camera frames on Jetson Orin Nano using Python involves several steps. Here’s a comprehensive guide to help you get started:
Prerequisites
Recommended Approach
nvgstcapture application: This is a command-line tool provided by NVIDIA to capture CSI camera frames. You can use it to capture frames and save them to a file.
nvgstcapture-1.0 --help
This will display the available options for capturing frames.
gst-python library: This is a Python wrapper for the GStreamer library, which is used by nvgstcapture. You can use this library to capture CSI camera frames in Python.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
# Initialize GStreamer
Gst.init(None)
# Create a pipeline to capture CSI camera frames
pipeline = Gst.Pipeline()
# Add a CSI camera source to the pipeline
src = Gst.ElementFactory.make('nvcamerasrc', 'src')
pipeline.add(src)
# Add a video sink to the pipeline to display the frames
sink = Gst.ElementFactory.make('nvvidconv', 'sink')
pipeline.add(sink)
# Link the elements in the pipeline
src.link(sink)
# Set the pipeline to playing state
pipeline.set_state(Gst.State.PLAYING)
# Capture frames and save them to a file
while True:
    # Get the next frame from the pipeline
    buffer = pipeline.get_buffer()
    # Save the frame to a file
    with open('frame.jpg', 'wb') as f:
        f.write(buffer.data)
    # Display the frame using OpenCV (optional)
    import cv2
    frame = cv2.imdecode(buffer.data, cv2.IMREAD_COLOR)
    cv2.imshow('Frame', frame)
    cv2.waitKey(1)
This code captures CSI camera frames and saves them to a file named frame.jpg. You can modify the code to display the frames using OpenCV or perform other processing tasks.
Comparison with Other Approaches
Other approaches to capturing CSI camera frames on Jetson Orin Nano using Python include:
pygst library: This is another Python wrapper for the GStreamer library. However, it is not as widely used as gst-python and may not be as well-maintained.
opencv-python library: This library provides a Python interface to OpenCV, which can be used to capture CSI camera frames. However, it may not provide the same level of control as gst-python or nvgstcapture.
In summary, the recommended approach to capturing CSI camera frames on Jetson Orin Nano using Python is to use the gst-python library or the nvgstcapture application. These provide a robust and efficient way to capture frames and perform processing tasks.
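The opencv-python route mentioned above can be sketched more concretely: when OpenCV is built with GStreamer support (as in the JetPack-provided builds), cv2.VideoCapture accepts a GStreamer pipeline string. A minimal sketch, assuming nvarguscamerasrc is the CSI source on current JetPack; the helper name csi_pipeline is illustrative, not an NVIDIA API:

```python
def csi_pipeline(sensor_id=0, width=1280, height=720, fps=30):
    """Build a GStreamer pipeline string for cv2.VideoCapture.

    Assumes nvarguscamerasrc (the Argus-based CSI source on current
    JetPack) and an OpenCV build with GStreamer support.
    """
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        # Convert from NVMM memory to CPU-accessible BGR for OpenCV
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! "
        # drop=true / max-buffers=1 keep only the latest frame
        "appsink drop=true max-buffers=1"
    )

# Usage on a Jetson with a CSI camera attached:
# import cv2
# cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
# ok, frame = cap.read()
```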
Hi,
I get the following error when I run the above code.
Traceback (most recent call last):
File "/home/ps_nithin/cam.py", line 13, in <module>
pipeline.add(src)
TypeError: Argument 1 does not allow None as a value
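This traceback suggests Gst.ElementFactory.make returned None, which happens when the requested plugin is not installed: nvcamerasrc is the older CSI source and is not shipped with recent JetPack releases, where the element is nvarguscamerasrc. A defensive sketch (the helper name make_element is hypothetical; the factory function is passed in so the guard itself needs no GStreamer installation):

```python
def make_element(make, factory_name, element_name):
    """Wrap an element-factory call and fail loudly instead of
    letting a silent None reach pipeline.add()."""
    elem = make(factory_name, element_name)
    if elem is None:
        raise RuntimeError(
            f"GStreamer element '{factory_name}' is not available; "
            "on current JetPack the CSI source is 'nvarguscamerasrc', "
            "not the older 'nvcamerasrc'"
        )
    return elem

# With gst-python this would be called as:
# src = make_element(Gst.ElementFactory.make, 'nvarguscamerasrc', 'src')
```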
Hi,
import gi
import numpy as np
import time
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject, GLib
from PIL import Image
def on_new_sample(sink, data):
    sample = sink.emit("pull-sample")
    print("here")
    if sample:
        buffer = sample.get_buffer()
        caps = sample.get_caps()
        # Extract frame data and convert to OpenCV format
        # This part depends on the format in your pipeline (e.g., RGB)
        # For BGR, you can directly create a NumPy array
        # Example for RGB format:
        width = caps.get_structure(0).get_value("width")
        height = caps.get_structure(0).get_value("height")
        # Get raw buffer data
        success, mapinfo = buffer.map(Gst.MapFlags.READ)
        if success:
            frame_data = np.ndarray(
                (height, width, 3),  # Assuming RGB format
                buffer=mapinfo.data,
                dtype=np.uint8
            ).copy()  # copy before unmap so the array owns its data
            buffer.unmap(mapinfo)
            # Process the frame
            Image.fromarray(frame_data).convert('RGB').save("gst.png")
        else:
            print("Failed to map buffer")
    return Gst.FlowReturn.OK
Gst.init(None)
pipeline_string = (
    "nvarguscamerasrc sensor-id=0 ! "  # Adjust sensor-id if multiple cameras
    "video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)NV12, framerate=(fraction)30/1 ! "
    "nvvidconv ! "
    "video/x-raw, format=(string)BGRx ! "
    "videoconvert ! "
    "video/x-raw, format=(string)RGB ! "
    "appsink name=mysink"
)
pipeline = Gst.parse_launch(pipeline_string)
appsink = pipeline.get_by_name("mysink")
appsink.set_property("emit-signals", True)
appsink.connect("new-sample", on_new_sample, None) # Define on_new_sample function to process frames
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)
    loop.quit()
The above code works, but the output is not real-time: it is not giving the current frame. What could be the reason?
Thanks,
The code works OK when frame processing is simple, but if I add time.sleep(3) after Image.fromarray(frame_data).convert('RGB').save("gst.png"), the lag happens. How can this be fixed?
Thanks,
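One likely explanation, offered as a sketch rather than a verified answer: appsink queues samples internally, so when the callback blocks (e.g. the time.sleep(3)), frames pile up and each pull-sample returns a stale frame. appsink exposes max-buffers and drop properties that keep only the newest sample; they can be set directly in the pipeline description. The helper name below is illustrative:

```python
def low_latency_appsink(name="mysink"):
    # max-buffers=1 keeps at most one queued sample in the appsink;
    # drop=true discards older samples instead of blocking upstream,
    # so a slow callback always sees the most recent frame.
    return f"appsink name={name} emit-signals=true max-buffers=1 drop=true"

# e.g. replace the final "appsink name=mysink" in the pipeline string
# with low_latency_appsink().
```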
Using
sample = appsink.emit("try-pull-sample", 100 * Gst.MSECOND)
helped.
Hi @ps_nithin !
Hope you are doing well!
Recently I developed a Python application for a media server using gst-python and I documented it on this wiki:
Media Server with GStreamer: Python-based Video Streaming, Recording, and Snapshot Solution.
It includes an example with live streaming, snapshots, and recording — all in Python.
You can adapt it to your use case by modifying the main_pipeline_description to use your camera as the input.
I thought it might be helpful as you work on your own Python application.
Best regards,
Nico
Embedded Software Engineer at ProventusNova
Hi,
Thanks for sharing,
I worked out the code.
But some other things got broken. On Jetson Nano JetPack 4.6.6, gst-python runs without issues with Python 3.6, but I need Python 3.8 for Numba. Numba runs in a conda environment in which I am not able to install Python 3.6, and I can't install Numba outside conda with pip as it fails. So now I am left unable to use gst-python.
Thanks,
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.