TAPPAS User Guide
Release 3.16.0
March 2022
1 Disclaimer and Proprietary Information Notice
1.1 Copyright
© 2022 Hailo Technologies Ltd (“Hailo”). All Rights Reserved. No part of this document may be reproduced
or transmitted in any form without the expressed, written permission of Hailo. Nothing contained in this
document should be construed as granting any license or right to use proprietary information for that
matter, without the written permission of Hailo. This version of the document supersedes all previous
versions.
Overview
Hailo’s TAPPAS (Template APPlications And Solutions) is an infrastructure designed for
easy development and deployment of high-performance edge applications based on
the industry-leading Hailo-8™ AI processor. Hailo TAPPAS is pre-packaged with a rich
set of applications built on top of state-of-the-art deep neural networks, demonstrating
Hailo’s best-in-class throughput and power efficiency. For users seeking to quickly
deploy their own networks on the Hailo-8, the TAPPAS provides an easy-to-use,
GStreamer-based template for application development.
Changelog
TAPPAS v3.16.0 (March 2022)
New Apps:
Hailo Century app - Demonstrates detection on one video file source over 6
different Hailo-8 devices
Python app - A classification app using a post-process written in Python
New Elements:
Tracking element "HailoTracker" - Adds tracking capabilities
Python element "HailoPyFilter" - Enables writing post-processes in Python
Yocto Hardknott is now supported
Dedicated apps for Raspberry Pi 4 Ubuntu
HailoCropper cropping bug fixes
HailoCropper now accepts cropping method as a shared object (.so)
Table of Contents
1. Getting Started
1. Prerequisites
2. Getting Started
3. Verify Hailo Installation
4. GStreamer
5. Hailo GStreamer Concepts
6. Where to Go From Here?
7. Terminology
8. Useful Links
2. GST-launch based X86 applications
1. Sanity pipeline
2. Detection Pipeline
3. Instance Segmentation Pipeline
4. Depth Estimation Pipeline
5. Detection and Depth Estimation Pipelines
6. Multi-Stream Pipeline
7. Pose Estimation Pipeline
8. Segmentation Pipeline
9. Facial Landmarks Pipeline
10. Face Detection Pipeline
11. Face Detection and Facial Landmarking Pipeline
12. Tiling Pipeline
13. Classification Pipeline
14. Multi-stream Multi-device Pipeline
15. Detection and Depth Estimation - networks switch App
16. Python Classification Pipeline
17. Century Pipeline
3. GST-launch based ARM applications
1. Detection Pipeline
4. GST-launch based Raspberry Pi applications
1. Sanity Pipeline
2. Detection
3. Depth Estimation
4. Multinetworks parallel
5. Pose Estimation
6. Face Detection
7. Classification
5. Native C++ applications
1. Detection
6. Hailo GStreamer Elements
7. HailoNet
1. HailoFilter
2. HailoFilter2
3. HailoPython
4. HailoOverlay
5. HailoMuxer
6. HailoDeviceStats
7. HailoAggregator
8. HailoCropper
9. HailoTileAggregator
10. HailoTileCropper
8. Installation
1. Docker Install
2. Manual Install
3. Yocto
4. Cross Compile
9. Further Reading
1. GStreamer Framework
2. Debugging with GstShark
3. Debugging with Gst-Instruments
10. Writing Your Own Postprocess
1. Getting Started
2. Compiling and Running
3. Filter Basics
4. Next Steps (Drawing)
11. Writing Your Own Python Postprocess
1. Overview
2. Getting Started
3. Next Steps (Drawing)
Hailo-8 device - Check that your board is recognized by opening a terminal and running: lspci -d 1e60:. You should get in response: bb:dd:ff Co-processor: Hailo Technologies Ltd. Hailo-8 AI Processor (rev 01)
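If you want to script this check, a small helper can grep the lspci output. This is a sketch, not part of TAPPAS; the `hailo8_present` function is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: succeeds if the given `lspci -d 1e60:` output
# contains a Hailo-8 device line.
hailo8_present() {
    printf '%s\n' "$1" | grep -q 'Hailo-8 AI Processor'
}

if hailo8_present "$(lspci -d 1e60: 2>/dev/null)"; then
    echo "Hailo-8 detected"
else
    echo "Hailo-8 not found - check the PCIe connection and the driver"
fi
```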
Getting started
X86
We provide three installation methods:
The simple and recommended installation method is detailed in the Docker install guide
If you already have a pre-built Docker image, follow our instructions for running the TAPPAS container from a pre-built Docker image
Manual installation - follow the manual install guide
Arm
We provide two installation methods:
Yocto - integration of the Hailo layers into an embedded BSP. Read more about Yocto installation
Cross compilation - cross compile the Hailo GStreamer plugins and post-processes. Read more about the cross compilation
GStreamer
GStreamer is a framework for creating streaming media applications.
Hailo Concepts
Network encapsulation - Since a configured network group exposes only input and output layers, a GstHailoNet is associated with a "network" by its configured input and output pads.
GStreamer-Hailo decoupling - Application code uses the Hailo API directly and is therefore GStreamer-independent. This lets us build and develop the NN and post-processing functionality in a controlled environment (with all modern IDE and debugging capabilities).
Context control - Our elements are contextless and thus leave the context (thread) control to the pipeline builder.
GStreamer reuse - Our pipelines use as many off-the-shelf GStreamer elements as possible.
Hailo Elements
hailonet - Element for sending data to and receiving data from the Hailo-8 chip
hailofilter - Element that lets the user apply a postprocess or drawing operation to a frame and its tensors
hailomuxer - Muxer element used for multi-Hailo-8 setups
hailodevicestats - Element that samples power and temperature
hailopython - Element that lets the user apply a postprocess or drawing operation to a frame and its tensors via Python
hailoaggregator - Element designed for applications with cascading networks. It has 2 sink pads and 1 source
hailocropper - Element designed for applications with cascading networks. It has 1 sink and 2 sources
hailotileaggregator - Element designed for applications with tiles. It has 2 sink pads and 1 source
hailotilecropper - Element designed for applications with tiles. It has 1 sink and 2 sources
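As a sketch of how these elements combine, a minimal inference pipeline places hailonet between a decoded video source and a hailofilter. This is an illustrative sketch only, assuming a connected Hailo-8 device; `video.mp4`, `$hef_path` and `$POSTPROCESS_SO` are placeholders, not values from a TAPPAS script:

```shell
# Sketch: decode a file, run inference on the Hailo-8, apply a post-process,
# and display the result with its framerate.
gst-launch-1.0 \
    filesrc location=video.mp4 ! decodebin ! videoconvert ! \
    queue ! \
    hailonet hef-path=$hef_path is-active=true qos=false ! \
    queue ! \
    hailofilter so-path=$POSTPROCESS_SO qos=false ! \
    videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink sync=true text-overlay=false
```

The queues before and after hailonet decouple the inference thread from decoding and post-processing, which is the pattern the bundled apps follow.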
Terminology
NVR (Network Video Recorder)
NVR is a specialized hardware and software solution used in IP (Internet Protocol) video
surveillance systems. In most cases, the NVR is intended for obtaining video streams
from the IP cameras (via the IP network) for the purpose of storage and subsequent
playback.
Useful Links
Some useful links and GStreamer debugging tools:
Writing your own postprocess - A detailed guide about how to write your own postprocess
Debugging tips - Debugging tips from our experience
Cross compile - A cross-compilation guide
GstShark - Profiling tool for GStreamer
GstInstruments - Basic debugging tool for GStreamer
1. Sanity Pipeline - Helps you verify that all the required components are installed
correctly
2. Detection - single-stream object detection pipeline on top of GStreamer using the
Hailo-8 device.
3. Depth Estimation - single-stream depth estimation pipeline on top of GStreamer
using the Hailo-8 device.
4. Multinetworks parallel - single-stream multi-networks pipeline on top of
GStreamer using the Hailo-8 device.
5. Instance segmentation - single-stream instance segmentation on top of
GStreamer using the Hailo-8 device.
6. Multi-stream detection - Multi-stream object detection (up to 8 RTSP cameras into one Hailo-8 chip).
7. Pose Estimation - Human pose estimation using centerpose network.
8. Segmentation - Semantic segmentation using resnet18_fcn8 network on top of
GStreamer using the Hailo-8 device.
9. Facial Landmarks - Facial landmarking application.
10. Face Detection - Face detection application.
11. Face Detection and Facial Landmarking Pipeline - Face detection and then facial
landmarking.
12. Tiling - Single scale tiling detection application.
13. Classification - Classification app using resnet_v1_50 network.
14. Multi-stream Multi-device - Demonstrates Hailo's capabilities using multiple-chips
and multiple-streams.
15. Detection and Depth Estimation - networks switch App - Demonstrates Hailonet
network-switch capability.
16. Python Classification Pipeline - Classification app using resnet_v1_50 with
python post-processing.
17. Century Pipeline - Demonstrates detection on one video file source over 6 different Hailo-8 devices.
Sanity pipeline
Overview
The sanity apps help you verify that all the required components have been installed successfully.
First, run sanity_gstreamer.sh and make sure that the displayed image matches the one shown below.
Sanity GStreamer
This app should launch first.
NOTE: Open the source code in your preferred editor to see how simple this app
is.
To run the app, cd to the sanity_pipeline directory and launch it:
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/sanity_pipeline
./sanity_gstreamer.sh
If the output is similar to the image shown above, you are good to go to the next
verification phase!
Detection Pipeline
Overview:
detection.sh demonstrates detection on one video file source and verifies Hailo’s
configuration. This is done by running a single-stream object detection pipeline
on top of GStreamer using the Hailo-8 device.
Options
Supported Networks:
'YoloV5' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/yolov5m.yaml
'YoloV4' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/yolov4_leaky.yaml
'YoloV3' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/yolov3_gluon.yaml
'Mobilenet_ssd' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/ssd_mobilenet_v1.yaml
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/detection
./detection.sh
How it works
This section is optional and provides a drill-down into the implementation of the
detection app with a focus on explaining the GStreamer pipeline. This section uses
yolov5 as an example network so network input width, height, and hef name are set
accordingly.
gst-launch-1.0 \
    filesrc location=$video_device ! decodebin ! videoconvert ! \
    videoscale ! video/x-raw,width=640,height=640,pixel-aspect-ratio=1/1 ! \
    queue ! \
    hailonet hef-path=$hef_path debug=False is-active=true qos=false batch-size=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter function-name=yolov5 so-path=$POSTPROCESS_SO qos=false debug=False ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$DRAW_POSTPROCESS_SO qos=false debug=False ! \
    videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts to the required format.
videoscale ! video/x-raw,pixel-aspect-ratio=1/1 - Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 640x640 through the caps negotiation of hailonet.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
Each hailofilter performs a given post-process. In this case the first performs the Yolov5m post-process and the second performs box drawing.
videoconvert ! fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters} - Applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
(gst-top profiling graph of the detection pipeline. The heaviest elements are videoconvert0 (27.0%), hailosend (22.9%), videoconvert1 (17.3%), and hailorecv (8.8%).)
Depth Estimation
depth_estimation.sh demonstrates depth estimation on one video file source. This is done by running a single-stream depth estimation pipeline on top of GStreamer using the Hailo-8 device.
Options
Run
cd /local/workspace/tappas/apps/gstreamer/x86/depth_estimation
./depth_estimation.sh
Model
fast_depth, with an input resolution of 224x224x3.
How it works
This section is optional and provides a drill-down into the implementation of the depth
estimation app with a focus on explaining the GStreamer pipeline. This section uses
fast_depth as an example network so network input width, height, hef name, are set
accordingly.
Specifies the location of the video used, then decodes and converts to the required format.
Re-scales the video dimensions to fit the input of the network. In this case it crops the video and rescales it to 224x224 through the caps negotiation of hailonet.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
videoconvert ! fpsdisplaysink video-sink=ximagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters} - Applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
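The script's exact pipeline is not reproduced here; it follows the same pattern as the detection pipeline above with fast_depth-sized caps. A minimal sketch, assuming the same variable conventions as the other apps (`$hef_path` and `$DRAW_POSTPROCESS_SO` are placeholders):

```shell
# Sketch: mirrors the detection pipeline, rescaling to the 224x224 input
# of fast_depth and drawing the depth map with a hailofilter.
gst-launch-1.0 \
    filesrc location=$video_device ! decodebin ! videoconvert ! \
    videoscale ! video/x-raw,width=224,height=224,pixel-aspect-ratio=1/1 ! \
    queue ! \
    hailonet hef-path=$hef_path is-active=true qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$DRAW_POSTPROCESS_SO qos=false ! \
    videoconvert ! \
    fpsdisplaysink video-sink=ximagesink name=hailo_display sync=true text-overlay=false
```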
Instance Segmentation Pipeline
Overview:
instance_segmentation.sh demonstrates instance segmentation on one video file
source and verifies Hailo’s configuration. This is done by running a single-stream
instance segmentation pipeline on top of GStreamer using the Hailo-8 device.
Options
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/instance_segmentation
./instance_segmentation.sh
How it works
This section is optional and provides a drill-down into the implementation of the
instance_segmentation app with a focus on explaining the GStreamer pipeline. This
section uses yolact_regnetx_800mf_fpn_20classes as an example network so
network input width, height, and hef name are set accordingly.
gst-launch-1.0 \
    filesrc location=$video_device ! decodebin ! videoconvert ! \
    videoscale ! video/x-raw,width=512,height=512,pixel-aspect-ratio=1/1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path debug=False is-active=true qos=false batch-size=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$POSTPROCESS_SO qos=false debug=False ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$DRAW_POSTPROCESS_SO qos=false debug=False ! \
    videoconvert ! \
    fpsdisplaysink video-sink=ximagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts to the required format.
videoscale ! video/x-raw,pixel-aspect-ratio=1/1 - Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 512x512 through the caps negotiation of hailonet.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
Each hailofilter performs a given post-process. In this case the first performs the yolact post-process and the second performs box and segmentation mask drawing.
videoconvert ! fpsdisplaysink video-sink=ximagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters} - Applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
Detection and Depth Estimation Pipeline
Run
cd /local/workspace/tappas/apps/gstreamer/x86/multinetworks_parallel
./detection_and_depth_estimation.sh
Model
fast_depth, with an input resolution of 224x224x3.
mobilenet_ssd, with an input resolution of 300x300x3.
How it works
This section is optional and provides a drill-down into the implementation of the app
with a focus on explaining the GStreamer pipeline. This section uses fast_depth as an
example network so network input width, height, hef name, are set accordingly.
gst-launch-1.0 \
    $source_element ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    videobox autocrop=true ! video/x-raw, width=1200, height=700, pixel-aspect-ratio=1/1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    videoscale ! video/x-raw, width=300, height=300 ! queue ! \
    tee name=t ! \
    videoscale ! video/x-raw, width=224, height=224 ! videoconvert ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path debug=False is-active=true inputs=$depth_estimation_inputs outputs=$depth_estimation_outputs qos=false batch-size=1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$depth_estimation_draw_so qos=false debug=False ! \
    videoscale ! video/x-raw, width=300, height=300 ! \
    comp.sink_0 \
    t. ! \
    videoconvert ! \
    hailonet hef-path=$hef_path debug=False is-active=true inputs=$detection_inputs outputs=$detection_outputs qos=false batch-size=1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$detection_post_so function-name=mobilenet_ssd_merged qos=false debug=False ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$detection_draw_so qos=false debug=False ! \
    comp.sink_1 \
    compositor name=comp start-time-selection=0 $compositor_locations ! queue ! videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts to the required format.
Re-scales the video dimensions to fit the input of the network. In this case it crops the video and rescales it to 224x224 through the caps negotiation of hailonet.
tee name=t - Splits the pipeline into two branches: one for mobilenet_ssd and the other for fast_depth.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
NOTE: We pre-define the input and output layers of each network, passing them through the inputs and outputs arguments.
compositor ! fpsdisplaysink video-sink=ximagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters} - Composites both branches and applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
Multi-Stream Pipeline
Overview
This GStreamer pipeline demonstrates object detection on 8 camera streams over the RTSP protocol.
All the streams are processed in parallel through the decode and scale phases, and enter the Hailo device frame by frame.
Afterwards, the postprocess and drawing phases add the classified objects and bounding boxes to each frame.
The last step matches each frame back to its respective stream and outputs all of them to the display.
Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between endpoints.
Prerequisites
TensorPC
Ubuntu 18.04
RTSP cameras; we recommend using AXIS M10 network cameras
Hailo-8 device connected via PCIe
Preparations
1. Before running, the RTSP camera sources must be configured. Open multistream_pipeline.sh in edit mode with your preferred editor and configure the eight sources to match your own cameras.
2. Run the app:
./multistream_pipeline.sh
3. --num-of-sources sets the number of RTSP sources to use. The default and recommended value in this pipeline is 8 sources.
4. --debug uses gst-top to print the time- and memory-consuming elements and saves the results as text and a graph. Open pipeline_report.txt to view the full report showing all elements; your report should be similar to this:
NOTE: When the debug flag is used and the app is running inside a Docker container, exit the app by typing Ctrl+C in order to save the results (due to Docker X11 display communication issues).
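Combining the flags described above, an invocation might look like this (a sketch; check the script itself for the authoritative flag list):

```shell
# Run the multi-stream app with 4 RTSP sources and gst-top profiling enabled.
./multistream_pipeline.sh --num-of-sources 4 --debug
```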
Model
YOLOv5 is a modern object detection architecture based on the YOLOv3 meta-architecture with a CSPNet backbone. YOLOv5 was released in May 2020 with a very efficient design and SoTA accuracy results on the COCO benchmark.
rtspsrc - Makes a connection to an RTSP server and reads the data. Used as a source to get the video stream from the RTSP cameras.
vaapidecodebin - Video decoding and scaling. This element uses VA-API hardware acceleration to improve the pipeline performance. Video Acceleration API (VA-API) is an open-source API made by Intel that allows applications to use hardware video acceleration capabilities, usually provided by the GPU. It is implemented by the libva library combined with a hardware-specific driver.
In this pipeline, the bin is responsible for decoding the h264 format and scaling the frame to 640x640. It contains a vaapidecode element, a queue, a capsfilter, and a vaapipostproc element.
funnel - Takes multiple input sinks and outputs one source. An N-to-1 funnel attaches a stream ID to each stream, which can later be used to demux back into separate streams. This lets you queue frames from multiple streams and send them to the Hailo device one at a time.
fpsdisplaysink - Outputs video to the screen and displays the current and average framerate.
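The funnel/demux pattern can be sketched with stock GStreamer elements. In this sketch, videotestsrc elements stand in for the RTSP cameras and identity stands in for the hailonet inference stage; it illustrates the pattern rather than the app's exact pipeline:

```shell
# Two sources are serialized into one stream by the funnel, then
# streamiddemux routes buffers back to per-stream branches by stream ID.
gst-launch-1.0 \
    funnel name=fun ! identity ! streamiddemux name=sid \
    videotestsrc pattern=ball ! queue ! fun.sink_0 \
    videotestsrc pattern=smpte ! queue ! fun.sink_1 \
    sid.src_0 ! queue ! autovideosink \
    sid.src_1 ! queue ! autovideosink
```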
Entire pipeline
(gst-top profiling graph of the full multi-stream pipeline: eight rtspsrc/rtpbin branches feed vaapidecodebin instances, which are funneled into a single hailosend/hailorecv pair, post-processed by two hailofilter elements, demuxed back by stream ID, and composited for display. The heaviest elements are hailosend0 (22.8%), hailorecv0 (8.6%), the compositor (8.2%), hailo_pre_infer_q_0 (8.2%), and hailofilter0 (5.4%).)
Overview:
hailo_pose_estimation.sh demonstrates human pose estimation on one video file
source and verifies Hailo’s configuration. This is done by running a single-stream
pose estimation pipeline on top of GStreamer using the Hailo-8 device.
Options
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/pose_estimation
./hailo_pose_estimation.sh
How it works
This section is optional and provides a drill-down into the implementation of the pose_estimation app with a focus on explaining the GStreamer pipeline. This section uses centerpose_regnetx_1.6gf_fpn as an example network, so the network input width, height, and hef name are set accordingly.
30/169 Confidential and Proprietary | Copyright © 2022– Hailo Technologies Ltd.
Hailo Tappas | User Guide
gst-launch-1.0 \
    filesrc location=$video_device ! decodebin ! videoconvert ! \
    videoscale ! video/x-raw,width=640,height=640,pixel-aspect-ratio=1/1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path debug=False is-active=true qos=false batch-size=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter function-name=yolov5 so-path=$POSTPROCESS_SO qos=false debug=False ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$DRAW_POSTPROCESS_SO qos=false debug=False ! \
    videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts it to the required format.
2. videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! \
Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 640x640 using the caps negotiation of hailonet.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
Each hailofilter performs a given post-process. In this case the first performs the centerpose post-process and the second performs box and skeleton drawing.
6. videoconvert ! \
fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
Applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
Segmentation Pipelines
Overview
semantic_segmentation.sh demonstrates semantic segmentation on one video file
source. This is done by running a single-stream object semantic segmentation
pipeline on top of GStreamer using the Hailo-8 device.
Options
Supported Network
'fcn8_resnet_v1_18' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/fcn8_resnet_v1_18.yaml
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/segmentation
./semantic_segmentation.sh
How it works
This section is optional and provides a drill-down into the implementation of the
semantic segmentation app with a focus on explaining the GStreamer pipeline. This
section uses resnet18_fcn8_fhd as an example network so network input width,
height, hef name, are set accordingly.
Model
fcn8_resnet_v1_18 in resolution of 1920x1024x3.
Numeric accuracy: 65.18 mIoU.
Pre-trained on Cityscapes using GluonCV and a ResNet-18 FCN8 architecture.
gst-launch-1.0 \
    filesrc location=$video_device ! decodebin ! \
    videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! videoconvert ! \
    queue leaky=no max-size-buffers=13 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path debug=False is-active=true qos=false batch-size=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$DRAW_POSTPROCESS_SO qos=false debug=False ! \
    videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
2. videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! \
Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 1920x1024 using the caps negotiation of hailonet.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
6. videoconvert ! \
fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
Applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
Overview:
facial_landmarks.sh demonstrates facial landmarking on one video file source and
verifies Hailo’s configuration. This is done by running a single-stream facial
landmarking pipeline on top of GStreamer using the Hailo-8 device.
Options
./facial_landmarks.sh
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/facial_landmarks/
./facial_landmarks.sh
How it works
This section is optional and provides a drill-down into the implementation of the facial landmarks app with a focus on explaining the GStreamer pipeline. This section uses tddfa_mobilenet_v1 as an example network, so the network input width, height, and hef name are set accordingly.
gst-launch-1.0 \
    $source_element ! decodebin ! \
    videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! videoconvert ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
Specifies the location of the video used, then decodes and converts it to the required format.
2. videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! \
Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 120x120 using the caps negotiation of hailonet.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
6. videoconvert ! \
fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true
Applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
Overview:
The purpose of face_detection.sh is to demonstrate face detection on one video file source and to verify Hailo's configuration. This is done by running a single-stream face detection pipeline on top of GStreamer using the Hailo-8 device.
Options
./face_detection.sh
--network sets which network to use. Choose from [lightface, retinaface]; the default is lightface. This sets the hef file to use, the hailofilter function to use, and the scaling of the frame to match the width/height input dimensions of the network.
--input is an optional flag, a path to the video displayed (default is face_detection.mp4).
--show-fps is an optional flag that enables printing FPS on screen.
--print-gst-launch is a flag that prints the ready gst-launch command without running it.
Supported Networks
'retinaface' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/retinaface_mobilenet_v1.yaml
'lightface' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/lightface_slim.yaml
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/face_detection/
./face_detection.sh
How it works
This section is optional and provides a drill-down into the implementation of the face detection app with a focus on explaining the GStreamer pipeline. This section uses lightface_slim as an example network, so the network input width, height, and hef name are set accordingly.
gst-launch-1.0 \
    filesrc location=$input_source ! decodebin ! \
    videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! videoconvert ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path debug=False is-active=true qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter function-name=$network_name so-path=$POSTPROCESS_SO qos=false debug=False ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$DRAW_POSTPROCESS_SO qos=false debug=False ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts it to the required format.
Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 320x240 using the caps negotiation of hailonet.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
NOTE: qos must be disabled for hailonet since dropping frames may cause these elements to run out of alignment.
6. videoconvert ! \
fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true
Applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
Options
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/cascading_networks
./face_detection_and_landmarks.sh
Model
lightface_slim in resolution of 320x240x3 - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/lightface_slim.yaml
tddfa_mobilenet_v1 in resolution of 120x120x3 - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/tddfa_mobilenet_v1.yaml
How it works
This section is optional and provides a drill-down into the implementation of the app with a focus on explaining the `GStreamer` pipeline. This section uses `lightface_slim` as an example network, so the network input width, height, and hef name are set accordingly.
FACE_DETECTION_PIPELINE="videoscale qos=false ! \
    queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    hailonet net-name=joined_lightface_slim_tddfa_mobilenet_v1/lightface_slim \
    hef-path=$hef_path is-active=true qos=false ! \
    queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$detection_postprocess_so function-name=lightface qos=false ! \
    queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0"
gst-launch-1.0 \
    $source_element ! \
    tee name=t hailomuxer name=hmux \
    t. ! queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! hmux. \
    t. ! $FACE_DETECTION_PIPELINE ! hmux. \
    hmux. ! queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    hailocropper internal-offset=$internal_offset name=cropper hailoaggregator name=agg \
    cropper. ! queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! agg. \
    cropper. ! $FACIAL_LANDMARKS_PIPELINE ! agg. \
    agg. ! queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$landmarks_draw_so qos=false ! \
    queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts it to the required format.
Splits into two threads - one for face detection, the other for passing through the original frame. The two threads are merged back using hailomuxer, which takes the frame from its first sink and adds the metadata from the other sink.
4. t. ! $FACE_DETECTION_PIPELINE ! hmux. \
videoscale qos=false ! \
Scales the picture to a resolution negotiated with the hailonet down the pipeline, according to the resolution required by the hef file.
Links the hailomuxer to a queue and defines the cascading network elements hailocropper and hailoaggregator. hailocropper splits the pipeline into two threads: the first passes the original frame, while the other passes crops of the original frame, created by hailocropper according to the detections added to the buffer by the prior hailofilter post-processing. These buffers are also scaled to the following hailonet by caps negotiation. The hailoaggregator receives the original frame, waits for all related cropped buffers, adds all related metadata onto the original frame, and sends it forward.
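The cropping step above can be sketched in Python. This is an illustrative model only - `detection_to_crop` is a hypothetical helper, not hailocropper's actual implementation - and it assumes detections carry normalized box coordinates in the [0, 1] range:

```python
def detection_to_crop(det, frame_w, frame_h):
    """Map a normalized detection box (xmin, ymin, xmax, ymax in [0, 1])
    to integer pixel crop coordinates on the original frame."""
    xmin, ymin, xmax, ymax = det
    # Clamp to the frame so out-of-range detections still yield a valid crop.
    left = max(0, int(xmin * frame_w))
    top = max(0, int(ymin * frame_h))
    right = min(frame_w, int(xmax * frame_w))
    bottom = min(frame_h, int(ymax * frame_h))
    return left, top, right, bottom

# Example: a face detected in the upper-left quadrant of a 1280x720 frame
print(detection_to_crop((0.10, 0.20, 0.30, 0.60), 1280, 720))
```

Each resulting rectangle is then scaled to the landmarks network input (120x120) by the downstream caps negotiation, as described above.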
The first part of the cascading network pipeline passes the original frame on the bypass pads to hailoaggregator.
The second part of the cascading network pipeline performs a second network on all detections, which are cropped and scaled to the resolution required by the HEF in the hailonet. FACIAL_LANDMARKS_PIPELINE consists of:
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
Aggregates all detected faces with their landmarks on the original frame, and draws them over the frame using the hailofilter with a specific drawing function.
Tiling Pipeline
hailotilecropper which splits the frame into tiles by separating the frame into rows and columns (given as parameters to the element).
hailonet which performs inference on each tile on the Hailo-8 device.
hailofilter which performs the post-process - parses the tensor output into detections.
hailotileaggregator which aggregates the cropped tiles and stitches them back to the original resolution.
Model
ssd_mobilenet_v1_visdrone in resolution of 300x300 - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/ssd_mobilenet_v1_visdrone.yaml
The VisDrone dataset consists of only small objects, which we can assume are always confined within a single tile. As such, it is better suited for running single-scale tiling with little overlap and without additional filtering.
Options
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/tiling
./tiling.sh
How it works
filesrc - source of the pipeline reads the video file and decodes it.
TILE_CROPPER_ELEMENT="hailotilecropper internal-offset=$internal_offset name=cropper \
    tiles-along-x-axis=$tiles_along_x_axis tiles-along-y-axis=$tiles_along_y_axis \
    overlap-x-axis=$overlap_x_axis overlap-y-axis=$overlap_y_axis"
hailotilecropper splits the pipeline into two threads: the first passes the original frame, while the other passes the crops of the original frame, created by hailotilecropper according to the given number of tiles per x/y axis and the overlap parameters. The buffers are also scaled to the following hailonet by caps negotiation. The hailotileaggregator receives the original frame, waits for all related cropped buffers, adds all related metadata onto the original frame, and sends everything together once aggregated. It also performs an NMS process to merge detections on overlapping tiles.
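The tile geometry described above can be modeled with a small Python sketch. `tile_grid` is a hypothetical helper, not the hailotilecropper source, and it assumes the overlap parameters are fractions of a single tile shared by adjacent tiles:

```python
def tile_grid(frame_w, frame_h, tiles_x, tiles_y, overlap_x, overlap_y):
    """Compute (left, top, right, bottom) pixel rectangles for a tiles_x by
    tiles_y grid where neighbouring tiles share an overlap_x/overlap_y
    fraction of a tile (illustrative model of hailotilecropper)."""
    # Solve for the tile size so the grid exactly covers the frame.
    tile_w = frame_w / (tiles_x - (tiles_x - 1) * overlap_x)
    tile_h = frame_h / (tiles_y - (tiles_y - 1) * overlap_y)
    step_x = tile_w * (1 - overlap_x)
    step_y = tile_h * (1 - overlap_y)
    tiles = []
    for row in range(tiles_y):
        for col in range(tiles_x):
            left = round(col * step_x)
            top = round(row * step_y)
            tiles.append((left, top, round(left + tile_w), round(top + tile_h)))
    return tiles

# 2x2 grid with 8% overlap on a 1280x720 frame
for t in tile_grid(1280, 720, 2, 2, 0.08, 0.08):
    print(t)
```

Each rectangle is then scaled to the 300x300 network input by the caps negotiation with the following hailonet.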
DETECTION_PIPELINE="\
    hailonet hef-path=$hef_path device-id=$hailo_bus_id is-active=true qos=false batch-size=1 ! \
    queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! \
    hailofilter2 function-name=$postprocess_func_name so-path=$detection_postprocess_so qos=false ! \
    queue leaky=no max-size-buffers=3 max-size-bytes=0 max-size-time=0"
Focusing on the detection part: hailonet performs inference on the Hailo-8 device, running mobilenet_v1_visdrone.hef for each tile crop. hailofilter performs the mobilenet post-process and creates the detection objects to pass through the pipeline. hailotileaggregator sends the frame forward into the hailooverlay, which draws the detections over the frame.
The multi-scale tiling strategy also allows us to filter the correct detections over several scales. For example, we use 3 sets of tiles at 3 different scales:
In this mode we use 1 + 4 + 9 = 14 tiles for each frame. We can simplify the process by highlighting the main tasks: crop → inference → post-process → aggregate → remove exceeded boxes → remove large landscape → perform NMS
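The final NMS stage, which merges duplicate detections produced on overlapping tiles, can be sketched as greedy IoU-based suppression. This is an illustrative sketch only; the actual merge logic lives inside hailotileaggregator:

```python
def iou(a, b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, threshold=0.3):
    """Greedy NMS: keep the highest-scoring box, drop overlapping duplicates.
    Each detection is (score, (xmin, ymin, xmax, ymax)) in frame coordinates."""
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, k[1]) < threshold for k in kept):
            kept.append((score, box))
    return kept

# Two detections of the same object from adjacent tiles, plus one distinct one
dets = [(0.9, (100, 100, 200, 200)),
        (0.8, (110, 105, 205, 198)),
        (0.7, (400, 400, 450, 450))]
print(nms(dets))
```

In this example the 0.8-score box heavily overlaps the 0.9-score box, so only the two distinct objects survive.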
Model
Options
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/tiling
./multi_scale_tiling.sh
How it works
As multi-scale tiling is almost identical to single-scale tiling, only the differences are described here:
TILE_CROPPER_ELEMENT="hailotilecropper internal-offset=$internal_offset name=cropper \
    tiling-mode=1 scale-level=$scale_level"
Classification Pipeline
Overview
The purpose of classification.sh is to demonstrate classification on one video file source. This is done by running a single-stream object classification pipeline on top of GStreamer using the Hailo-8 device.
Options
Supported Networks:
'resnet_v1_50' - https://github.com/hailo-
ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/resnet_v1_50.yaml
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/classification
./classification.sh
How it works
This section is optional and provides a drill-down into the implementation of the
classification app with a focus on explaining the GStreamer pipeline. This section
uses resnet_v1_50 as an example network so network input width, height, and hef
name are set accordingly.
gst-launch-1.0 \
    filesrc location=$input_source ! decodebin ! videoconvert ! \
    videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path debug=False is-active=true qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$POSTPROCESS_SO qos=false debug=False ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailooverlay qos=false ! \
    videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts it to the required format.
2. videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! \
Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 112x112 using the caps negotiation of hailonet. hailonet extracts the needed resolution from the HEF file during the caps negotiation, and makes sure that this resolution is passed from the previous elements.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
6. hailooverlay qos=false ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
Performs the given draw process; in this case, it draws the top-1 class name over the image.
7. videoconvert ! \
fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
Applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
Overview
This GStreamer pipeline demonstrates object detection on 8 camera streams over the RTSP protocol. This pipeline also demonstrates using two Hailo-8 devices in parallel.
All the streams are processed in parallel through the decode and scale phases, and enter the Hailo devices frame by frame. Each Hailo device is in charge of one inference task (one for yolov5 and the other for centerpose).
Afterwards, the post-process and drawing phases add the classified objects and bounding boxes to each frame.
The last step is to match each frame back to its respective stream and output all of them to the display.
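The fan-in/fan-out matching described above can be modeled in Python. This is an illustrative sketch with hypothetical helper names; the real pipeline does this with GStreamer elements and per-buffer stream metadata:

```python
from collections import defaultdict

def interleave(streams):
    """Tag each frame with its stream id and merge round-robin, the way the
    eight decoded streams feed one shared inference path (illustrative model)."""
    merged = []
    for t in range(max(len(frames) for frames in streams.values())):
        for sid, frames in streams.items():
            if t < len(frames):
                merged.append((sid, frames[t]))
    return merged

def route_back(merged):
    """Dispatch processed frames back to per-stream outputs by their tag."""
    out = defaultdict(list)
    for sid, frame in merged:
        out[sid].append(frame)
    return dict(out)

streams = {0: ["a0", "a1"], 1: ["b0", "b1"]}
# Round-tripping through the shared path preserves each stream's frame order.
assert route_back(interleave(streams)) == streams
```

The key property is that the stream id travels with every buffer, so frames can be demultiplexed back to the correct display after shared inference.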
Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between endpoints.
Prerequisites
TensorPC
Ubuntu 18.04
RTSP Cameras, We recommend using: AXIS M10 Network Cameras
Two Hailo-8 devices connected via PCIe
Preparations
1. Before running, configuration of the RTSP camera sources is required. Open rtsp_detection_and_pose_estimation.sh in edit mode with your preferred editor. Configure the eight sources to match your own cameras.
./rtsp_detection_and_pose_estimation.sh
3. --num-of-sources sets the number of RTSP sources to use. The default and recommended value in this pipeline is 8 sources.
4. --debug uses gst-top to print the most time- and memory-consuming elements, and saves the results as text and a graph.
Open the pipeline_report.txt to view the full report showing all elements. Your
report should be similar to this:
Overview:
detection_and_depth_estimation_networks_switch demonstrates switching between two networks - a detection network and a depth estimation network - on one video source using one Hailo-8 device. The switch is done every frame, so all frames are inferred by both networks. This is a C++ executable that runs a GStreamer application with extra logic applied through probes.
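The per-frame switching logic can be modeled with a short Python sketch. This is purely illustrative; the real application is C++ and toggles the hailonet `is-active` property from pad probes, and the names below are hypothetical:

```python
def run_network_switch(frames, networks):
    """Per-frame network switching: for every frame, each network takes its
    turn on the single shared device, so both networks infer all frames
    (illustrative model of the probe logic, not HailoRT API calls)."""
    results = {name: [] for name in networks}
    for frame in frames:
        for name, infer in networks.items():
            # The real app activates one hailonet, infers, then deactivates
            # it before handing the device to the other network.
            results[name].append(infer(frame))
    return results

nets = {"detection": lambda f: f"det({f})",
        "depth": lambda f: f"depth({f})"}
out = run_network_switch(["f0", "f1"], nets)
print(out)
```

Note how every frame appears in both result lists, matching the statement that all frames are inferred by both networks.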
Options
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/network_switch/detection_and_depth_estimation_networks_switch
Pipeline diagram
NOTE: queue elements are not shown for clarity. Queue positions can be observed here:
[gst-top pipeline graph: per-element Time, CPU%, and memory figures for the network-switch pipeline (filesrc → decodebin → videoconvert → tee, then per-branch videoscale, hailosend/hailorecv around the shared inference queue, hailofilter, videoconvert, and ximagesink for the detection and depth estimation displays). The graph layout is not reproducible in text form.]
Models
YOLOv5
YOLOv5 is a modern object detection architecture based on the YOLOv3 meta-architecture with a CSPNet backbone. YOLOv5 was released in May 2020 with a very efficient design and state-of-the-art accuracy results on the COCO benchmark.
This pipeline uses a specific variant of the YOLOv5 architecture - yolov5m, which stands for medium-sized networks.
Centerpose
vaapidecodebin video decoding and scaling - this element uses VA-API hardware acceleration to improve the pipeline performance. In this pipeline, the bin is responsible for decoding the h264 format and scaling the frame to 640x640. It contains the following elements:
tee duplicates the incoming frame and passes it into two different streams - to perform two different inferences on different chips.
hailonet performs the inference on the Hailo-8 device - configures the chip with the hef and starts Hailo's inference process - sets streaming mode and sends the buffers into the chip. It requires the following properties: hef-path - points to the compiled yolov5m hef; qos must be set to false to disable frame drops.
fpsdisplaysink outputs video to the screen, and displays the current and average framerate.
Classification
The purpose of classification.sh is to demonstrate classification on one video file source with Python post-processing. This is done by running a single-stream object classification pipeline on top of GStreamer using the Hailo-8 device.
Options
Supported Networks:
'resnet_v1_50' - https://github.com/hailo-
ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/resnet_v1_50.yaml
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/classification
./classification.sh
How it works
This section is optional and provides a drill-down into the implementation of the classification app with a focus on explaining the GStreamer pipeline. This section uses resnet_v1_50 as an example network, so the network input width, height, and hef name are set accordingly.
gst-launch-1.0 \
    filesrc location=$input_source ! decodebin ! videoconvert ! \
    videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path debug=False is-active=true qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailopython module=$POSTPROCESS_MODULE qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailooverlay qos=false ! \
    videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts it to the required format.
2. videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! \
Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 112x112 using the caps negotiation of hailonet. hailonet extracts the needed resolution from the HEF file during the caps negotiation, and makes sure that this resolution is passed from the previous elements.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
6. hailooverlay qos=false ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
Performs drawing on the original image; in this case, it draws the top-1 class name over the image.
7. videoconvert ! \
fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=true text-overlay=false ${additonal_parameters}
Applies the final conversion so GStreamer can use the format required by the fpsdisplaysink element.
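The Python module loaded by hailopython implements the classification post-process. The core top-1 selection it performs can be sketched as follows; this is a standalone sketch with hypothetical names, since the real module receives a Hailo video frame object and attaches classification metadata rather than returning a tuple:

```python
def top1_label(probabilities, labels):
    """Pick the highest-probability class from a classification output
    (sketch of the logic a hailopython post-process would run per buffer)."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return labels[best], probabilities[best]

labels = ["cat", "dog", "zebra"]
print(top1_label([0.05, 0.15, 0.80], labels))  # → ('zebra', 0.8)
```

Writing this logic in Python (instead of a compiled .so) trades some performance for much faster iteration on the post-process.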
Options
Supported Networks:
'yolox_l' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/yolox_l_leaky.yaml
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/x86/century
./century.sh
How it works
This section is optional and provides a drill-down into the implementation of the
century app with a focus on explaining the GStreamer pipeline. This section uses
yolox as an example network so network input width, height, and hef name are set
accordingly.
gst-launch-1.0 \
    filesrc location=$video_device ! decodebin ! videoconvert ! \
    videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path device-count=$device_count is-active=true ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter function-name=yolox so-path=$POSTPROCESS_SO qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailooverlay qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts it to the required format.
2. videoscale ! video/x-raw,pixel-aspect-ratio=1/1 ! \
Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 640x640 using the caps negotiation of hailonet.
Before sending the frames into the hailonet element, set a queue so no frames are lost (read more about queues here).
Performs the inference on the Hailo-8 devices via $device_count devices, which is set to 4 in this app.
hailofilter performs a given post-process; in this case it performs the YoloX post-process.
6. hailooverlay qos=false ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
7. videoconvert ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false ${additonal_parameters}
Overview
Our requirement for this pipeline is real-time, high-accuracy object detection running on a single video stream using an embedded host. The required input video resolution was HD (high definition, 720p).
The chosen platform for this project is based on NXP's i.MX8M Arm processor. The Hailo-8™ AI processor is connected to it as an AI accelerator.
Drill Down
Although the i.MX8M is a capable host, processing and decoding real-time HD video is
bound to utilize a lot of the CPU’s resources, which may eventually reduce
performance. To solve this problem, most of the vision pre-processing pipeline has
been offloaded to the Hailo-8 device in our application.
The camera sends the raw video stream, encoded in YUV color format using the YUY2
layout. The data passes through Hailo’s runtime software library, called HailoRT, and
through Hailo’s PCIe driver. The data’s format is kept unmodified, and it is sent to the
Hailo-8 device as is.
Hailo-8’s NN core handles the data pre-processing, which includes decoding the YUY2
scheme, converting from the YUV color space to RGB and, finally, resizing the frames
into the resolution expected by the deep learning detection model.
The Hailo Dataflow Compiler supports adding these pre-processing stages to any
model when compiling it. In this case, they are added before the YOLOv5m detection
model.
Options
Run
cd $TAPPAS_WORKSPACE/arm/apps/detection
./detection.sh
How it works
This section is optional and provides a drill-down into the implementation of the
detection app with a focus on explaining the GStreamer pipeline. This section uses
yolov5 as an example network so network input width, height, and hef name are set
accordingly.
gst-launch-1.0 \
    v4l2src device=$input_source ! video/x-raw,format=YUY2,width=1280,height=720,framerate=30/1 ! \
    queue leaky=downstream max-size-buffers=5 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path debug=False is-active=true qos=false batch-size=1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter function-name=$network_name so-path=$postprocess_so qos=false debug=False ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$draw_so qos=false debug=False ! \
    queue leaky=downstream max-size-buffers=5 max-size-bytes=0 max-size-time=0 ! \
    videoconvert ! \
    fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false ${additonal_parameters}
Specifies the path of the camera, then specifies the required format and resolution.
Before sending the frames into the hailonet element, set a leaky queue so stale frames can be dropped (Read more about queues here)
Each hailofilter performs a given post-process. In this case the first performs the Yolov5m post-process and the second performs box drawing. Then a leaky queue is set to let the sink drop frames.
5. videoconvert ! \
fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false ${additonal_parameters}
Apply the final convert to let GStreamer utilize the format required by the fpsdisplaysink element
Links
hailofilter
Blog post about this setup
1. Sanity Pipeline - Helps you verify that all the required components are installed
correctly
2. Detection - single-stream object detection pipeline on top of GStreamer using the
Hailo-8 device.
3. Depth Estimation - single-stream depth estimation pipeline on top of GStreamer
using the Hailo-8 device.
4. Multinetworks parallel - single-stream multi-networks pipeline on top of
GStreamer using the Hailo-8 device.
5. Pose Estimation - Human pose estimation using centerpose network.
6. Face Detection - Face detection application.
7. Classification - Classification app using resnet_v1_50 network.
Overview
The sanity app's purpose is to help you verify that all the required components have been installed successfully.
First, run sanity_gstreamer.sh and make sure that the image presented matches the one shown later in this section.
Sanity GStreamer
This app should launch first.
NOTE: Open the source code in your preferred editor to see how simple this app
is.
In order to run the app, cd to the sanity_pipeline directory and launch it:
cd $TAPPAS_WORKSPACE/apps/gstreamer/raspberrypi/sanity_pipeline
./sanity_gstreamer.sh
If the output is similar to the image shown above, you are good to go to the next
verification phase!
Overview:
detection.sh demonstrates detection on one video file source and verifies Hailo’s
configuration. This is done by running a single-stream object detection pipeline
on top of GStreamer using the Hailo-8 device.
Options
Supported Networks:
'yolov5' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/yolov5m.yaml
'mobilenet_ssd' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/ssd_mobilenet_v1.yaml
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/raspberrypi/detection
./detection.sh
How it works
This section is optional and provides a drill-down into the implementation of the
detection app with a focus on explaining the GStreamer pipeline. This section uses
yolov5 as an example network so network input width, height, and hef name are set
accordingly.
gst-launch-1.0 ${stats_element} \
    filesrc location=$input_source name=src_0 ! qtdemux ! h264parse ! avdec_h264 ! \
    videoscale n-threads=8 ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert n-threads=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path device-id=$hailo_bus_id debug=False is-active=true qos=false batch-size=$batch_size ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter2 function-name=$network_name so-path=$postprocess_so qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailooverlay ! \
    videoconvert n-threads=8 ! \
    fpsdisplaysink video-sink=ximagesink name=hailo_display sync=$sync_pipeline text-overlay=false ${additonal_parameters}
Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 640x640 with the caps negotiation of hailonet, then converts it to the required format.
4. queue ! \
Before sending the frames into the hailonet element, set a queue so no frames are lost (Read more about queues here)
7. hailooverlay ! \
8. videoconvert n-threads=8 ! \
fpsdisplaysink video-sink=ximagesink name=hailo_display sync=$sync_pipeline text-overlay=false ${additonal_parameters}
Apply the final convert to let GStreamer utilize the format required by the fpsdisplaysink element
Depth Estimation
depth_estimation.sh demonstrates depth estimation on one video file source. This is done by running a single-stream depth estimation pipeline on top of GStreamer using the Hailo-8 device.
Options
Run
cd /local/workspace/tappas/apps/gstreamer/raspberrypi/depth_estimation
./depth_estimation.sh
Model
fast_depth, with an input resolution of 224x224x3.
How it works
This section is optional and provides a drill-down into the implementation of the depth
estimation app with a focus on explaining the GStreamer pipeline. This section uses
fast_depth as an example network so network input width, height, hef name, are set
accordingly.
gst-launch-1.0 \
    filesrc location=$input_source name=src_0 ! qtdemux ! h264parse ! avdec_h264 ! queue ! videoconvert n-threads=8 ! queue ! \
    tee name=t ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    aspectratiocrop aspect-ratio=1/1 ! queue ! videoscale ! queue ! \
    hailonet hef-path=$hef_path device-id=$hailo_bus_id debug=False is-active=true qos=false batch-size=1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$draw_so qos=false debug=False ! videoconvert n-threads=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    videoconvert ! fpsdisplaysink video-sink=ximagesink name=hailo_display sync=false text-overlay=false \
    t. ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    videoscale ! video/x-raw, width=300, height=300 ! queue ! videoconvert n-threads=8 ! \
    ximagesink sync=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts to the required format using 8 threads for acceleration.
2. tee name=t
The beginning of the first split of the tee. The network used expects no borders, so a crop mechanism is needed.
Re-scales the video dimensions to fit the input of the network. In this case it crops the video and rescales it to 224x224 with the caps negotiation of hailonet.
Apply the final convert to let GStreamer utilize the format required by the fpsdisplaysink element
The beginning of the second split of the tee. Re-scales the video dimensions.
8. videoconvert n-threads=8 ! \
ximagesink sync=false ${additonal_parameters}
Apply the final convert to let GStreamer utilize the format required by the ximagesink element
Options
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/raspberrypi/multinetworks_parallel/
./detection_and_depth_estimation.sh
Model
fast_depth, with an input resolution of 224x224x3.
mobilenet_ssd, with an input resolution of 300x300x3.
How it works
This section is optional and provides a drill-down into the implementation of the app
with a focus on explaining the GStreamer pipeline. This section uses fast_depth as an
example network so network input width, height, hef name, are set accordingly.
gst-launch-1.0 \
    $source_element ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    videoconvert n-threads=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    tee name=t ! \
    aspectratiocrop aspect-ratio=1/1 ! \
    queue ! videoscale n-threads=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path device-id=$hailo_bus_id debug=False is-active=true net-name=$depth_estimation_net_name qos=false batch-size=1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$depth_estimation_draw_so qos=false debug=False ! videoconvert n-threads=8 ! \
    fpsdisplaysink video-sink=ximagesink name=hailo_display sync=false text-overlay=false \
    t. ! \
    videoscale n-threads=8 ! queue ! \
    hailonet hef-path=$hef_path device-id=$hailo_bus_id debug=False is-active=true net-name=$detection_net_name qos=false batch-size=1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter2 so-path=$detection_post_so function-name=mobilenet_ssd_merged qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailooverlay ! videoconvert n-threads=8 ! \
    fpsdisplaysink video-sink=ximagesink name=hailo_display2 sync=false text-overlay=false ${additonal_parameters}
Before sending the frames into the hailonet element, set a queue so no frames are lost (Read more about queues [here](https://gstreamer.freedesktop.org/documentation/coreelements/queue.html?gi-language=c))
3. videoconvert n-threads=8 !
4. tee name=t !
Splits into two branches - one for mobilenet_ssd and the other for fast_depth.
Re-scales the video dimensions to fit the input of the network using 8 threads for acceleration. In this case it crops the video and rescales it to 224x224 with the caps negotiation of hailonet.
NOTE: We predefine the input and output layers of each network by passing the net-name argument.
7. videoconvert n-threads=8 ! \
fpsdisplaysink video-sink=ximagesink name=hailo_display sync=false text-overlay=false \
Apply the final convert to let GStreamer utilize the format required by the fpsdisplaysink element
8. t. ! \
9. videoscale n-threads=8 !
12. hailooverlay ! \
Performs a drawing process based on the metadata of the buffers. This is a newer API (compared to using hailofilter for drawing).
Apply the final convert to let GStreamer utilize the format required by the fpsdisplaysink element
Overview:
hailo_pose_estimation.sh demonstrates human pose estimation on one video file
source and verifies Hailo’s configuration. This is done by running a single-stream
pose estimation pipeline on top of GStreamer using the Hailo-8 device.
Options
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/raspberrypi/pose_estimation
./hailo_pose_estimation.sh
How it works
This section is optional and provides a drill-down into the implementation of the
pose_estimation app with a focus on explaining the GStreamer pipeline. This section
uses centerpose_regnetx_1.6gf_fpn as an example network so network input width,
height, and hef name are set accordingly.
gst-launch-1.0 \
    filesrc location=$input_source name=src_0 ! qtdemux ! h264parse ! avdec_h264 ! \
    videoscale n-threads=8 ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert n-threads=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path device-id=$hailo_bus_id debug=False is-active=true qos=false batch-size=1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$postprocess_so qos=false debug=False function-name=$network_name ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$draw_so qos=false debug=False ! \
    videoconvert n-threads=8 ! \
    fpsdisplaysink video-sink=ximagesink name=hailo_display sync=$sync_pipeline text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes the data. Re-scales the video dimensions to fit the input of the network; in this case it rescales the video to 640x640 with the caps negotiation of hailonet. Converts to the required format using 8 threads for acceleration.
Before sending the frames into the hailonet element, set a queue so no frames
are lost (Read more about queues here)
Each hailofilter performs a given post-process. In this case the first performs
the centerpose post-process and the second performs box and skeleton drawing.
5. videoconvert n-threads=8 ! \
fpsdisplaysink video-sink=ximagesink name=hailo_display
sync=$sync_pipeline text-overlay=false ${additonal_parameters}
Apply the final convert to let GStreamer utilize the format required by the
fpsdisplaysink element
Overview:
The purpose of face_detection.sh is to demonstrate face detection on one video file source and to verify Hailo's configuration. This is done by running a single-stream face detection pipeline on top of GStreamer using the Hailo-8 device.
Options
Supported Networks
'liteface' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/lightface_slim.yaml
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/raspberrypi/face_detection/
./face_detection.sh
How it works
This section is optional and provides a drill-down into the implementation of the face detection app with a focus on explaining the GStreamer pipeline. This section uses the pipeline below.
gst-launch-1.0 \
    filesrc location=$input_source name=src_0 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert n-threads=8 ! tee name=t hailomuxer name=mux \
    t. ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! mux. \
    t. ! videoscale n-threads=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path device-id=$hailo_bus_id debug=False is-active=true qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter2 function-name=$network_name so-path=$postprocess_so qos=false ! mux. \
    mux. ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailooverlay ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    videoconvert n-threads=8 ! \
    fpsdisplaysink video-sink=ximagesink name=hailo_display sync=$sync_pipeline text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts to the required format using 8 threads for acceleration.
2. tee name=t
3. hailomuxer name=mux
A branch of the tee, passing the original frame to the muxer without re-scaling.
Another branch of the tee that will perform the inference. Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 320x240 with the caps negotiation of hailonet.
NOTE: qos must be disabled for hailonet since dropping frames may cause these elements to run out of alignment.
Each hailofilter performs a given post-process. In this case the first performs the face detection post-process. It then enters the mux.
9. videoconvert n-threads=8 ! \
fpsdisplaysink video-sink=ximagesink name=hailo_display sync=$sync_pipeline text-overlay=false ${additonal_parameters}
Apply the final convert to let GStreamer utilize the format required by the fpsdisplaysink element
Classification
The purpose of classification.sh is to demonstrate classification on one video file source. This is done by running a single-stream object classification pipeline on top of GStreamer using the Hailo-8 device.
Options
Supported Networks:
'resnet_v1_50' - https://github.com/hailo-ai/hailo_model_zoo/blob/master/hailo_model_zoo/cfg/networks/resnet_v1_50.yaml
Run
cd $TAPPAS_WORKSPACE/apps/gstreamer/raspberrypi/classification
./classification.sh
How it works
This section is optional and provides a drill-down into the implementation of the classification app with a focus on explaining the GStreamer pipeline. This section uses the pipeline below.
gst-launch-1.0 \
    filesrc location=$input_source ! qtdemux ! h264parse ! avdec_h264 ! videoconvert n-threads=8 ! \
    tee name=t hailomuxer name=hmux \
    t. ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hmux. \
    t. ! videoscale n-threads=8 ! video/x-raw, pixel-aspect-ratio=1/1 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailonet hef-path=$hef_path device-id=$hailo_bus_id debug=False is-active=true qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter2 so-path=$postprocess_so qos=false ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hmux. \
    hmux. ! hailooverlay ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    videoconvert n-threads=8 ! \
    fpsdisplaysink video-sink=ximagesink name=hailo_display sync=false text-overlay=false ${additonal_parameters}
Specifies the location of the video used, then decodes and converts to the required format using 8 threads for acceleration.
2. tee name=t
Declares a tee that splits the pipeline into two branches in order to keep the original resolution.
3. hailomuxer name=hmux
Declares a hailomuxer.
The first split of the tee. Re-scales the video dimensions to fit the input of the network. In this case it rescales the video to 112x112 with the caps negotiation of hailonet. hailonet extracts the needed resolution from the HEF file during the caps negotiation, and makes sure that the needed resolution is passed from previous elements. Before sending the frames into the hailonet element, set a queue so no frames are lost (Read more about queues here)
8. hmux. ! hailooverlay ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
9. videoconvert n-threads=8 ! \
fpsdisplaysink video-sink=ximagesink name=hailo_display sync=false text-overlay=false ${additonal_parameters}
Apply the final convert to let GStreamer utilize the format required by the fpsdisplaysink element.
Native Applications
./build/detection_app
Example details
The example demonstrates the use of libhailort's C API; all function calls are based on the header provided in hailort/include/hailo/hailort.h. The input images are located in the input_images/ directory and the output images are written to the output_images/ directory. The application works on bitmap images with the following properties:
Code structure
main function: The main function gets the input images and passes them to
infer function.
infer function: First, the function prepares the device for inference:
Configure the device from an HEF. The next step is to create a
hailo_hef object and use it to configure the device for inference.
Then, init a hailo_configure_params_t object with default values,
configure the device, and receive a hailo_configured_network_group
object.
Build VStreams
One thread for writing the data to the device using the write_all function.
Used APIs: hailo_vstream_write_raw_buffer
Three threads for receiving data from the device using the read_all
function. Used APIs: hailo_vstream_read_raw_buffer
One thread for post-processing the data received from the device, drawing
the detected objects and writing the output files to the output directory.
FeatureData is an object used for gathering the information needed for the
post-processing and is created for each feature in the model.
HailoNet
Overview
Hailonet is a bin element which contains a hailosend element, a hailorecv element
and a queue between them. The hailosend element is responsible for sending the
data received from the hailonet’s sink to the Hailo-8 device for inference. Inference
is done via the VStreams API. The hailorecv element will read the output buffers
from the device and attach them as metadata to the source frame that inferred them.
That is why the hailonet has only one source, even in cases where the HEF has more
than one output layer.
Parameters
Configuration and activation of the Hailo network is done when the pipeline is started.
The element infers data using the Hailo-8 chip, according to the selected HEF (hef-path
property). Currently, only HEFs with one input layer are supported!
Selecting a specific PCIe device (when there are more than one) can be done with the
device-id property.
Network switching can be done with the is-active property (this can't be done in a
CLI application, since this property needs to be toggled during runtime).
For multi-context networks the batch-size property can be used to specify the batch
size.
Using the inputs and outputs properties, specific VStreams can be selected for input
and output inference.
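Putting these parameters together, a minimal sketch of a hailonet invocation follows. The HEF path and PCIe device ID are placeholders; a real pipeline also needs a source whose resolution and format match the network input, as the walkthroughs above show.

```shell
# Minimal hailonet usage sketch. $hef_path and the device-id value are
# placeholders - substitute your compiled HEF and the ID shown by lspci.
gst-launch-1.0 videotestsrc ! \
    hailonet hef-path=$hef_path device-id=0000:01:00.0 batch-size=1 is-active=true ! \
    fakesink

# The full property list, defaults, and pad information can always be
# inspected with:
gst-inspect-1.0 hailonet
```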
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstBin
+----GstHailoNet
Pad Templates:
SINK template: 'sink'
Availability: Always
Capabilities:
ANY
Pads:
SINK: 'sink'
Pad Template: 'sink'
SRC: 'src'
Pad Template: 'src'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailonet0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
async-handling : The bin will handle Asynchronous state
changes
flags: readable, writable
Boolean. Default: false
message-forward : Forwards all children messages
flags: readable, writable
Boolean. Default: false
debug : Should print debug information
flags: readable, writable
Boolean. Default: false
device-id : Device ID ([<domain>]:<bus>:<device>.<func>,
same as in lspci command)
flags: readable, writable
String. Default: null
hef-path : Location of the HEF file to read
flags: readable, writable
String. Default: null
net-name : Configure and run this specific network. If
not passed, configure and run the default network - ONLY if there is
one network in the HEF!
flags: readable, writable
String. Default: null
batch-size : How many frame to send in one batch
flags: readable, writable
Unsigned Integer. Range: 1 - 16 Default: 1
outputs-min-pool-size: The minimum amount of buffers to allocate
for each output layer
flags: readable, writable
Unsigned Integer. Range: 0 - 4294967295
Default: 16
outputs-max-pool-size: The maximum amount of buffers to allocate
for each output layer or 0 for unlimited
flags: readable, writable
Unsigned Integer. Range: 0 - 4294967295
Default: 0
is-active : Controls whether this element should be
active. By default, the hailonet element will not be active unless
there is only one hailonet in the pipeline
flags: readable, writable
Boolean. Default: false
Children:
hailorecv
hailo_infer_q_0
hailosend
HailoFilter
Overview
Hailofilter is an element which enables the user to apply a postprocess or drawing
operation to a frame and its tensors. It provides an entry point for a compiled .so file
that the user writes, inside of which they will have access to the original image frame,
the tensors output by the network for that frame, and any metadata attached. At first
the hailofilter will read the buffer from the sink pad, then apply the filter defined in the
provided .so, until finally sending the filtered buffer along the source pad to continue
down the pipeline.
Parameters
The most important parameter here is the so-path. Here you provide the path to your
compiled .so (shared object file) that applies your wanted filter.
By default, the hailofilter will call a filter() function within the .so as the entry point.
If your .so has multiple entry points, for example in the case of slightly different
network flavors, then you can choose which specific filter function to apply via the
function-name parameter.
As a member of the GstVideoFilter hierarchy, the hailofilter element supports qos
(Quality of Service). Although qos typically tries to guarantee some level of
performance, it can lead to frames being dropped. For this reason it is advised to always
set qos=false to avoid either tensors being dropped or not drawn.
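As a sketch, the two ways of selecting an entry point look like this; $postprocess_so is a placeholder for your compiled post-process, and the yolov5 function name is illustrative of a .so that exports several flavors:

```shell
# Default entry point: hailofilter calls filter() inside the .so.
gst-launch-1.0 videotestsrc ! video/x-raw,format=RGB ! \
    hailofilter so-path=$postprocess_so qos=false ! fakesink

# Explicit entry point: function-name selects another exported function
# from the same .so (the name "yolov5" here is an illustrative example).
gst-launch-1.0 videotestsrc ! video/x-raw,format=RGB ! \
    hailofilter so-path=$postprocess_so function-name=yolov5 qos=false ! fakesink
```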
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstBaseTransform
+----GstVideoFilter
+----GstHailoFilter
Pad Templates:
SINK template: 'sink'
Availability: Always
Capabilities:
video/x-raw
format: { (string)RGB, (string)YUY2 }
width: [ 1, 2147483647 ]
height: [ 1, 2147483647 ]
framerate: [ 0/1, 2147483647/1 ]
Pads:
SINK: 'sink'
Pad Template: 'sink'
SRC: 'src'
Pad Template: 'src'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailofilter0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
qos : Handle Quality-of-Service events
flags: readable, writable
Boolean. Default: true
debug : debug
flags: readable, writable, controllable
Boolean. Default: false
so-path : Location of the so file to load
flags: readable, writable, changeable only in
NULL or READY state
String. Default: null
function-name : function-name
flags: readable, writable, changeable only in
NULL or READY state
String. Default: "filter"
HailoFilter2
Overview
Hailofilter2 is an element which enables the user to apply a postprocess operation on
hailonet's output tensors. It provides an entry point for a compiled .so file that the user
writes, inside of which they will have access to the original image frame, the tensors
output by the network for that frame, and any metadata attached. At first the
hailofilter2 will read the buffer from the sink pad, then apply the filter defined in the
provided .so, until finally sending the filtered buffer along the source pad to continue
down the pipeline.
Parameters
The most important parameter here is the so-path. Here you provide the path to your
compiled .so that applies your wanted filter.
By default, the hailofilter2 will call a filter() function within the .so as the entry
point. If your .so has multiple entry points, for example in the case of slightly different
network flavors, then you can choose which specific filter function to apply via the
function-name parameter.
As a member of the GstVideoFilter hierarchy, the hailofilter2 element supports qos
(Quality of Service). Although qos typically tries to guarantee some level of
performance, it can lead to frames being dropped. For this reason it is advised to always
set qos=false to avoid either tensors being dropped or not drawn.
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstBaseTransform
+----GstHailoFilter2
Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
ANY
Pads:
SINK: 'sink'
Pad Template: 'sink'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailofilter2-0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
qos : Handle Quality-of-Service events
flags: readable, writable
Boolean. Default: false
so-path : Location of the so file to load
flags: readable, writable, changeable only in
NULL or READY state
String. Default: null
function-name : function-name
flags: readable, writable, changeable only in
NULL or READY state
String. Default: "filter"
HailoOverlay
Overview
HailoOverlay is a drawing element that can draw postprocessed results on an incoming
video frame. This element supports the following results:
Detection - Draws a rectangle over the frame, with the label and confidence
(rounded).
Classification - Draws a classification over the frame, at the top left corner of the
frame.
Landmarks - Draws a set of points on the given frame at the wanted
coordinates.
Tiles - Can draw tiles as a thin rectangle.
Parameters
As a member of the GstBaseTransform hierarchy, the hailooverlay element supports
qos (Quality of Service). Although qos typically tries to guarantee some level of
performance, it can lead to frames being dropped. For this reason it is advised to always
set qos=false to avoid either tensors being dropped or not drawn.
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstBaseTransform
+----GstHailoOverlay
Pad Templates:
SINK template: 'sink'
Availability: Always
Capabilities:
video/x-raw
format: { (string)RGB }
width: [ 1, 2147483647 ]
height: [ 1, 2147483647 ]
framerate: [ 0/1, 2147483647/1 ]
Pads:
SINK: 'sink'
Pad Template: 'sink'
SRC: 'src'
Pad Template: 'src'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailooverlay0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
qos : Handle Quality-of-Service events
flags: readable, writable
Boolean. Default: false
HailoDeviceStats
Overview
Hailodevicestats is an element that samples power and temperature. It doesn't have
any pads; it just has to be part of the pipeline. An example of using this element can
be found in the multistream_multidevice detection app.
Parameters
Determine the time period between samples with the interval property.
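Since the element has no pads, it is simply listed alongside the rest of the pipeline rather than linked into it. A minimal sketch (the interval value and the rest of the pipeline are illustrative):

```shell
# hailodevicestats runs beside the main pipeline; it is not linked with "!".
# interval=1 below is an illustrative sampling period.
gst-launch-1.0 hailodevicestats interval=1 \
    videotestsrc ! fakesink
```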
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstHailoDeviceStats
Pad Templates:
none
Pads:
none
Properties:
HailoMuxer
Overview
HailoMuxer is an element designed for our new multi-device application. It muxes 2
similar streams into 1 stream, holding both streams' metadata. It has 2 sink pads
and 1 src pad, and whenever there are buffers on both sink pads, it takes only 1 of the
buffers and passes it on, with both buffers' metadata.
Parameters
There are no unique properties to hailomuxer. The only parameters are the baseclass
parameters, which are 'name' and 'parent'.
Example
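The face detection app earlier in this guide uses this exact pattern; a reduced sketch of it follows (variables are placeholders, and decodebin/autovideosink stand in for the app's concrete elements):

```shell
# One tee branch keeps the full-resolution frame, the other runs inference;
# hailomuxer recombines them so the original frame carries the metadata.
gst-launch-1.0 \
    filesrc location=$input_source ! decodebin ! tee name=t hailomuxer name=mux \
    t. ! queue ! mux. \
    t. ! videoscale ! queue ! \
    hailonet hef-path=$hef_path is-active=true qos=false ! \
    queue ! hailofilter2 so-path=$postprocess_so qos=false ! mux. \
    mux. ! hailooverlay ! videoconvert ! autovideosink
```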
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstHailoMuxer
Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
ANY
Pads:
SRC: 'src'
Pad Template: 'src'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailomuxer0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
HailoPython
Overview
HailoPython is an element which enables the user to apply processing operations to an
image via Python. It provides an entry point for a Python module that the user writes,
inside of which they will have access to the Hailo raw output (output tensors) and
postprocessed outputs (detections, classifications, etc.) as well as the GStreamer
buffer. The Python function will be called for each buffer going through the hailopython
element.
Parameters
The two parameters that define the function to call are module and function, for the
module path and function name respectively. In addition, as a member of the
GstVideoFilter hierarchy, the hailopython element supports qos (Quality of Service).
Although qos typically tries to guarantee some level of performance, it can lead to
frames being dropped. For this reason it is advised to always set qos=false to avoid
either tensors being dropped or not drawn.
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstBaseTransform
+----GstVideoFilter
+----GstHailoPython
Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
video/x-raw
format: { (string)RGB, (string)YUY2 }
width: [ 1, 2147483647 ]
height: [ 1, 2147483647 ]
framerate: [ 0/1, 2147483647/1 ]
Pads:
SINK: 'sink'
Pad Template: 'sink'
SRC: 'src'
Pad Template: 'src'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailopython0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
qos : Handle Quality-of-Service events
flags: readable, writable
Boolean. Default: true
module : Python module name
flags: readable, writable
String. Default:
"/local/workspace/tappas/processor.py"
function : Python function name
flags: readable, writable
String. Default: "run"
HailoCropper
Overview
HailoCropper is an element providing cropping functionality, designed for applications
with cascading networks, meaning doing one task based on a previous task. It has 1
sink and 2 sources. HailoCropper receives a frame on its sink pad, then invokes its
prepare_crops method, which returns a vector of crop regions of interest (crop_roi).
For each crop_roi it creates a cropped image (representing its x, y, width, height in the
full frame). The cropped images are then sent to the second src pad, while the original
frame that the detections were cropped from is pushed from the first src pad.
Derived classes can override the default prepare_crops behaviour and decide where
to crop and how many times. hailotilecropper element does this exact thing when
splitting the frame into tiles by rows and columns.
Example
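The original example is not reproduced here; as a hedged sketch of the cascading pattern described above (the hef path and .so name are placeholders):

```shell
# src_0 carries the original frame, src_1 the cropped images; a second
# network runs on the crops and hailoaggregator re-joins the two branches.
gst-launch-1.0 \
    filesrc location=video.mp4 ! decodebin ! videoconvert ! \
    hailocropper name=cropper \
    hailoaggregator name=agg \
    cropper.src_0 ! queue ! agg.sink_0 \
    cropper.src_1 ! queue ! hailonet hef-path=second_network.hef ! \
        hailofilter so-path=libsecond_post.so qos=false ! agg.sink_1 \
    agg. ! videoconvert ! fakesink
```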
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstHailoCropper
Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
ANY
Pads:
SINK: 'sink'
Pad Template: 'sink'
SRC: 'src_0'
Pad Template: 'src'
SRC: 'src_1'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailocropper0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
internal-offset : Whether to use Gstreamer offset or internal
offset.
flags: readable, writable, controllable
Boolean. Default: false
HailoTileCropper
Overview
HailoTileCropper is an element derived from HailoCropper and is used in the Tiling app.
It overrides the default prepare_crops behaviour to return a vector of tile regions of
interest, splitting the incoming frame into tiles by rows and columns. Each
tile stores its x, y, width, and height (with overlap between tiles included) in the full
frame. Just like the base HailoCropper, the full original frame is sent to the first src pad
while all the cropped images are sent to the second.
hailoaggregator will aggregate the cropped tiles and stitch them back to the original
resolution.
Parameters
tiles-along-x-axis : Number of tiles along x axis (columns) - default 2
tiles-along-y-axis : Number of tiles along y axis (rows) - default 2
overlap-x-axis : Overlap in percentage between tiles along x axis (columns) -
default 0
overlap-y-axis : Overlap in percentage between tiles along y axis (rows) - default
0
tiling-mode : Tiling mode (0 - single-scale, 1 - multi-scale) - default 0
scale-level : Scales (layers of tiles) in addition to the main layer. 1: [(1
X 1)] 2: [(1 X 1), (2 X 2)] 3: [(1 X 1), (2 X 2), (3 X 3)] - default 2
Example
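The original example is not reproduced here; as a hedged sketch of a single-scale tiling pipeline using the parameters above (the hef path and .so name are placeholders; internal-offset=true follows the note below about file sources):

```shell
# The frame is split into a 2x2 grid of tiles with 10% overlap; each tile is
# inferred separately and hailotileaggregator stitches the results back.
gst-launch-1.0 \
    filesrc location=video.mp4 ! decodebin ! videoconvert ! \
    hailotilecropper name=cropper internal-offset=true \
        tiles-along-x-axis=2 tiles-along-y-axis=2 \
        overlap-x-axis=0.1 overlap-y-axis=0.1 \
    hailotileaggregator name=agg \
    cropper.src_0 ! queue ! agg.sink_0 \
    cropper.src_1 ! queue ! hailonet hef-path=detector.hef ! \
        hailofilter so-path=libdetection_post.so qos=false ! agg.sink_1 \
    agg. ! videoconvert ! fakesink
```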
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstHailoBaseCropper
+----GstHailoTileCropper
Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
video/x-raw
format: { (string)RGB, (string)YUY2 }
Pads:
SINK: 'sink'
Pad Template: 'sink'
SRC: 'src_0'
Pad Template: 'src'
SRC: 'src_1'
Pad Template: 'src'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailotilecropper0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
internal-offset : Whether to use Gstreamer offset or internal
offset.
NOTE: If using file sources, Gstreamer does
not generate offsets for buffers,
so this property should be set to true in
such cases.
flags: readable, writable, controllable
Boolean. Default: false
tiles-along-x-axis : Number of tiles along x axis (columns)
flags: readable, writable, changeable only in
NULL or READY state
Unsigned Integer. Range: 1 - 20 Default: 2
tiles-along-y-axis : Number of tiles along y axis (rows)
flags: readable, writable, changeable only in
NULL or READY state
Unsigned Integer. Range: 1 - 20 Default: 2
overlap-x-axis : Overlap in percentage between tiles along x
axis (columns)
flags: readable, writable, changeable only in
NULL or READY state
Float. Range: 0 -
1 Default: 0
overlap-y-axis : Overlap in percentage between tiles along y
axis (rows)
flags: readable, writable, changeable only in
NULL or READY state
Float. Range: 0 -
1 Default: 0
tiling-mode : Tiling mode
flags: readable, writable
Enum "GstHailoTileCropperTilingMode" Default:
0, "single-scale"
(0): single-scale - Single Scale
(1): multi-scale - Multi Scale
scale-level : 1: [(1 X 1)] 2: [(1 X 1), (2 X 2)] 3: [(1 X
1), (2 X 2), (3 X 3)]]
flags: readable, writable, changeable only in
NULL or READY state
Unsigned Integer. Range: 1 - 3 Default: 2
HailoAggregator
Overview
HailoAggregator is an element designed for applications with cascading networks or
cropping functionality, meaning doing one task based on a previous task. A
complement to the HailoCropper, the two elements work together to form versatile
apps. It has 2 sink pads and 1 source: the first sink pad receives the original frame
from an upstream hailocropper, while the other receives cropped buffers from that
hailocropper. The HailoAggregator waits for all crops of a given original frame to arrive,
then sends the original buffer with the combined metadata of all collected crops.
HailoAggregator also performs a 'flattening' operation on the detection metadata
when receiving each frame: detections are taken from the cropped frame, copied to
the main frame, and re-scaled/moved to their corresponding location in the main frame
(x, y, width, height).
Parameters
There are no unique properties to hailoaggregator. The only parameters are the
baseclass parameters, which are 'name' and 'parent'.
Example
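The original example is not reproduced here; the following hedged sketch focuses on the pad wiring and the flatten-detections property (paths are placeholders):

```shell
# sink_0 receives the original frame from the hailocropper's first src pad;
# sink_1 receives the cropped, inferred buffers. One buffer per original
# frame leaves the src pad carrying the combined (flattened) metadata.
gst-launch-1.0 \
    filesrc location=video.mp4 ! decodebin ! videoconvert ! \
    hailocropper name=cropper \
    hailoaggregator name=agg flatten-detections=true \
    cropper.src_0 ! queue ! agg.sink_0 \
    cropper.src_1 ! queue ! hailonet hef-path=network.hef ! agg.sink_1 \
    agg. ! videoconvert ! fakesink
```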
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstHailoAggregator
Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
ANY
Pads:
SINK: 'sink_0'
Pad Template: 'sink'
SINK: 'sink_1'
Pad Template: 'sink'
SRC: 'src'
Pad Template: 'src'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailoaggregator0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
flatten-detections : perform a 'flattening' functionality on the
detection metadata
when receiving each frame.
flags: readable, writable, changeable only in
NULL or READY state
Boolean. Default: true
HailoTileAggregator
Overview
HailoTileAggregator is an element derived from HailoAggregator and is used in the
Tiling app. A complement to the HailoTileCropper, the two elements work together to
form versatile tiling apps.
Example
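The original example is not reproduced here; as a hedged fragment showing only the pairing with an upstream hailotilecropper (element names are placeholders):

```shell
# Wire the tile cropper's two src pads into the tile aggregator:
# cropper.src_0 carries the full frame, cropper.src_1 the tiles.
gst-launch-1.0 \
    videotestsrc ! videoconvert ! \
    hailotilecropper name=cropper hailotileaggregator name=agg \
    cropper.src_0 ! queue ! agg.sink_0 \
    cropper.src_1 ! queue ! agg.sink_1 \
    agg. ! videoconvert ! fakesink
```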
Hierarchy
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstHailoAggregator
+----GstHailoTileAggregator
Pad Templates:
SRC template: 'src'
Availability: Always
Capabilities:
ANY
Pads:
SINK: 'sink_0'
Pad Template: 'sink'
SINK: 'sink_1'
Pad Template: 'sink'
SRC: 'src'
Pad Template: 'src'
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "hailotileaggregator0"
parent : The parent of the object
flags: readable, writable
Object of type "GstObject"
flatten-detections : perform a 'flattening' functionality on the
detection metadata when receiving each frame
flags: readable, writable, changeable only in
NULL or READY state
Boolean. Default: true
Installation
Using Dockers
Install Docker
The section below will help you with the installation of Docker.
# Install curl
sudo apt-get install -y curl
# Add your user (who has root privileges) to the Docker group
sudo usermod -aG docker $USER
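The Docker installation step itself is not shown above. One common approach — an assumption here, not taken from this guide — is Docker's upstream convenience script, run before the usermod command:

```shell
# Assumption: install Docker via the upstream convenience script; any
# supported installation method for your distribution works as well.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Log out and back in (or run `newgrp docker`) so the group change from
# `usermod -aG docker $USER` takes effect.
```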
Note: Consider reading Running out of disk space if your system is space limited
The script will load the docker image and start a new container. This might
take a couple of minutes, and after that you are ready to go.
./run_tappas_docker.sh --resume
NOTE: The reason that you want to use the --resume flag is that the container
already exists, so only attaching to the container is required.
./run_tappas_docker.sh [options]
Options:
--help Show this help
--tappas-image Path to tappas image
--resume Resume an old container
--container-name Start a container with a specific name,
defaults to hailo_tappas_container
Use-cases
For building a new container with the default name:
./run_tappas_docker.sh --tappas-image <path-to-tappas-image>
Steps
Enter the TAPPAS release directory:
Note: This version runs with, and was tested against, HailoRT version 4.5.0.
├── build_docker.sh
├── core
├── docker
├── docs
├── downloader
├── manual-install.md
├── README.md
├── release --> copied `HailoRT` release
├── resources
├── tools
├── apps
├────── gstreamer
├────────── x86
├────────── arm
├────── native
Details
This section describes Hailo-Docker files hierarchy and purpose.
Let's take a look inside the docker folder. It contains: Dockerfile.base,
Dockerfile.gstreamer, run_docker.sh, scripts
run_docker.sh - An easy-to-use run script that handles all the arguments required by
the docker.
$ ./run_docker.sh --help
Run Hailo Docker:
The default mode is trying to create a new container
Options:
--help Show this help
--resume Resume an old container
--resume-command Resume command (used only when --resume flag
is used)
--override Start a new container, if exists already,
delete the previous one
If no flags are specified, the script will try to create a new container (and could
potentially fail if one already exists)
--override - When used, the script will create a new container, deleting the
previous one if it exists
--resume - The script will try to resume the last container created
Sometimes you may prefer to change the sources inside the docker for development or
debug purposes and compile them inside the docker. Running this script from the
$TAPPAS_WORKSPACE/scripts directory will build and deploy the sources.
Troubleshooting
Hailo containers are taking too much space
Creating new docker containers with --override does not guarantee that the directory
of cached images and containers is cleaned. To prevent your system from running out
of disk space, clean /var/lib/docker by running docker system prune from time to time.
ExecStart=/usr/bin/dockerd -H fd://
Edit the line by putting a -g and the new desired location of your Docker directory.
When you’re done making this change, you can save and exit the file.
If you haven’t already, create the new directory where you plan to move your Docker
files to.
Next, reload the systemd configuration for Docker, since we made changes earlier.
Then, we can start Docker.
Just to make sure that it worked, run the ps command to make sure that the Docker
service is utilizing the new directory location.
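The steps above can be sketched as follows, assuming a standard systemd-based Docker installation (unit and directory names may differ on your system):

```shell
# Reload systemd so it picks up the edited unit file, start Docker, then
# confirm the daemon is running with the new data directory.
sudo systemctl daemon-reload
sudo systemctl start docker
ps aux | grep dockerd | grep -v grep
```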
Manual Install
A guide about how to install our required components manually.
cd downloader
pip install -r requirements.txt
python main.py
NOTE: This could take up to a couple of minutes.
NOTE: Python 3.6 or above is required.
Hailo install
First you will need to install Hailo's platform; follow our install guide for that. After
Hailo is installed, make sure that Hailo works.
GStreamer install
Install the required packages from apt
add-apt-repository ppa:oibaf/graphics-drivers
apt-get update
Then verify that the right version of GStreamer is installed by using the following
command:
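The exact command is not shown here; one common way to print the installed GStreamer version — an assumption, not taken from this guide — is:

```shell
# Print the installed GStreamer core and tool versions
gst-inspect-1.0 --version
```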
RTSP
If you are planning to use an RTSP source, a patch fixing an issue in the RTSP plugin
within gst-plugins-good is required, and therefore you can't install gst-plugins-good
directly from apt. If you have no plans to use an RTSP source, just run:
Compile gst-plugins-good
The section above clones gst-plugins-good and applies the patch. You can verify that
the patch was applied successfully by running git status and checking that
gstrtspsrc.c is modified.
modified: gst/rtsp/gstrtspsrc.c
no changes added to commit (use "git add" and/or "git commit -a")
You can verify that the install works by running the following command, for example:
2 features:
+-- 2 elements
Install Opencv
Hailo GStreamer plugins require OpenCV version 4.5.2. You can get the required
modules precompiled from our sources and copy them to your file system with:
2. Download the latest OpenCV release source code via git or zip file (4.5.2 as of
1.6.2021):
Or
4. Build and install using cmake with flags (each flag should start with -D).
Hailo plugins
Copy Hailo GStreamer plugins:
cp <base_dir>/hailo/x86/gstreamer/libgsthailometa.so /usr/lib/x86_64-linux-gnu/gstreamer-1.0/ && \
cp <base_dir>/hailo/x86/gstreamer/libgsthailo.so /usr/lib/x86_64-linux-gnu/gstreamer-1.0/ && \
cp <base_dir>/hailo/x86/gstreamer/libhrt.so /usr/lib/x86_64-linux-gnu/
And that's it, you are ready to go. Check our Getting started section.
Hardware requirements
Hardware supported by i965 driver or iHD, such as
Intel Ironlake, Sandybridge, Ivybridge, Haswell, Broadwell,
Skylake, etc. (HD Graphics)
Intel BayTrail, Braswell
Intel Poulsbo (US15W)
Intel Medfield or Cedar Trail
Hardware supported by AMD Radeonsi driver, such as the list below
AMD Carrizo, Bristol Ridge, Raven Ridge, Picasso, Renoir
AMD Tonga, Fiji, Polaris XX, Vega XX, Navi 1X
Other hardware supported by Mesa VA gallium state-tracker
(taken from https://github.com/GStreamer/gstreamer-vaapi/blob/master/README)
lshw -c video
Or use
Make sure that a VGA compatible device with Intel drivers is present.
ls /dev/dri
2. Install Drivers
add-apt-repository ppa:oibaf/graphics-drivers
apt update
apt dist-upgrade
reboot
Use vainfo (a diagnostic tool for VA-API) to check that everything is loaded correctly,
without any warnings or errors.
vainfo
4. Install gstreamer-vaapi
Yocto
This section will guide you through the integration of Hailo's Yocto layers into your
own Yocto environment.
Two layers are provided by Hailo: the first is meta-hailo, which is packed within the
HailoRT release, and the second is meta-hailo-tappas, which is packed within the
TAPPAS release.
The layers were built and validated with the following Yocto releases:
Extraction
HailoRT
From the HailoRT release, untar platform.tar.gz without installing HailoRT locally. In
this case the Yocto files are located under platform/. (you can read more in the
HailoRT documentation)
Tappas
From the TAPPAS release, unzip tappas_VERSION_linux_installer.zip; the Yocto files
are located under yocto/. The layer uses the unpacked release directory as an external
source. In order for the build to work you will have to set a TAPPAS_EXTERNALSRC
variable in your conf/local.conf file to point to the root directory of the TAPPAS
release you have extracted:
Recipes
libgsthailo
Hailo's GStreamer plugin for running inference on the Hailo-8 chip. Depends on
libhailort and GStreamer; the source files are located in the HailoRT release under
platform/hailort/gstreamer.
libgsthailotools
Hailo's TAPPAS GStreamer elements and post-processes. Depends on libgsthailo and
GStreamer; the source files are located in the TAPPAS release under
core/hailo/gstreamer. The recipe compiles with meson and copies the
libgsthailotools.so file to /usr/lib/gstreamer-1.0 and the post-processes to
/usr/lib/hailo-post-processes on the target device's root file system.
Overview
Hailo's recommended method for cross-compilation at the moment is using the Yocto
SDK (aka Toolchain). We provide wrapper scripts, whose only requirement is a Yocto
toolchain, to make this as easy as possible.
Preparations
In order to cross-compile, you need to run the TAPPAS container on an x86 machine
and copy the Yocto toolchain into the container.
Toolchain
What is Toolchain?
A standard Toolchain consists of the following:
Libraries, Headers, and Symbols: The libraries, headers, and symbols are specific
to the image (i.e. they match the image).
Environment Setup Script: This *.sh file, once run, sets up the cross-development
environment by defining variables and preparing for Toolchain use.
You can use the standard Toolchain to independently develop and test code that is
destined to run on some target machine.
Must Add
# GStreamer plugins
IMAGE_INSTALL_append += " \
imx-gst1.0-plugin \
gstreamer1.0-plugins-bad-videoparsersbad \
gstreamer1.0-plugins-good-video4linux2 \
gstreamer1.0-plugins-base \
"
Nice to add
cd <BUILD_DIR>/tmp/deploy/sdk
touch toolchain.tar.gz
tar -czf toolchain.tar.gz --exclude=toolchain.tar.gz .
docker cp toolchain.tar.gz hailo_tappas_container:/local/workspace/tappas
Components
GstHailo
Compiles the gst-hailo component. This script first unpacks and installs the
toolchain (if not already installed), and only then cross-compiles.
Flags
$ ./cross_compile_gsthailo.py --help
usage: cross_compile_gsthailo.py [-h]
{aarch64,armv7l} {debug,release}
toolchain_tar_path
Cross-compile gst-hailo.
positional arguments:
{aarch64,armv7l} Arch to compile to
{debug,release} Build and compilation type
toolchain_tar_path Toolchain TAR path
optional arguments:
Example
An example for executing the script:
NOTE: In this example we assume that the toolchain is located under toolchain-
raw/hailo-dartmx8m-zeus-aarch64-toolchain.tar.gz
$ ls aarch64-gsthailo-build/
CMakeCache.txt CMakeFiles Makefile cmake_install.cmake
libgsthailo.so
$ file aarch64-gsthailo-build/libgsthailo.so
aarch64-gsthailo-build/libgsthailo.so: ELF 64-bit LSB shared object,
ARM aarch64, version 1 (SYSV), dynamically linked,
BuildID[sha1]=e55c1655c113e99bb649dbb03c15b844142503ee, with
debug_info, not stripped
GstHailoTools
This script cross-compiles gst-hailo-tools. It first unpacks and installs the
toolchain (if not already installed), and only then cross-compiles.
Flags
$ ./cross_compile_gsthailotools.py --help
usage: cross_compile_gsthailotools.py [-h]
[--yocto-distribution
YOCTO_DISTRIBUTION]
{aarch64,armv7l}
{debug,release}
Cross-compile gst-hailo.
positional arguments:
{aarch64,armv7l} Arch to compile to
{debug,release} Build and compilation type
toolchain_tar_path Toolchain TAR path
optional arguments:
-h, --help show this help message and exit
--yocto-distribution YOCTO_DISTRIBUTION
The name of the Yocto distribution to use
Example
Run the compilation script
NOTE: In this example we assume that the toolchain is located under toolchain-
raw/hailo-dartmx8m-zeus-aarch64-toolchain.tar.gz
$ ls aarch64-gsthailotools-build/
build.ninja compile_commands.json config.h libs meson-info
meson-logs meson-private plugins
$ ls aarch64-gsthailotools-build/plugins/*.so
libgsthailotools.so
$ ls aarch64-gsthailotools-build/libs/*.so
libcenterpose_post.so libmobilenet_ssd_post.so
libclassification.so libsegmentation_draw.so
libdebug.so libyolo_post.so
libdetection_draw.so
Copy libgsthailo.so and libgsthailotools.so to the path found above. Copy the
post-process .so files under libs to the embedded device under
/usr/lib/hailo-post-processes (create the directory if it does not exist).
Run gst-inspect-1.0 hailo and gst-inspect-1.0 hailotools and make sure that no
errors are raised.
Further Reading
GStreamer Framework
GStreamer Principles
Object-oriented- All GStreamer Objects can be extended using the GObject
inheritance methods. All plugins are loaded dynamically and can be extended
and upgraded independently.
GStreamer adheres to GObject, the GLib 2.0 object model. A programmer familiar
with GLib 2.0 or GTK+ will be comfortable with GStreamer.
Allow binary-only plugins- Plugins are shared libraries that are loaded at runtime.
High performance
GStreamer Elements
Elements - have one specific function for processing/ generating / consuming
data. By chaining together several such elements, you create a pipeline that can
do a specific task.
Pads - are an element's input and output, where you can connect other
elements. A pad can be viewed as a “plug” or “port” on an element where links
may be made with other elements, and through which data can flow to or from
those elements. Data types are negotiated between pads using a process called
caps negotiation. Data types are described by GstCaps.
Bin - A bin is a container for a collection of elements. Since bins are subclasses of
elements themselves, you can mostly control a bin as if it were an element,
thereby abstracting away a lot of complexity for your application. A pipeline is a
top-level bin. It provides a bus for the application and manages the
synchronization for its children.
GstShark leverages GStreamer's tracing hooks and open-source and standard tracing
and plotting tools to simplify the process of understanding the bottlenecks in your
pipeline.
The profiling tool provides 3 general features that can be used to debug the pipeline:
Console printouts - At the most basic level, you should get printouts from the
traces about the different measurements made. If you know what you are looking
for, you may see it here at runtime.
Gst-plot - A suite of graph generating scripts are included in gst-shark that will
plot different graphs for each tracer metric enabled. This is a powerful tool to
visualize each metric that can be used for deeper debugging.
Bash shortcuts
As part of our creation of the Docker image, we copy some convenient shortcuts to
GstShark:
vim ~/.bashrc
# run gst-plot
gst_plot_debug() {
cd <PATH TO GST-SHARK REPO FOLDER: gst-shark/scripts/graphics>
./gstshark-plot $GST_SHARK_LOCATION -p
cd -
}
Note that we added 4 functions: two sets, an unset, and a plot function. The set
functions enable gst-shark by setting environment variables, the chief of which is
GST_TRACERS, which enables the different trace hooks in the pipeline.
Using GstShark
Let’s say you have a gstreamer app you want to profile. Start by enabling gst-shark:
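The enabling command itself is provided by the bash shortcuts described above; as a hedged sketch of what such a 'set' function exports (the tracer list and output path are illustrative):

```shell
# Enable GstShark tracing for subsequent gst-launch runs. GST_TRACERS
# selects which trace hooks are active; GST_SHARK_LOCATION is where the
# trace output (used later by gstshark-plot) is written.
export GST_DEBUG="GST_TRACER:7"
export GST_TRACERS="cpuusage;proctime;interlatency;framerate"
export GST_SHARK_LOCATION=/tmp/gstshark_output
```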
Then just run your app. You will start seeing all kinds of tracer prints, and when the
pipeline starts playing you should see the graphic plot load.
NOTE: The graph will stay open as long as the pipeline runs. However, if you
have GST_DEBUG_DUMP_DOT_DIR set, then a .dot file will be saved afterwards.
Click this file to reopen the graph.
After you've run a gstreamer pipeline with tracers enabled, you can plot them using
gst-plot. gst-plot will open an Octave window which runs the appropriate script to
plot each tracer. Depending on how much data you have to plot, this can take a while:
Each graph inspects a different metric of the pipeline, it is recommended to read more
about what each one represents here:
CPU Usage
Processing Time
InterLatency
Schedule Time
Buffer
Bitrate
Framerate
Queue Level
Graphic
gst-top-1.0 at the start of the pipeline will analyze and profile the run (for
example: gst-top-1.0 gst-launch-1.0 audiotestsrc ! autovideosink).
Overview
If you want to add a network to the Tappas that is not already supported, then you will
likely need to implement a new postprocess and drawing filter. Fortunately with the use
of the hailofilter, you don't need to create any new gstreamer elements, just provide
the shared object file (.so) that applies your filter!
In this guide we will go over how to create such an so and what mechanisms/structures
are available to you as you create your postprocess.
Getting Started
#include "hailo_frame.hpp"
G_BEGIN_DECLS
void filter(HailoFramePtr hailo_frame);
G_END_DECLS
Yes really, that's it! The hailofilter element does not expect much, just that the
above filter function be provided. We will discuss adding multiple filters in the same
.so later. Note that the filter function takes a HailoFramePtr as a parameter; this
will provide you with the hailo_frame of each passing image.
Implementing filter()
Let's start implementing the actual filter so that you can see how to access and work
with tensors. Start by creating a new file called my_post.cpp. Open it and include the
following:
#include <gst/gst.h>
#include <iostream>
#include "my_post.hpp"
#include "hailo_detection.hpp"
The <gst/gst.h> include provides the gstreamer framework api, the <iostream> will
allow us to print to the console, the "my_post.hpp" includes the header file we just
wrote, and the "hailo_detection.hpp" will provide access to the DetectionObject
struct which represents any detection object that we infer. You can find the source for
DetectionObject in plugins/metadata/hailo_detection.hpp, later we will use it to
attach detected objects to the frame.
For now, add the following implementation for filter() so that we have a working
postprocess we can test:
That should be enough to try compiling and running a pipeline! Next we will see how to
add our postprocess to the Meson project so that it compiles.
################################################
# MY POST SOURCES
################################################
my_post_sources = [
'postprocesses/my_post.cpp',
]
my_post_lib = shared_library('my_post',
my_post_sources,
cpp_args : hailo_lib_args,
link_args: hailo_ld_args,
include_directories: project_inc,
dependencies : plugin_deps + hailo_deps,
gnu_symbol_visibility : 'default',
)
This should give meson all the information it needs to compile our postprocess. In
short, we are providing paths to cpp compilers, linked libraries, included directories,
and dependencies. Where are all these path variables coming from? Great question:
from the parent meson project, you can read that meson file to see what packages and
directories are available at core/hailo/gstreamer/meson.build.
./docker/scripts/install_hailo_gstreamer.sh --skip-hailort
If all goes well you should see some happy green YES, and our .so should appear in
apps/gstreamer/x86/lib/!
See in the above pipeline that we gave the hailofilter the path to libmy_post.so in
the so-path property. So now every time a buffer is received in that hailofilter's
sink pad, it calls the filter() function in libmy_post.so. The resulting app should
just show the original video while our chosen text "My first postprocess!" prints in
the console:
Filter Basics
The hailo_frame has two ways of providing the output tensors of a network: via the
get_tensors() and get_tensors_by_name() functions. The first (which we used here)
returns an std::vector of HailoTensorPtr objects. These are an std::shared_ptr to
a HailoTensor: a class that represents an output tensor of a network. HailoTensor
holds all kinds of important tensor metadata besides the data itself; such as the width,
height, number of channels, and even quantization parameters. You can see the full
implementation for this class at plugins/metadata/hailo_tensor.hpp.
get_tensors_by_name() also returns a HailoTensorPtr for each output layer, but this
time as an std::map that pairs the output layer names with their corresponding
HailoTensorPtr. This can be convenient if you want to perform operations on specific
layers whose names you know in advance.
So now we have a vector of HailoTensorPtr objects, lets get some information out of
one, add the following lines to our filter() function:
Recompile with the same script we used earlier. Run a test pipeline, and this time see
actual parameters of the tensor printed out:
gst-launch-1.0 \
    filesrc location=$TAPPAS_WORKSPACE/apps/gstreamer/x86/detection/detection.mp4 name=src_0 ! \
    decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! queue ! \
    hailonet hef-path=$TAPPAS_WORKSPACE/apps/gstreamer/x86/detection/yolov5m.hef \
        debug=False is-active=true qos=false batch-size=8 ! \
    queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
    hailofilter so-path=$TAPPAS_WORKSPACE/apps/gstreamer/x86/libs/libmy_post.so qos=false debug=False ! \
    videoconvert ! fpsdisplaysink video-sink=ximagesink name=hailo_display sync=true text-overlay=false
With a HailoTensor in hand, you have everything you need to perform your
postprocess operations. You can access the actual tensor values from the HailoTensor
with:
Keep in mind that at this point the data is of type uint8_t; you will have to dequantize
the tensor to float if you want the full precision. Luckily, the quantization parameters
(scale and zero point) are also accessible through the HailoTensor.
std::vector<DetectionObject> demo_detection_objects()
{
    // The detection objects we will eventually return
    std::vector<DetectionObject> objects;
    DetectionObject first_detection = DetectionObject(0.2, 0.2, 0.2, 0.2, 0.99, 1);
    DetectionObject second_detection = DetectionObject(0.6, 0.6, 0.2, 0.2, 0.89, 1);
    objects.push_back(first_detection);
    objects.push_back(second_detection);
    return objects;
}
In this function we are creating two instances of a DetectionObject and pushing them
into a vector that we return. Note that when creating a DetectionObject, we give a
bounding box (xmin, ymin, width, height) along with a confidence and a class_id.
NOTE: It is assumed that the xmin, ymin, width, and height given are a
percentage of the image size (meaning, if the box is half as wide as the width of
the image, then width=0.5). This protects the pipeline's ability to resize buffers
without compromising the correct relative size of the detection boxes.
Looking back at the demo function we just introduced, we are adding two instances of
DetectionObject: first_detection and second_detection. According to the
parameters we saw, first_detection has an xmin 20% along the x axis, and a ymin
20% down the y axis. The width and height are also 20% of the image. The last two
parameters, confidence and class_id, show that this instance has a 99% confidence
for class_id 1. What label does class_id 1 imply? That depends on your dataset!
Notice that the last parameter of DetectionObject is a dataset_id with default 0.
The provided detection drawer, which we will look at later, uses the dataset_id along
with the class_id to look up the proper label within different datasets. Right now a
few datasets are provided out of the box in the Tappas, you can find them in the file
libs/postprocess/common/labels.hpp. The default dataset is COCO, so a class_id of 1
is a person.
Now that we have a couple of DetectionObject in hand, lets add them to the original
hailo_frame. There is a helper function we need in the
libs/postprocess/common/common.hpp file, so include it into my_post.cpp now:
#include "common/common.hpp"
This file will no doubt have other features you will find useful, so it is recommended to
keep the file handy.
With the include in place, let's add the following function call to the end of the
filter() function:
This function takes a hailo_frame and a DetectionObject vector, then adds each
DetectionObject to the hailo_frame. Now that our detections have been added to
the hailo_frame and the postprocess is done, we can clear the tensors vector to
release the memory (add the command tensors.clear(); to the end of the filter()
function to do so). To recap, our whole my_post.cpp should look like this:
#include <gst/gst.h>
#include <iostream>
#include "my_post.hpp"
#include "hailo_detection.hpp"
#include "common/common.hpp"
std::vector<DetectionObject> demo_detection_objects()
{
    // The detection objects we will eventually return
    std::vector<DetectionObject> objects;
    DetectionObject first_detection = DetectionObject(0.2, 0.2, 0.2, 0.2, 0.99, 1);
    DetectionObject second_detection = DetectionObject(0.6, 0.6, 0.2, 0.2, 0.89, 1);
    objects.push_back(first_detection);
    objects.push_back(second_detection);
    return objects;
}
Recompile and run the test pipeline again; if all goes well, you should see the
original video run with no problems! But where are the detections? Don't worry,
they are attached to each buffer; no filter is drawing them onto the image yet.
To see how our detection boxes can be drawn, read on to Next Steps: Drawing.
Next Steps
Drawing
At this point we have a working postprocess that attaches two detection boxes to each
passing buffer. But how do we get the GStreamer pipeline to draw those boxes onto the
image? With another hailofilter of course! Just as we were able to add a
hailofilter with an .so that added detection boxes, we can also add a second
hailofilter to the pipeline that draws those boxes onto the image.
The Tappas already comes with an .so that knows how to draw attached
DetectionObject instances: libdetection_draw.so. You can find the source for this
.so at libs/postprocesses/detection_draw.cpp; inside are good examples not only of
how to extract and draw DetectionObject instances from a hailo_frame, but also of
how to extract landmarks and draw them.
gst-launch-1.0 filesrc location=$TAPPAS_WORKSPACE/apps/gstreamer/x86/detection/detection.mp4 name=src_0 ! \
decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! queue ! \
hailonet hef-path=$TAPPAS_WORKSPACE/apps/gstreamer/x86/detection/yolov5m.hef debug=False is-active=true qos=false batch-size=8 ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
hailofilter so-path=$TAPPAS_WORKSPACE/apps/gstreamer/x86/libs/libmy_post.so qos=false debug=False ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
hailofilter so-path=$TAPPAS_WORKSPACE/apps/gstreamer/x86/libs/libdetection_draw.so qos=false debug=False ! \
videoconvert ! fpsdisplaysink video-sink=ximagesink name=hailo_display sync=true text-overlay=false
Run the expanded pipeline above to see the original video, but this time with the two
detection boxes we added!
As expected, both boxes are labeled as person, and each is shown with its assigned
confidence. Obviously, the two boxes don't move or match any object in the video;
this is because we hardcoded their values for the sake of this tutorial. It is up to you to
extract the correct numbers from the inferred tensors of your network; as you can see
from the postprocesses already implemented in the Tappas, each network can be
different. We hope that this guide gives you a strong starting point on your
development journey, good luck!
A single shared object can also declare several post-process functions. Consider this header from the provided YOLO postprocess:
#ifndef _HAILO_YOLO_POST_HPP_
#define _HAILO_YOLO_POST_HPP_
#include "hailo_frame.hpp"
G_BEGIN_DECLS
void filter(HailoFramePtr hailo_frame);
void yolov3(HailoFramePtr hailo_frame);
void yolov4(HailoFramePtr hailo_frame);
void yolov5(HailoFramePtr hailo_frame);
void yolov5_no_persons(HailoFramePtr hailo_frame);
G_END_DECLS
#endif
Any of the functions declared here can be given as a function-name property to the
hailofilter element. Consider this pipeline for running the Yolov5 network:
gst-launch-1.0 filesrc location=$TAPPAS_WORKSPACE/apps/gstreamer/x86/detection/detection.mp4 name=src_0 ! \
decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
hailonet hef-path=$TAPPAS_WORKSPACE/apps/gstreamer/x86/detection/yolov5m.hef debug=False is-active=true qos=false batch-size=1 ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
hailofilter function-name=yolov5 so-path=$TAPPAS_WORKSPACE/apps/gstreamer/x86/libs/libyolo_post.so qos=false debug=False ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
hailofilter so-path=$TAPPAS_WORKSPACE/apps/gstreamer/x86/libs/libdetection_draw.so qos=false debug=False ! \
videoconvert ! fpsdisplaysink video-sink=ximagesink name=hailo_display sync=true text-overlay=false
Overview
If you want to add a network to the TAPPAS that is not already supported, you will
likely need to implement a new postprocess. Fortunately, with the hailopython
element you don't need to create any new GStreamer elements; just provide a
Python module that applies your post-processing!
In this guide we will go over how to create such a Python module and what
mechanisms/structures are available to you as you create your postprocess.
Getting Started
hailopython requires a module and a Python function.
Notice that np.array has a parameter that determines whether we copy the
memory or use the original buffer.
There are some other methods in HailoTensor; you are welcome to run
dir(my_tensor) or help(my_tensor) to explore them.
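The copy-versus-alias distinction matters for performance and for correctness: with copy=False, writes through the new array are visible in the original buffer. A small standalone NumPy demonstration (unrelated to any Hailo-specific API):

```python
import numpy as np

buf = np.zeros(4, dtype=np.uint8)    # stands in for a tensor's raw buffer
view = np.array(buf, copy=False)     # no copy: aliases the original memory
copied = np.array(buf, copy=True)    # independent copy of the data

view[0] = 7           # writing through the view...
assert buf[0] == 7    # ...is visible in the original buffer
assert copied[0] == 0  # the copy is unaffected
```

Aliasing avoids a per-buffer memcpy in the hot path, but copy the data if you need to keep it past the lifetime of the GStreamer buffer.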
Adding results
After you process your net results and come up with post-processed results, you can
use them however you want. Here we will show how to add them to the original
image so that the hailooverlay element can draw them later. To add a post-processed
result to the original image, use the roi.add_object method. This method
adds a HailoObject to our image. Several types of objects are currently supported:
hailo.HailoClassification - Classification of the image.
hailo.HailoDetection - Detection in the image.
hailo.HailoLandmarks - Landmarks in the image.
You can create one of these objects and then add it with the roi.add_object method.
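The attach-then-draw pattern can be sketched without the hailo bindings. Below, MockRoi and MockClassification are hypothetical stand-ins used only to show the flow; in a real post-process you would call roi.add_object on the frame's ROI with one of the hailo object types listed above.

```python
# Hypothetical stand-ins for the hailo bindings, for illustration only.
class MockClassification:
    def __init__(self, classification_type: str, label: str, confidence: float):
        self.type = classification_type
        self.label = label
        self.confidence = confidence

class MockRoi:
    def __init__(self):
        self.objects = []

    def add_object(self, obj):
        """Mirrors roi.add_object: attach a result so a later element can draw it."""
        self.objects.append(obj)

def postprocess(roi, scores):
    # Pick the top class and attach it to the ROI, as a real post-process
    # would do with a hailo.HailoClassification.
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    roi.add_object(MockClassification("imagenet", label, confidence))

roi = MockRoi()
postprocess(roi, {"cat": 0.12, "dog": 0.85})
print(roi.objects[0].label)  # dog
```

Once the object is attached, your module is done; drawing is a separate element's job, which keeps post-processing and visualization decoupled.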
Next Steps
Drawing
In order to draw your postprocessed results on the original image use the hailooverlay
element. It is already familiar with our HailoObject types and knows how to draw
classifications, detections, and landmarks onto the image.
gst-launch-1.0 filesrc location=$TAPPAS_WORKSPACE/apps/gstreamer/x86/detection/detection.mp4 name=src_0 ! \
decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
hailonet hef-path=$TAPPAS_WORKSPACE/apps/gstreamer/x86/detection/yolov5m.hef debug=False is-active=true qos=false batch-size=8 ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
hailopython module=$TAPPAS_WORKSPACE/apps/gstreamer/x86/detection/my_module.py qos=false ! \
queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
hailooverlay qos=false ! videoconvert ! \
fpsdisplaysink video-sink=ximagesink name=hailo_display sync=true text-overlay=false
This is the standard detection pipeline with a python module for post-processing.
import hailo
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst