SRT Deployment Guide
HVS-ID-DG-SRT
Edition Notice
© 2015-2020 Haivision. All rights reserved.
This edition and the products it describes contain proprietary and confidential information. No part of
this content may be copied, photocopied, reproduced, translated or reduced to any electronic or
machine-readable format without prior written permission of Haivision. If this content is distributed with
software that includes an end-user agreement, this content and the software described in it, are
furnished under license and may be used or copied only in accordance with the terms of that license.
Except as permitted by any such license, no part of this content may be reproduced, stored in a retrieval
system, or transmitted, in any form or by any means, electronic, mechanical, recording, or otherwise,
without the prior written permission of Haivision Systems, Inc. Please note that the content is protected
under copyright law even if it is not distributed with software that includes an end-user license
agreement.
About Haivision
Founded in 2004, Haivision is now a market leader in enterprise video and video streaming technologies.
We help the world’s top organizations communicate, collaborate and educate. Recognized as one of the
most influential companies in video by Streaming Media and one of the fastest growing companies by
Deloitte’s Technology Fast 500, organizations big and small rely on Haivision solutions to deliver video.
Headquartered in Montreal, Canada, and Chicago, USA, we support our global customers with regional
offices located throughout the United States, Europe, Asia and South America.
Trademarks
The Haivision logo, Haivision, and certain other marks are trademarks of Haivision. CoolSign is a
registered trademark licensed to Haivision Systems, Inc. All other brand or product names identified in
this document are trademarks or registered trademarks of their respective companies or organizations.
Disclaimer
The information contained herein is subject to change without notice. Haivision assumes no responsibility
for any damages arising from the use of this content, including but not limited to, lost revenue, lost data,
claims by third parties, or other damages.
While every effort has been made to provide accurate and timely information regarding this product and
its use, Haivision Systems Inc. shall not be liable for errors or omissions contained herein or for incidental
or consequential damages in connection with the furnishing, performance, or use of this material.
Contents
Edition Notice
  About Haivision
  Trademarks
  Disclaimer
About This Document
  Conventions
    Typographic Conventions and Elements
    Action Alerts
  Obtaining Documentation
  Getting Service Support
Introduction to SRT
  About the SRT Alliance
SRT Solutions
  Point to Point & Interactive
  Contribution & Aggregation
  Point to Multi-Point
  SRT Stream Flipping
SRT Version History & Compatibility
Basic SRT Concepts
  Source & Destination
  SRT Connection Modes
    SRT Connection Mode in Haivision Products
    SRT Connection Mode Examples
  SRT and Firewalls
    Example 1: Simple SRT Stream
    Example 2: Complex SRT Stream
Deployment Scenarios
  Scenario 1: Streaming over a LAN or private WAN
  Scenario 2: Internet streaming using Caller and Listener modes
    Firewall Notes
  Scenario 3: Internet streaming using Rendezvous mode
  Rendezvous Mode and Firewalls
    Connection Tracking
    Other Considerations
About This Document
Conventions
Typographic Conventions and Elements
Italics: Used for the introduction of new terminology, for words being used in a different context, and
for placeholder or variable text.
bold: Used for strong emphasis and items that you click, such as buttons.
Monospaced: Used for code examples, command names, options, responses, error messages, and to
indicate text that you enter.
Action Alerts
The following alerts are used to advise and counsel that special actions should be taken.
Tip
Indicates highlights, suggestions, or helpful hints.
Note
Indicates a note containing special instructions or information that may apply only in special cases.
Important
Indicates an emphasized note. It provides information that you should be particularly aware of in
order to complete a task and that should not be disregarded. This alert is typically used to prevent
loss of data.
Caution
Indicates a potentially hazardous situation which, if not avoided, may result in damage to data or
equipment. It may also be used to alert against unsafe practices.
Warning
Indicates a potentially hazardous situation that may result in physical harm to the user.
Obtaining Documentation
This document was generated from the Haivision InfoCenter. To ensure you are reading the most up-to-
date version of this content, access the documentation online at https://doc.haivision.com. You may
generate a PDF at any time of the current content. See the footer of the page for the date it was
generated.
The Secure Reliable Transport (SRT) Deployment Guide provides guidance on setting up and deploying
SRT technology, which optimizes streaming performance across unpredictable networks, such as the
Internet. For general information on creating and managing streams, please refer to the specific
documentation for your product.
Introduction to SRT
Secure Reliable Transport (SRT) is an open source transport technology that optimizes streaming
performance across unpredictable networks, such as the Internet.
SRT is applied to contribution and distribution endpoints as part of a video stream workflow to deliver
the best quality and lowest latency video at all times.
As audio/video packets are streamed from a source to a destination device, SRT detects and adapts to
the real-time network conditions between the two endpoints. SRT helps compensate for jitter and
bandwidth fluctuations due to congestion over noisy networks, such as the Internet. Its error recovery
mechanism minimizes the packet loss typical of Internet connections. And SRT supports AES encryption
for end-to-end security, keeping your streams safe from prying eyes.
SRT has its roots in the UDP-based Data Transfer (UDT) protocol. While UDT was designed for high
throughput file transmission over public networks, it does not do well with live video. SRT is a
significantly modified version that supports live video streaming.
Low latency video transmission across IP based networks typically takes the form of MPEG-TS unicast or
multicast streams using the UDP protocol. This solution is perfect for protected networks, where any
packet loss can be mitigated by enabling forward error correction (FEC). Achieving the same low latency
between sites in different cities, countries or even continents is more challenging. While it is possible with
satellite links or dedicated MPLS networks, these are expensive solutions. Public internet connectivity,
while less expensive, imposes significant bandwidth overhead to achieve the necessary level of packet
loss recovery.
Even though UDT was not designed for live streaming, its packet loss recovery mechanism provided an
interesting starting point. The original version of SRT included new packet retransmission functionality
that reacted immediately to packet loss to enable live streaming.
To achieve low latency streaming, SRT had to address timing issues. The characteristics of a stream from
a source network are completely changed by transmission over the public internet, which introduces
delays, jitter, and packet loss. This, in turn, leads to problems with decoding, as the audio and video
decoders do not receive packets at the expected times. The use of large buffers helps, but latency is
increased.
SRT includes a mechanism that recreates the signal characteristics on the receiver side, dramatically
reducing the need for buffering. This functionality is part of the SRT protocol itself, so once data comes
out of an SRT connection on the receiver side, the stream characteristics have been properly recovered.
Initially developed by Haivision Systems Inc., the SRT protocol was released as open source in April 2017
in partnership with Wowza Media Systems Inc. Open source SRT is distributed under MPL-2.0, which was
chosen because it strikes a balance between driving adoption for open source SRT, while encouraging
contributions to improve upon it by the community of adopters. Any third party is free to use the SRT
source in a larger work regardless of how that larger work is compiled. Should they make source code
changes, they would be obligated to make those changes available to the community.
In May 2017, Haivision and Wowza founded the SRT Alliance (www.srtalliance.org), a consortium
dedicated to the continued development and adoption of the protocol.
In March 2020, the SRT Protocol RFC was submitted to the IETF as a draft.
About the SRT Alliance
The SRT Alliance is a consortium dedicated to promoting the interoperability of video streaming
solutions and fostering collaboration with industry leaders to achieve lower latency internet video
transport. The SRT Alliance is open to new members. Companies that want to participate actively in
growing the ecosystem of SRT in low latency video streaming workflows can contact us at
[email protected].
SRT Solutions
SRT is a point-to-point, connection-oriented protocol. Each SRT stream is characterized by the transport
of multimedia data and control messages, with only one UDP connection used per SRT stream.
Nonetheless, it is possible to configure SRT solutions that encompass a variety of situations.
Point to Point & Interactive
SRT can be easily provisioned for straightforward connections within and between facilities to achieve
low latency, low cost video distribution.
It can be useful over noisy, unreliable LANs. In large organizations with many users on the same
LAN, congestion and packet drops are common — video is very sensitive to this. Even inside the same
building on a controlled LAN, SRT can enhance the experience.
Some networks have VLANs with dedicated bandwidth (where routers prioritize traffic) that require a
deep knowledge of routers and switches. SRT removes the need for IT intervention to get video through
the network.
The low latency achievable with SRT is even suitable for interactive applications over the Internet.
Contribution & Aggregation
Multiple SRT source and destination devices can be configured to feed high-demand video
infrastructures.
Multiple SRT streams can be aggregated and re-distributed via unicast and multicast into recording, IPTV
and digital signage services.
Point to Multi-Point
While SRT operates on point-to-point connections, Haivision's Media Gateway is designed to support
multiple such connections. A single incoming SRT stream can be redistributed via one or more Media
Gateways to multiple SRT destination devices.
SRT Stream Flipping
SRT destination devices (such as the Makito X Decoder and Media Gateway) support "stream flipping" — the
ability to convert back and forth between an SRT stream and a standard MPEG Transport Stream (TS).
This is accomplished by de-encapsulating an incoming SRT stream and re-streaming it as a TS/UDP
stream (and vice versa), and is typically done to allow devices (on a local LAN) that do not support SRT
to have access to incoming SRT stream content.
For more information on the numerous ways in which SRT can be deployed across a wide array of
solutions, please refer to our web site:
https://www.haivision.com/industries/
https://www.srtalliance.org/members/
SRT Version History & Compatibility
SRT is designed to be backwards compatible, so that new or upgraded products will be able to support
SRT interactions with older ones. Note, however, that newer versions of SRT may have features
unavailable on older ones.
The table below shows the availability of SRT on various versions of Haivision products (as of August
2020):
[Table: SRT version availability by Haivision product. The columns cover SRT versions 1.0.4, 1.0.5, 1.0.6,
1.0.7, 1.1.0, 1.1.2, 1.1.3, 1.1.4, 1.1.6, 1.2.0, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.4.0, and 1.4.1. The rows list
product releases, including Android 3.1.0, Makito X4 Decoder 1.0.0, Play HMP 2000/4000 STB 3.0/3.1/3.2,
and standalone SRT 1.0.0/1.1.0/1.2.0. See the current version of this table at https://doc.haivision.com.]
Basic SRT Concepts
Source & Destination
Note
It is important to understand that with an SRT stream, the source is the device that is sending the
content (audio and/or video data), while the destination is the device receiving the content.
Elsewhere, you may encounter references to an SRT sender and an SRT receiver, but to avoid
confusion in this document we will be using the terms source and destination.
In some cases, a device can act as both the source and the destination. For example, a Haivision Media
Gateway may act as a destination while receiving an SRT stream from a Makito X Encoder, and then
become a source device as it re-streams to a Makito X Decoder. 1
1. There is room for confusion inherent in discussions of SRT, where the terms “caller/listener”,
“sender/receiver”, and “source/destination” often appear. But “caller/listener” has no direct
correspondence with “source/destination” — each of the latter can be either of the former. And a
destination device can be any intermediate node in an SRT streaming chain, such as a firewall. You
can “call” an SRT source or an SRT destination.
SRT Connection Modes

Listener
Description: Sets a device to wait for a request to open an SRT connection. The Listener device only
needs to know that it should listen for SRT packets on a certain port.
When to use: To accept a point-to-point connection initiated by a Caller:
1. on a source or destination device that is behind a firewall over which you have control and can open
a port.
2. on a source or destination device that is not behind a firewall, or that is exposed directly to the
Internet.
3. when you know that another device will initiate the session.
Example: Set a Makito X Decoder to Listener mode to accept an SRT connection from a Makito X
Encoder.

Rendezvous
Description: Allows two devices to negotiate an SRT connection over a mutually agreed upon port. Both
source and destination must be in Rendezvous mode.
When to use: To establish a point-to-point SRT connection when one or both devices are behind
firewalls. Once certain settings are in place on the firewall, SRT connections can be initiated without
further intervention by a network administrator.
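The "when to use" guidance above can be reduced to a simple decision rule. The following Python sketch
is purely illustrative (it is not part of any Haivision product or of the SRT library) and assumes you know
whether each endpoint is behind a firewall and whether you can open an inbound port on it:

def suggest_connection_mode(local_behind_firewall, local_port_openable, remote_behind_firewall):
    """Suggest an SRT connection mode for the local device (illustrative only).

    Based on the guidance above:
      - Listener: the local device is reachable (no firewall, or a firewall port you can open)
        and waits for the other device to initiate the session.
      - Caller: the remote device is reachable, so the local device initiates the session.
      - Rendezvous: both devices sit behind firewalls with no inbound port available.
    """
    if not local_behind_firewall or local_port_openable:
        return "Listener"
    if not remote_behind_firewall:
        return "Caller"
    return "Rendezvous"

# Example: a Makito X Decoder on an open LAN accepting a call from an encoder behind a firewall.
print(suggest_connection_mode(local_behind_firewall=False,
                              local_port_openable=False,
                              remote_behind_firewall=True))   # -> "Listener"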
Related Topics:
• SRT Connection Mode in Haivision Products
• Kraken
• KB
• Makito X Decoder
• Makito X Encoder
• SRT/Media Gateway
• Media Platform
• Play for Mobile
• Play Pro for Mobile
SRT Connection Mode in Haivision Products
[Screenshots of the SRT Connection Mode menu in Kraken 3.3 (input streams), Haivision KB 5.7.0
(channels), Makito X Decoder 2.3 (input streams), Makito X Encoder 2.3 (output streams), Haivision
Gateway 3.4 (routes), and Haivision Media Platform 3.2 (sources).]
Note
Regardless of which device is in which mode, when the SRT handshake is completed both source
and destination continue to exchange control packets containing information about network
conditions, dropped packets, etc. Once communication is established, the notion of which device is
Caller and which is Listener becomes unimportant. What matters is the source/destination
relationship, which is decoupled from the caller/listener relationship.
In Figure 9 below, the roles are reversed — the SRT source device is in Listener mode, and the SRT
destination device is in Caller mode. The SRT destination (Caller) initiates the handshake by sending a
series of control packets in a UDP stream (1). When the SRT source (Listener) receives these control
packets, it responds by sending its own (2). Once the handshake has successfully completed, the SRT
source device begins adding media packets to the UDP stream (3).
In Figure 10 below, both the SRT source device and the SRT destination device are in Rendezvous mode.
Both devices send a series of control packets in a UDP stream (1). Once the handshake has successfully
completed, the SRT source device begins adding media packets to the UDP stream (2).
Example 1 illustrates a simple SRT stream scenario in which an SRT source device attempts to stream
over the Internet to an SRT destination device behind a firewall. Example 2 illustrates a slightly more
complex scenario in which an SRT source device behind a firewall attempts to stream over the Internet to
an SRT destination device that is also behind a firewall.
Note
The order in which a firewall performs Network Address
Translation and packet filtering will have an impact on how the
packet filtering rules are configured.
Deployment Scenarios
This section describes three common scenarios for deploying SRT:
• Scenario 1: Streaming over a private LAN and/or WAN, with no firewall
• Scenario 2: Streaming over a public network with both the source and destination behind a firewall
(Caller/Listener mode)
• Scenario 3: Streaming over a public network with both the source and destination behind a firewall
(Rendezvous mode)
In real-world applications, elements of these scenarios can be employed in various ways.
Note
Some of the example values provided in these scenarios (IP addresses, port numbers, etc.) are for
illustrative purposes only. For general information on creating and managing streams, please refer
to the specific documentation for your product(s), available from the Haivision InfoCenter at
https://doc.haivision.com/
Scenario 1: Streaming over a LAN or private WAN
Using the settings in the table below, create and start an Output Stream on the Makito X Encoder (the
SRT source device):
Address (192.168.2.20): The target address for the SRT stream, which is the IP address of the Makito X
Decoder.
Source Port (20000): The UDP source port for the SRT stream, which is the unique port over which the
Makito X Encoder will be sending the SRT stream. If you do not specify this UDP source port, an
ephemeral source port will be assigned (between 32768 and 61000).
Destination Port (30000): The port on which the Makito X Decoder will be listening.
Using the settings in the table below, create an Input Stream on the Makito X Decoder (the SRT
destination device):
Mode (Listener): The Makito X Decoder will wait for the source device to initiate the SRT session.
Destination Port (30000): This is the port on which the Makito X Decoder will be listening.
Note
In the figure below, we are using the same port assignments for both Caller and its firewall to
simplify the scenario. The Caller could, in fact, be using any other port, as long as its firewall had
the appropriate mapping to allow return traffic back to the Makito X Encoder.
The Makito X Encoder and Decoder will handshake and establish an SRT session. The encoder will send
the video stream to the decoder, which will process the stream and return control packets that include
network throughput, latency and other statistics. The encoder can use this information to adapt its
transmission (resend lost packets, adjust bit rate, etc.).
Note that when the SRT handshake is completed both source and destination continue to exchange
control packets. Once an SRT connection/session is established, the Caller or Listener designation
becomes unimportant. What matters is the source/destination relationship, or video flow, which is
decoupled from the caller/listener relationship.
Scenario 2: Internet streaming using Caller and Listener modes
Using the settings in the table below, create and start an Output Stream on the Makito X Encoder (the
SRT source device):
Address (198.51.100.20): The target address for the SRT stream, which is the public IP address of
Firewall B (at the destination).
Source Port (20000): The unique UDP source port for the SRT stream. You can leave the default value
(Auto-assign), in which case an ephemeral port in the range of 32768 to 61000 is assigned, or, if required
by your organization's IT policies, enter a specific (static) port number. If you use Auto-assign, then
Firewall A must be configured to map ANY source port from its local side to a specific port on its public
side so that return traffic can be directed to the Makito X Encoder.
Destination Port (30000): The port over which the Makito X Decoder will be listening. This is the publicly
mapped port number for the SRT destination device (i.e. the port that Firewall B opens for the SRT
session). This port must be known (it can't be "any").
Note
In the figure below, we are using the same port assignments (20000) for both Caller and its
firewall to simplify the scenario. The Caller could, in fact, be using any other port, as long as its
firewall had the appropriate mapping to allow return traffic back to the Makito X Encoder.
Using the settings in the table below, create an outbound NAT rule that allows bidirectional UDP traffic,
with a port forwarding entry for incoming traffic on the firewall's public IP address/port that forwards it
to the Makito X Encoder's IP address/port number:
Source Port (20000): In this case, we are using a static port assignment for the source port, but if
auto-assigned it can be anything within the ephemeral port range (typically 32768 to 61000 on Linux
devices).
Destination Port (30000): This is the port over which the Makito X Decoder will be listening.
Outbound NAT Source Port (20000): Your firewall must support Outbound NAT Source Port (which
disables NAT port rewrite). Otherwise, Rendezvous mode may be required (see Scenario 3).
Using the settings in the table below, create an inbound NAT rule to enable forwarding of SRT traffic to
the Makito X Decoder's IP address/port number:
Source Port (20000): This must match the Outbound NAT Source Port on the firewall at the source
(Firewall A).
Destination Port (30000): This is the public (external) port of the firewall at the destination (Firewall B),
which in this example is also the port over which the Makito X Decoder will be listening (dstport).
Redirect Target IP (192.168.2.20): This is the address of the Makito X Decoder (the internal destination
IP).
Redirect Target Port (30000): This is the port over which the Makito X Decoder will be listening (the
internal destination port, or dstport).
Note
The Redirect Target IP and Port are the internal IP and port number that the destination device is
using. Destination IP and Port are the firewall’s external interface. The port numbers don’t have to
be the same. For clarity, it may be useful to always use the same port number, but this is up to your
Firewall Administrator to decide.
Using the settings in the table below, create a packet filtering rule to allow SRT packets to pass freely to
and from the Makito X Decoder's IP address/port number:
Source Port (20000): This must match the Outbound NAT Source Port on the firewall at the source.
Destination IP (192.168.2.1 or 198.51.100.20): Depending on your firewall, the NAT rules may be applied
before or after the packet filtering rules. This will impact the filtering rule definition. If the NAT is applied
before, you have to specify the Firewall B internal IP address. If the NAT is applied after, you have to
specify the Firewall B public IP address.
Destination Port (30000): This is the port over which the Makito X Decoder will be listening (dstport).
Using the settings in the table below, create an Input Stream on the Makito X Decoder (the SRT
destination device):
Mode (Listener): The Makito X Decoder will wait for the source device to initiate the SRT session.
Destination Port (30000): This is the port on which the Makito X Decoder will be listening (the port to
which Firewall B will be forwarding SRT packets). If you have a NAT translation rule on Firewall B, the
Destination Port is the port to which the rule will be forwarding packets.
Note
In the figure below, we are using the same port assignments (20000) for both Caller and its
firewall to simplify the scenario. The Caller could, in fact, be using any other port, as long as its
firewall had the appropriate mapping to allow return traffic back to the Makito X Encoder.
Once all settings have been applied, the Makito X Encoder and Decoder will handshake and establish an
SRT session. The encoder will send the video stream to the decoder, which will process the stream and
return control packets that include network throughput, latency and other statistics. The encoder can use
this information to adapt its transmission (resend lost packets, adjust bit rate, etc.).
Note that when the SRT handshake is completed both source and destination continue to exchange
control packets. Once an SRT connection/session is established, the Caller or Listener designation
becomes unimportant. What matters is the source/destination relationship, or video flow, which is
decoupled from the caller/listener relationship.
Firewall Notes
• If the source device’s port is auto-assigned, the firewall at the source must have an outbound NAT
rule for [source port] set to “any”.
• If the source device’s port is specified, then the same value should be used in the outbound NAT
rule.
• If a destination firewall has a filtering rule that matches a source port with a source IP, you must
disable outbound port rewrite on the source firewall. Disabling this option allows the source firewall
to map any port from the SRT source device to a unique, predefined port after the NAT rules have
been applied.
• Depending on your firewall, the NAT rules may be applied before or after the packet filtering rules.
This will affect the filtering rule definition. If the NAT rules are applied before, you have to specify
the firewall’s internal IP address. If the NAT rules are applied after, you have to specify the firewall’s
public IP address.
Important
Point-to-point sessions through firewalls can be done in reverse, with the SRT source device in
Listener mode and the SRT destination device in Caller mode.
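As a recap of this scenario's port relationships, the Python sketch below (illustrative only, not tied to
any particular firewall product; the dictionary keys are hypothetical names) checks that the encoder,
Firewall A, Firewall B, and decoder settings from the tables above agree with one another:

def check_scenario2_ports(encoder, firewall_a, firewall_b, decoder):
    """Return a list of inconsistencies in the port relationships described above."""
    problems = []
    # The encoder's destination port must be Firewall B's public (destination) port.
    if encoder["destination_port"] != firewall_b["public_port"]:
        problems.append("Encoder destination port must match Firewall B's public port")
    # Firewall B's source-port match must equal Firewall A's Outbound NAT Source Port.
    if firewall_b["expected_source_port"] != firewall_a["outbound_nat_source_port"]:
        problems.append("Firewall B source-port rules must match Firewall A's Outbound NAT Source Port")
    # Firewall B must redirect to the port the decoder is actually listening on.
    if firewall_b["redirect_target_port"] != decoder["listen_port"]:
        problems.append("Firewall B's redirect target port must match the decoder's listening port")
    return problems

# Values from the Scenario 2 tables above.
print(check_scenario2_ports(
    encoder={"destination_port": 30000},
    firewall_a={"outbound_nat_source_port": 20000},
    firewall_b={"public_port": 30000, "expected_source_port": 20000, "redirect_target_port": 30000},
    decoder={"listen_port": 30000},
))   # -> [] (no inconsistencies)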
Scenario 3: Internet streaming using Rendezvous mode
Using the settings in the table below, create and start an Output Stream on the Makito X Encoder (the
SRT source device):
Address (198.51.100.20): This is the target address for the SRT stream, which is the public IP address of
the firewall at the destination.
Source Port (20000): For Rendezvous mode, source and destination ports are the same.
Destination Port (20000): This is the port over which the Makito X Decoder will be listening.
Make sure that the firewalls between the source and destination devices have port rewriting turned off
(i.e. static port mapping must be allowed). In this scenario, for example, this is necessary to allow the
Makito X Encoder and Decoder to establish an SRT session over port 20000.
Using the settings in the table below, create an Input Stream on the Makito X Decoder (the SRT
destination device):
Source Port (20000): In Rendezvous mode, the source port must be the same as the destination port.
Destination Port (20000): The port on which the Makito X Encoder will be listening.
Rendezvous Mode and Firewalls
Connection Tracking
Stateful firewalls maintain a connection tracking table, which is dynamically built based on actual traffic
passing through the firewall.
In a connection tracking table a typical entry might consist of UDP traffic from a device with a source IP
and port number, which is converted by NAT, and then connected to the public IP of a firewall on that
destination port:
UDP 192.168.1.10:20000 → 203.0.113.10:20000 → 198.51.100.20:20000
If you have a call coming in from the other endpoint, then you would see the same entry in reverse in the
connection tracking table:
UDP 198.51.100.20:20000 → 203.0.113.10:20000 → 192.168.1.10:20000
Typically, an encoder would start an SRT session in Caller mode, with a decoder in Listener mode waiting
for control packets. In Rendezvous, both are sending control packets to initiate the connection. The
outgoing packets create an entry in the table. When the incoming packets arrive, they create a
complementary entry. This tricks the firewall into thinking that the inbound packets are the responses to
the outbound ones, and so it permits the packets to pass through for the duration of the streaming
session.
Rendezvous mode allows the source and destination devices to “punch out” holes from the inside of their
respective firewalls. The only conditions are that both devices must be in Rendezvous mode, both must
be using the same port number, and there must be an outbound entry set up on each firewall so that the
source port number is preserved (i.e. so that there is no need to create input entry rules).
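To make the connection-tracking behavior concrete, here is a small, self-contained Python simulation
(purely illustrative, not firewall or SRT code). An outbound packet adds an entry to the tracking table, and
an inbound packet is accepted only if it matches an existing entry in reverse, which is exactly what the
simultaneous Rendezvous handshake relies on. For simplicity, the simulation uses the public (post-NAT)
addresses from the example entries above, which assumes the firewalls preserve the source port:

class StatefulFirewall:
    """Minimal model of a stateful firewall's UDP connection tracking table."""
    def __init__(self):
        self.tracked = set()   # entries: (local_ip, local_port, remote_ip, remote_port)

    def outbound(self, local_ip, local_port, remote_ip, remote_port):
        # Outbound traffic always creates a tracking entry.
        self.tracked.add((local_ip, local_port, remote_ip, remote_port))

    def inbound_allowed(self, remote_ip, remote_port, local_ip, local_port):
        # Inbound UDP is only allowed if it looks like a reply to tracked outbound traffic.
        return (local_ip, local_port, remote_ip, remote_port) in self.tracked

fw_a = StatefulFirewall()   # in front of the SRT source (public IP 203.0.113.10 above)
fw_b = StatefulFirewall()   # in front of the SRT destination (public IP 198.51.100.20 above)

# In Rendezvous mode, both devices send control packets at the same time on the same port.
fw_a.outbound("203.0.113.10", 20000, "198.51.100.20", 20000)
fw_b.outbound("198.51.100.20", 20000, "203.0.113.10", 20000)

# Each firewall now treats the peer's packets as replies to its own outbound traffic.
print(fw_a.inbound_allowed("198.51.100.20", 20000, "203.0.113.10", 20000))   # True
print(fw_b.inbound_allowed("203.0.113.10", 20000, "198.51.100.20", 20000))   # True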
Other Considerations
• It is important that the port number used is unique. Using a firewall’s option to accept any
(ephemeral) port may be problematic in a scenario where there are multiple encoders sending out
streams.
• If the firewall at the source is set to allow the source device to use any (ephemeral) port, then
outbound port rewriting must be disabled for Rendezvous mode. The port assignment must be
static so the SRT participants can meet.
• If either the SRT source or destination device is behind a firewall that does not allow outgoing
traffic, you will need to configure the firewall to allow outgoing SRT/UDP traffic.
• There are still occasions where it would be necessary (or better) to manually configure Caller and
Listener settings to traverse firewalls at each end. For example, in high security environments that
employ “egress filtering”, firewalls are often configured to prevent arbitrary outbound ports from
being assigned. An internal address cannot send to a random outbound port. In such cases, both
the source and destination devices would be registered at the firewall, which would block
everything else.
• Background
• Configuring an SRT Stream
• SRT Parameters
• Round Trip Time
• Latency
• Packet Loss Rate
• RTT Multiplier
• Bandwidth Overhead
• Encrypting SRT Streams
• Optimizing SRT Performance
• Statistics
• SRT
• Delays
• Bandwidth Used
• Graph Sample Rates
• SRT Logs
Background
Along with the standard parameters associated with any streaming output, there are other important
values you must specify for an SRT stream. To introduce you to these parameters, let’s take a look at a
hypothetical example of an SRT stream being sent to a destination device, and see what happens over
time.
The diagram below depicts a stream being sent over a channel of some kind, such as a LAN or Internet
connection, with a certain capacity. Packets being sent from a source are stored at the destination in a
buffer. Let’s say at some point there is a total link failure, and then, shortly after, the connection is re-
established. So, for a short period of time, the destination is receiving no data.
SRT deals with such situations in two ways. The first is that the destination relies on its buffer to maintain
the stream output at its end. The size of this buffer is determined by the SRT Latency setting. Once the
link is re-established, the source is able to resume sending packets, including re-sending the packets lost
during the link failure.
To handle this extra “burst” of packets, an SRT stream allows for a certain amount of overhead. This
Bandwidth Overhead is calculated such that, in a worst case scenario, the source can deliver the number
of packets “lost” during the link failure (area A) over a “burst time” (area B), where area B must be equal
to area A.
The maximum time period for which a burst of lost packets can be sustained without causing an artifact
is:
SRT Latency (ms) * Bandwidth Overhead (%) ÷ 100
Example:
200 ms * 20% ÷ 100 = 40 ms
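As a quick check of the formula above, here is a minimal Python sketch that restates the calculation used
in the example:

def max_burst_time_ms(srt_latency_ms, bandwidth_overhead_pct):
    """Maximum burst of lost packets (in ms) that can be recovered without an artifact."""
    return srt_latency_ms * bandwidth_overhead_pct / 100.0

print(max_burst_time_ms(200, 20))   # 40.0 ms, matching the example above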
• If iperf does not work or is not available, set up a test SRT stream and use the Max Bandwidth or
Path Max Bandwidth value from the Statistics page.
4. Using the table from section RTT Multiplier, find the RTT Multiplier and Bandwidth Overhead values
that correspond to your measured Packet Loss Rate.
5. Determine your SRT Latency value using the following formula (a worked sketch of steps 5 and 6
follows this list):
SRT Latency = RTT Multiplier * RTT
• If RTT < 20, use the Minimum SRT Latency value in the table mentioned above.
6. Determine the stream bitrate.
• The stream bitrate is the sum of the video, audio and metadata essence bitrates, plus an SRT
protocol overhead. It has to respect the following constraint:
Channel Capacity > SRT Stream Bandwidth * (100 + Bandwidth Overhead) ÷ 100
If this is not respected, then the video/audio/metadata bitrate on the encoder must be reduced
until it is respected.
• It is recommended that a significant amount of headroom be added to cushion against varying
channel capacity, so a more conservative constraint would be:
0.75 * Channel Capacity > SRT Stream Bandwidth * (100 + Bandwidth Overhead) ÷ 100
7. Determine if the SRT stream has been set up correctly.
• The best way to determine this is to set up a test SRT stream and look at the SRT Send Buffer
graph on the Statistics page of the source device. The send buffer value should never exceed the
SRT Latency boundary. If the two plot lines are close, increase the SRT Latency.
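The arithmetic from steps 5 and 6 can be sketched as follows. This is only a restatement of the formulas
above; the RTT, multiplier, overhead, stream bandwidth, and channel capacity values used in the example
call are illustrative assumptions, and the 80 ms floor is simply the lower end of the SRT Latency setting
(substitute the Minimum SRT Latency value from the RTT Multiplier table where applicable):

def recommended_srt_latency_ms(rtt_ms, rtt_multiplier, minimum_latency_ms=80.0):
    """SRT Latency = RTT Multiplier * RTT, never below the minimum latency floor."""
    return max(rtt_multiplier * rtt_ms, minimum_latency_ms)

def channel_capacity_ok(channel_capacity_kbps, stream_bandwidth_kbps, bandwidth_overhead_pct, headroom=0.75):
    """Conservative constraint: headroom * Channel Capacity > Stream Bandwidth * (100 + Overhead) / 100."""
    required_kbps = stream_bandwidth_kbps * (100 + bandwidth_overhead_pct) / 100.0
    return headroom * channel_capacity_kbps > required_kbps

# Illustrative values: 50 ms RTT with an RTT multiplier of 4, and a 1200 kbps
# stream with 25% overhead over a 10 Mbps channel.
print(recommended_srt_latency_ms(50, 4))       # 200.0 ms
print(channel_capacity_ok(10_000, 1200, 25))   # True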
SRT Parameters
This section describes the various parameters that have an effect on an SRT stream's performance.
Round Trip Time
RTT is used as a guide when configuring Bandwidth Overhead and Latency. To find the RTT between two
devices, you can use the ping command. For example:
ping 198.51.100.20
PING 198.51.100.20 (198.51.100.20): 56 data bytes
64 bytes from 198.51.100.20: seq=1 ttl=64 time=6.633 ms
In this example, the Response, or RTT, is 6.633 ms. You can also find the RTT for an active SRT streaming
session displayed on the Statistics page.
Latency
There is a time delay associated with sending packets over a (usually unpredictable) network. Because of
this delay, an SRT source device has to queue up the packets it sends in a buffer to make sure they are
available for transmission and re-transmission. At the other end, an SRT destination device has to
maintain its own buffer to store the incoming packets (which may come in any order) to make sure it has
the right packets in the right sequence for decoding and playback. SRT Latency is a fixed value (from 80
to 8000 ms) representing the maximum buffer size available for managing SRT packets.
An SRT source device’s buffers contain unacknowledged stream packets (those whose reception has not
been confirmed by the destination device).
An SRT destination device’s buffers contain stream packets that have been received and are waiting to
be handed off (e.g. to a decoding module).
The SRT Latency should be set so that the contents of the source device buffer (measured in msecs)
remain, on average, below that value, while ensuring that the destination device buffer never gets close
to zero.
The value used for SRT Latency is based on the characteristics of the current link. On a fairly good
network (0.1 to 0.2% loss), and assuming there is no significant burst loss (see Packet Loss Rate), a “rule
of thumb” for this value would be four times the RTT.
SRT Latency can be set on both the SRT source and destination devices. The higher of the two values is
used for the SRT stream.
Packet Loss Rate
Constant loss refers to the condition where a channel is losing packets at a constant rate. In such cases,
the SRT overhead is lower bound limited, such that:
Minimum Bandwidth Overhead = 1.65 * Packet Loss Rate
Burst loss refers to the condition where a channel is losing multiple consecutive packets, up to the
equivalent of the contents of the SRT latency buffer. In such cases, the SRT overhead is lower bound
limited, such that:
Minimum Bandwidth Overhead = (Burst Duration ÷ SRT Latency) * 100
Burst losses that last longer than the SRT Latency will result in stream artifacts. SRT Latency should
always be set to a value above your worst case burst loss period.
RTT Multiplier
The RTT Multiplier is a value used in the calculation of SRT Latency. It reflects the relationship between
the degree of congestion on a network and the Round Trip Time. As network congestion increases, the
rate of exchange of SRT control packets (as well as retransmission of lost packets) also increases. Each of
these exchanges is limited by the RTT for that channel, and so to compensate SRT Latency must be
increased. The factor that determines this increase is the RTT Multiplier, such that:
SRT Latency = RTT Multiplier * RTT
The RTT Multiplier is an indication, based on the characteristics of the current link, of the maximum
number of times SRT will try to resend a packet before giving up. On a fairly good network (0.1 to 0.2%
loss), with no burst loss, a “rule of thumb” for this value would be approximately 4. Generally speaking,
the overhead needed to cope with continuous packet loss is really quite low at low loss rates.
Important Considerations
As described in the section on "Packet Loss", SRT retransmissions result from two types of packet loss:
Constant Loss, where a channel is losing packets at a constant rate, and Burst Loss, where a channel is
losing multiple consecutive packets.
Depending on your specific network conditions, your Latency and Bandwidth Overhead settings need to
change. Latency is a function of your round trip time and an RTT multiplier value, which is relatively
constant for a given background loss rate. If you have constant loss, that yields a particular value for
Latency. But there is a second dependency: if you lose a whole group of consecutive packets (Burst
Loss), you need additional Bandwidth Overhead to recover from it.
Consider two scenarios. In the first, you're losing a specific number of packets over a given period of
time. For example, you are constantly losing 5% of the packets, but distributed over time. And that
increases the chances that retransmitted packets might be lost. To overcome this constant, distributed
packet loss, increasing the Latency value helps because it increases the chances of recovering
retransmitted packets.
And then we have the other scenario where you lose an entire block of packets. For example, not a
single packet is received over a period of 100 milliseconds. Once the link is back, your stream picks up
where it left off, transferring let’s say at 10 megabits/sec. But you have to retransmit that missing block of
100 milliseconds on top of the stream. And maybe there is still some packet loss, so more retransmissions.
So you have a large amount of data to be transported within the latency buffer window. You may have to
double the data rate for a short period of time.
These two scenarios have different overhead requirements (for retransmissions) depending on your
network delay, and so they dominate at different latencies. For a lower latency the overhead due to burst
loss is going to dominate. For higher latency, the overhead due to continuous loss of traffic would
dominate. If you have a high network loss rate, you need more latency to recover, but then you don't
have to worry so much about burst loss. But at a low network loss rate, over a long period of time there is
a statistical probability that a burst loss may happen instantaneously, lasting for tens of milliseconds. To
manage such bursts, you have to be able to transmit at very high rates in a very short period of time,
because your SRT latency in these cases is usually very low — you can't get something for nothing. If you
don't set your overhead to be high at very low SRT latencies, then you won't be able to recover quickly
enough when there is a large outage.
The table below shows estimated RTT Multiplier and Minimum Overhead values based on a model that
considers the interdependency of the two different scenarios. Burst loss dominates at low RTT multiplier
values (the leftmost columns in the table below), while constant loss dominates at high RTT multiplier
values. Burst loss requires higher
overhead when the RTT multiplier is low, otherwise SRT cannot recover the lost data within one RTT
period. Continuous loss requires higher overhead values as packet loss increases. These two phenomena
combined inform the values below.
Packet Loss Rate:        < 1%  < 3%  < 7%  < 10%  < 12%  < 21%  < 25%  < 27%  < 30%  < 39%  < 40%  < 47%  < 50%
Minimum RTT Multiplier:     3     4     5      6      7      8      9     10     11     13     16     18     19
Minimum Overhead:         50%   34%   25%    20%    17%    27%    34%    37%    43%    64%    67%    89%   100%
Note
The numbers in this table are intended as conservative guidance for achieving the greatest
reliability possible. They are based on a model using an earlier version of SRT. Keep in mind that
certain devices have better SRT performance profiles than others, and newer SRT versions may
incorporate more efficient retransmission algorithms.
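For convenience, the guidance table above can be expressed as a lookup, as in the Python sketch below.
The values are simply the table entries and carry the same conservative intent; they are guidance, not
exact tuning targets:

# (packet loss rate upper bound in %, minimum RTT multiplier, minimum overhead in %)
LOSS_GUIDANCE = [
    (1, 3, 50), (3, 4, 34), (7, 5, 25), (10, 6, 20), (12, 7, 17),
    (21, 8, 27), (25, 9, 34), (27, 10, 37), (30, 11, 43),
    (39, 13, 64), (40, 16, 67), (47, 18, 89), (50, 19, 100),
]

def lookup_guidance(packet_loss_pct):
    """Return (minimum RTT multiplier, minimum bandwidth overhead %) for a measured loss rate."""
    for upper_bound_pct, rtt_multiplier, overhead_pct in LOSS_GUIDANCE:
        if packet_loss_pct < upper_bound_pct:
            return rtt_multiplier, overhead_pct
    raise ValueError("Packet loss of 50% or more is beyond the guidance table")

print(lookup_guidance(0.2))   # (3, 50)  -> SRT Latency = 3 * RTT, at least 50% overhead
print(lookup_guidance(5))     # (5, 25)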
Bandwidth Overhead
The control packets associated with an SRT stream do, of course, take up some of the available
bandwidth, as do any media packet retransmissions. When configuring an SRT stream, you will need to
specify a Bandwidth Overhead value to allow for this important factor. The portion of audio and video
content in the stream is determined by their respective bit rate settings, which are configured on the
audio and video encoders themselves. SRT Bandwidth Overhead is calculated as a percentage of the A/V
bit rate, such that the sum of the two represents a threshold bit rate, which is the maximum bandwidth
the SRT stream is expected to use.
The SRT Bandwidth Overhead is a percentage you assign, based in part on the quality of the network
over which you will be streaming. Noisier networks will require exchanging more control packets, as well
as resending media packets, and therefore a higher percentage value.
Note
SRT Bandwidth Overhead should not exceed 50%. The default value is typically 25%.
Let’s say you are streaming video at a bit rate of 1000 kbps, and audio at 128 kbps. This gives a total of
1128 kbps, which we will round up to 1200 kbps to account for any metadata and other ancillary data. This
is the Average Bandwidth, which is calculated automatically based on your actual output settings. If you
accept the default Bandwidth Overhead setting of 25%, then the total bandwidth reserved for the SRT
stream will be:
1200 + (25% * 1200) = 1500 kbps (1.5 Mbps)
This is the maximum bandwidth SRT will use. If there is no loss, only a slight overhead for control is used.
As long as this total SRT bandwidth is less than or equal to the bandwidth available between the SRT
source and destination devices, the stream should flow from one to the other without incident.
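The worked example above can be restated in a couple of lines of Python; the 1200 kbps average
bandwidth and 25% overhead are the example's values, not recommendations:

def srt_stream_bandwidth_kbps(average_bandwidth_kbps, overhead_pct):
    """Maximum bandwidth reserved for the SRT stream: A/V bit rate plus the configured overhead."""
    return average_bandwidth_kbps * (1 + overhead_pct / 100.0)

# 1000 kbps video + 128 kbps audio, rounded up to 1200 kbps for metadata and ancillary data.
print(srt_stream_bandwidth_kbps(1200, 25))   # 1500.0 kbps (1.5 Mbps)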
Encrypting SRT Streams
If encryption is enabled, the Passphrase entered on source and destination devices must match.
Encryption: Specifies the AES (Advanced Encryption Standard) encryption key length and cipher.
Range: AES-128, AES-256.
Passphrase: Specifies a string used to generate the AES encryption key wrapper via a one-way function,
such that the encryption key wrapper used cannot be guessed from knowledge of the password.
Range: 10 to 79 UTF-8 printable characters.
Important
In SRT the passphrase is used for the exchange of encryption keys, not authentication. Setting a
passphrase on the receiver does not prevent it from accepting unencrypted streams.
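As a rough illustration of deriving key material from a passphrase with a one-way function, the Python
sketch below uses PBKDF2 from the standard library. It is a conceptual example only; the salt, iteration
count, and output length are illustrative assumptions, and this is not a reproduction of SRT's internal key
exchange:

import hashlib
import os

def derive_key_wrapping_key(passphrase, key_len_bytes=16):
    """Derive a key-wrapping key from a passphrase using a one-way function (PBKDF2).

    Conceptual sketch only: the salt, iteration count, and output length here are
    illustrative assumptions, not SRT's actual parameters.
    """
    if not (10 <= len(passphrase) <= 79):
        raise ValueError("SRT passphrases must be 10 to 79 printable characters")
    salt = os.urandom(16)   # random salt, shared alongside the wrapped key
    kek = hashlib.pbkdf2_hmac("sha1", passphrase.encode("utf-8"), salt, 2048, dklen=key_len_bytes)
    return salt, kek

salt, kek = derive_key_wrapping_key("myStrongPassphrase", key_len_bytes=16)   # 16 bytes for AES-128
print(len(kek))   # 16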
Monitoring and understanding the data on the Statistics page can help you to tune and optimize your
SRT streaming performance. To display a graphical view of the data exchange between SRT source and
destination devices, click the Statistics button on the page used to configure SRT streaming.
Statistics
The following descriptions of the various sections of the Statistics page use images from an SRT
exchange between a Makito X Encoder (source) and Makito X Decoder (destination). By comparing the
related sections between SRT source and destination devices, you can determine performance
characteristics and identify opportunities to optimize the stream.
Note
You need an up-to-date version of Firefox, Chrome (Chromium), Safari (WebKit), or IE9 (or higher)
to be able to view the graphics on the SRT statistics page.
The main Statistics section provides an overview of an SRT stream’s condition: its current state, the
packets and bytes sent and received, the current bit rate, and (in the case of the source device), how long
the stream has been active.
Makito X Encoder (SRT source device) Makito X Decoder (SRT destination device)
✓ Unsent Packets Number of UDP packets not sent for that stream
(displayed only when not 0).
✓ Unsent Bytes Number of bytes not sent for that stream (displayed
only when not 0).
✓ Last Error The last error reported by the SRT subsystem. All
errors are recorded in a log file (see SRT Logs).
SRT
The SRT section provides a detailed look at the SRT traffic.
Makito X Encoder (SRT source device) Makito X Decoder (SRT destination device)
✓ ✓ Key Length Indicates the specified key length for AES encryption.
✓ Skipped Packets Packets that have arrived at the destination device too late,
or that never arrive at all. If the “time to play” for a packet
has passed, and it is either not at the decoder or arrives
after the content it is associated with has already played,
that packet is reported as “skipped” on the destination
device. Usually this results in some type of video artifact (a
replayed frame or video blocking).
✓ Max Bandwidth Maximum bandwidth used by the source device for this SRT
stream (i.e. the current total of audio/video bit rate plus
ancillary data plus the SRT Bandwidth Overhead).
✓ Path Max Bandwidth The same value as Link Bandwidth. The destination device
sends the value to the source device with an
acknowledgment packet.
✓ ✓ RTT Round Trip Time (RTT) is the time it would take for a packet
to travel from a specific source to a specific destination and
back again. In SRT, this is measured as the time it takes for
the destination device to send an acknowledgment (ACK)
packet, and then receive a corresponding confirmation
(ACKACK) packet.
Note that if you change the bit rate from 6 to 2 Mb/s, the
byte count will change but a 2 second buffer will remain 2
seconds.
Tip
When adjusting the SRT parameters, ignore any spikes or other variations that appear on the
Statistics charts at the time of the change.
Delays
The Delays section provides a graphical overview of how the SRT stream buffers are handling the traffic.
Makito X Encoder (SRT source device) Makito X Decoder (SRT destination device)
✓ Encoder Send Buffer Check this box to view a plot of the contents (in
ms) of the encoder buffer over time (blue line in the
graph).
✓ Decoder Receive Buffer Check this box to view a plot of the contents (in
ms) of the decoder buffer over time (blue line in the
graph).
✓ ✓ Round Trip Time Check this box to view a plot of the RTT value over
time (white line in the graph).
(If the Encoder Send Buffer stays well below the SRT Latency value, the SRT Latency can potentially be
lowered.) If the Encoder Send Buffer only occasionally goes to or above the Latency value, then the
SRT Latency should be increased.
Bandwidth Used
The Bandwidth Used section provides a graphical overview of how the network bandwidth available to
the SRT stream is actually being used, in terms of the rate at which packets are sent, received, resent and
lost, as well as the data volume (rate × time).
Makito X Encoder (SRT source device) Makito X Decoder (SRT destination device)
✓ Send Rate Check this box to view a plot of the SRT source device’s
outbound bitrate over time (blue line in the graph).
✓ Receive Rate Check this box to view a plot of the SRT destination
device’s inbound bitrate over time (blue line in the
graph).
✓ Retransmit Rate Check this box to view a plot of the rate at which the SRT
source device is resending packets (orange line in the
graph).
✓ Lost Rate Check this box to view a plot of the rate at which the SRT
source device is reporting lost packets (orange line in the
graph).
Graph Sample Rates
The data is displayed in either 60 or 120 columns (depending on the time span being displayed), where
each column represents an average value based on the time span being displayed.
Last 5 minutes: 60 columns
Last 60 minutes: 60 columns
Last 24 hours: 120 columns
For example, if the graph is showing data from the last 5 minutes (300 seconds), then it will contain 300
÷ 5 sec. = 60 samples over 60 columns (this works out nicely to one sample per column).
If the graph is showing data from the last 24 hours (86400 seconds), then it will contain 86400 ÷ 5 sec. =
17280 samples over 120 columns. Each column contains the average value of 144 samples (17280 ÷ 120).
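The column averaging described above can be double-checked with a short Python sketch (using the
5-second sampling interval stated above):

def samples_per_column(time_span_seconds, columns, sample_interval_s=5):
    """How many 5-second samples are averaged into each displayed column."""
    return (time_span_seconds / sample_interval_s) / columns

print(samples_per_column(300, 60))      # 1.0   -> last 5 minutes: one sample per column
print(samples_per_column(86400, 120))   # 144.0 -> last 24 hours: 144 samples averaged per column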
SRT Logs
Data associated with an SRT stream is continuously logged (one log per stream). The log file will contain
time-stamped entries at 5 second intervals for up to a maximum of 24 hours. The sampling is non-
contiguous (no log entries are made when the stream is inactive). For example, the log might contain
data covering 4 days, but only for 6 hours per day.
From the Statistics page, you can download the log file in CSV format, which may be used with
applications such as Microsoft Excel or third party log analysis software. It can be useful to examine logs
to help with troubleshooting, or to determine how available bandwidth is being used over time, which
may allow you to better control associated costs.
The table below shows a sample log with a few rows of data:
SRT was released as open source software in April 2017, under version 2.0 of the Mozilla Public License. It
is royalty-free and available to everyone on GitHub.
Can SRT carry modulated signals like DVB-T and T2, or complex streams like BTS and MPTS over the
internet?
SRT has no specific bandwidth limitation, although a bandwidth limit can be set, if required. All other
limits result from the network parameters and characteristics of the devices that handle it. Keep in mind
that the bandwidth overhead setting is there for re-sending traffic in parallel with new content, and that
you have to take that into account with a rigidly rate constrained transmission channel. It should actually
be easier with DVB-T/T2 than with a random VBR stream, but don’t send a 17 Mbps payload over an 18
Mbps link with a 25% SRT overhead.
Would SRT be suitable for VSAT SCPC C-Band, SCPC Ku-Band or iDirect Ku-Band transmission? Would
latency be adversely affected?
The RTT for VSAT links is over 540 ms, and can be much higher. Using a conservative value of 800 ms,
we can calculate the single transmission time (STT) as RTT/2 = 400 ms. So for a lost packet you have:
STT (originally sent packet) + packet delta (time to realize packet loss) + STT (send loss report) + packet
delta (sender decides to retransmit) + STT (sending the lost packet again) = 3*STT plus some packet
interval time (100 ms max). Multiplying this by 2 gives a minimum safe latency of 2600 ms to allow for
reliable packet recovery (providing that no packet gets lost twice, and no subsequent loss happens). In
practical terms, it would be better to use an even higher latency value, like 4000 ms.
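The latency estimate above works out as follows. The Python sketch simply restates the arithmetic, with
the 800 ms RTT, 100 ms maximum packet interval, and factor of 2 taken from the example:

def vsat_min_safe_latency_ms(rtt_ms, packet_interval_ms=100.0, safety_factor=2.0):
    """Minimum safe SRT latency for one retransmission cycle over a long-RTT link.

    One recovery cycle costs roughly 3 single-transmission times (original send,
    loss report, retransmission) plus some packet interval time; the total is then
    multiplied by a safety factor, as in the example above.
    """
    stt_ms = rtt_ms / 2.0
    return safety_factor * (3 * stt_ms + packet_interval_ms)

print(vsat_min_safe_latency_ms(800))   # 2600.0 ms; in practice an even higher value, like 4000 ms, is better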
Note
In some cases, first-time retransmission may also fail. SRT compensates for this using the
NAKREPORT feature (the receiver, when the packet is not received in the expected timeframe,
sends the loss report again). For the connection to take advantage of this feature, extended time
for second-time retransmission should be taken into account.
With multiple channels per carrier (MCPC) signals, as described above for VSAT, you need to allow
sufficient latency/buffer for the 4 x RTT requirement. Here is some additional guidance:
1. Use traffic shaping and idle cells to provide a CBR stream.
2. Ensure the stream MTU is below the link MTU. You may need to do a ping test with the DF flag set.
3. Leave yourself some bandwidth overhead above your SRT bandwidth, and remember that you’re
dealing with video bandwidth + audio bandwidth + TS muxing overhead + Ethernet/IP packet
overhead + SRT resent buffering. If you have signal degradation on SAT and experience persistent
packet loss, you can end up with SRT retransmissions spiking, and you never catch up.
If there’s an SRT connection between two devices and the network goes down, does the SRT protocol
automatically reconnect when the network comes up?
Once the stream has started the protocol works by establishing a connection with regular
acknowledgement and negative acknowledgement packets between the peers. In addition, keepalive
packets are exchanged as necessary to maintain the connection. For network interruptions of less than 1
second, the SRT connection may be re-established automatically. For longer interruptions, it is up to the
application (not SRT) to re-establish the connection. All Haivision products will try to reconnect
immediately, and retry indefinitely, until a user deletes the connection from their configuration.
Does SRT encrypt a stream at the packet level?
No, SRT encryption occurs at the payload level. Only the data that are being transported are encrypted.
All control information is sent in the clear (this is a security “best practice” to avoid repeated patterns
that would weaken the encryption). Note that encryption does not have an impact on the SRT stream
itself, but does place a burden on the CPUs of the sender and receiver, which is proportional to the
number of streams and their bitrates.
All versions of SRT are backwards compatible. When establishing a connection, a newer version of SRT
will “fall back” to the capabilities of the older version.
Pro-MPEG FEC is not used by SRT. The basic error recovery mechanism in SRT is retransmission requests,
in the form of NAK (Negative Acknowledgment) packets. Whenever missing packets are detected, the
receiver sends NAKs with information about those packets back to the sender. The sender responds by
re-sending the missing packets.
As of version 1.4, SRT has a general-purpose mechanism for injecting extra processing instructions at the beginning and/or end of a transmission. This mechanism, based on packet filtering, was originally created as a means to add Forward Error Correction (FEC) to SRT, but can be extended for other uses. The built-in FEC filter implements a standard XOR-based FEC protection operation. It is also possible to set an ARQ level that determines how FEC cooperates with retransmission.
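As a rough illustration (parameter names and defaults depend on the SRT version; consult the SRT packet filtering documentation), the FEC filter can be enabled through the packet filter configuration string, for example in an srt-live-transmit URI:
srt://198.51.100.20:20000?packetfilter=fec,cols:10,rows:10,arq:onreq
Here cols and rows define the size of the FEC matrix, and arq:onreq asks for retransmission only of packets that FEC could not rebuild.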
Is SRT FIPS 140-2 certified? Has it been evaluated against the FIPS 140-2 standard?
SRT uses only FIPS 140-2 approved algorithms, but it can be linked with any version of OpenSSL and other crypto libraries. If a FIPS mode does not exist in the linked library, SRT cannot enable it. Note that FIPS 140-2 does not guarantee tamper resistance across site-to-site links. For example, even if using a FIPS 140-2 validated crypto library, SRT does not provide perfect forward secrecy.
What would be an example of a firewall policy that would cause Rendezvous mode to fail, assuming
outbound ports are open and inbound traffic from established connections is allowed? Are there any edge-case policies that would explicitly prohibit this traffic, or architectures that might confuse the connection process?
Dynamic Port Address Translation (PAT, aka port mangling) will foil Rendezvous mode, as will firewalls
that do not have UDP “fixup” or session tracking. Remember that UDP itself is a stateless protocol. All
Rendezvous mode does is to force both sender and receiver to connect to each other simultaneously on
the same source and destination port. The devices need to connect to each other’s public IP addresses/
ports, and the firewalls have to allow that traffic through to the internal IP address/port of each device. If
the firewalls are very strict or are randomizing the addresses, Rendezvous will not work.
An SRT connection can only be broken if it times out (no packet has been received from its peer for some period of time, about 5 seconds). As of SRT version 1.3.1, a connection timeout may also happen when the receiver buffer is not emptied by the receiver application fast enough, in order to prevent the connection from going into an unrecoverable state.
The continuous corruption of an input source will cause the encoder process to restart, consequently
resetting the SRT connection.
Often when a connection is broken, it is simply because the socket has been closed. If the application
manages errors correctly, the broken connection is reported with an appropriate error number and a
clear message.
SRT is based on the User Datagram Protocol (UDP), but has its own mechanisms to ensure real-time
delivery of media streams over noisy networks such as the Internet.
The source and destination device UDP ports are configurable. Each stream only requires a single port.
The port is user-defined and can be between 1025 and 65,535. The auto-generated routes on Haivision
Media Platform are created in the 31000–31100 range. Specific port requirements for firewalls may
depend on the security policies of your organization. Consult with your network system administrator.
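As an illustration only (assuming a Linux firewall using iptables and a single SRT stream on UDP port 20000; your firewall syntax and security policies will differ), the receiving side might allow the stream with a rule such as:
iptables -A INPUT -p udp --dport 20000 -j ACCEPT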
What is the bandwidth overhead requirement for a connection traversing the Internet?
BW Overhead specifies the maximum stream bandwidth overhead that can be used for recovery of lost
packets. The default BW Overhead is 25%. Depending on your settings, SRT will either retransmit lost
packets quickly (thereby using more bandwidth) or over a longer period (requiring less bandwidth but
resulting in higher latency).
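For example, a 5 Mbps transport stream with the default 25% overhead may use up to 5 Mbps × 1.25 = 6.25 Mbps on the link, with the extra 1.25 Mbps available for control packets and retransmissions.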
Encryption does not affect the bandwidth. However, applying encryption is a processor-intensive task,
and may have an impact on the number and bit rate of the streams an encoder is able to output.
SRT can be used over wireless networks, WiMANs (Wireless Metro Area Networks), LANs, private WANs,
or the public Internet.
Does the number of “Lost Packets” increment even when an SRT retransmit is successful? Would “Lost
Packets” result in visual artifacts or quality issues?
“Lost Packets” and the related “Skipped Packets” are statistics reported by an SRT decoder:
• Lost Packets: A hole in the packet sequence has been detected. A request to re-transmit the lost
packet has been sent to the source device. This lost packet may (or may not) be recovered by the
re-transmit request.
• Skipped Packets: The time to play the packet has arrived and the lost packet was not recovered, so
the decoder will continue playing without it. A video or audio artifact may result from a skipped
packet.
How can I determine the SRT bandwidth requirements from a decoder back to an encoder?
SRT back-channel control packets from the decoder to encoder take up a minimum of ~40 Kbps of
bandwidth when the channel conditions are perfect. If there are lost packets on the link, then the SRT
receiver will generate more signal traffic, proportional to the lost packet rate. A single lost packet will
consume about 400 bps of the available bandwidth on the receiver side.
From the decoder back to the encoder, bandwidth usage will increase linearly with the packet loss rate.
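As a rough illustrative calculation (assuming the usual 1,316-byte TS payload per SRT packet and roughly 400 bits of loss-report traffic per lost packet): a 5 Mbps stream carries about 475 packets per second, so 1% packet loss (about 5 lost packets per second) adds on the order of 2 Kbps of return traffic, for a total of roughly 42 Kbps from the decoder back to the encoder.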
How can I determine the bandwidth available between two endpoints before starting an SRT stream?
If you have established an SRT stream, you can view bandwidth information on the Statistics pages of the
source and destination devices.
If no SRT stream is currently running, you can use the iperf bandwidth measurement utility to get
bandwidth and jitter information. iperf is available from the command line on either the Makito X Encoder
or Decoder. You need to specify the port number, and you must stop the encoder stream and the decoder before using iperf (to release the ports).
Note
iperf should never be run on a system carrying production video.
where “decoder#” is the number of the decoder instance (you must stop all instances), and “port#” is the
same as the port opened on the firewall. The “-u” parameter specifies the use of UDP packets (using TCP
would cause the measured available bandwidth value to be lower).
where “stream#” is the number of the stream (you must stop all streams), “BW” is the appropriate bandwidth in Mbps, and “IP ADDRESS OF MXD” is the IP address of the decoder (use the public-facing IP if traversing firewalls).
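For example, on the decoder side iperf is typically started as a UDP server listening on the agreed port (the port number shown is illustrative):
iperf -s -u -p 20000
The matching client invocation on the encoder side then generates the test traffic, as shown below.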
Example:
iperf -c 198.51.100.20 -u -b 5.5m -i 1 -p 20000
Result:
0.0-10.1 sec 6.50 MBytes 5.41 Mbits/sec 0.247 ms 38/ 4678 (0.81%)
where 5.41 Mbits/second is the real bandwidth, 0.247 ms is the jitter, and 0.81% is the percentage of lost
packets.
Note that when using iperf in a UDP client/server configuration, the “server” listens for connections from
clients, but it is the “client” that generates the traffic.
If you set a Haivision Media Gateway to SRT Listener mode, multiple decoders can simultaneously “call”
the same port. However, if you set a Makito X Encoder to Listener mode, you can only connect with a
single decoder. If the MXE can handle up to 300 Mbps of streaming, why can’t multiple simultaneous
Callers connect to the same MXE port?
The 300 Mbps limit on the MXE applies to TS/UDP streams. The TS/SRT streams are much more resource
intensive (30-90 Mbps depending on the system configuration), so there is a much lower limit to the
number of SRT streams that can be created on the MXE vs the Media Gateway.
No. SRT is for point-to-point connections. It does not support multicast. If you need to have an SRT
stream delivered to multiple decoders, you can use the Haivision Media Gateway.
I’m seeing a “sawtooth” pattern on the Statistics page for my encoder. What does that mean?
It means the decoder is not acknowledging that it has received the packets sent from the encoder. The
encoder keeps the packets it has sent in its buffer until it receives a response (this takes at least the
equivalent of the round trip time). If the encoder receives acknowledgments promptly from the decoder,
the buffer remains relatively empty. Otherwise, the buffer gradually fills until it reaches a point where it
must drop the unacknowledged packets, creating the characteristic sawtooth pattern.
This typically occurs when you don't have enough bandwidth to transmit. The buffer value will increase
up to the point where it can no longer keep up, and then will drop in a classic sawtooth pattern. In such
cases, try increasing the SRT overhead, or lowering your video bit rate.
What does it mean when I see the blue line drop below the white line on the Statistics page for my
decoder?
The decoder has a buffer (blue line) that it uses to hold on to what it has received to allow time for
retransmission of missing packets. Normally the contents of this buffer (measured in milliseconds) should
be between the latency (orange line) configured for the SRT stream, and the round trip time value (white
line). If it falls below the RTT, this means the decoder has no packets to play, and not enough time to ask
for more. So the video output will have some sort of artifact, such as a replayed frame or a blocking
artifact. It could be that the encoder hasn't given any packets to the decoder, indicating a problem at the
source or in the intervening network. But it might just be because there is nothing to display.
If the buffer level keeps climbing, it means that you have not configured enough bandwidth overhead or latency. Check to see whether the encoder is running too far above the configured latency value.
Sometimes a packet, let’s call it Packet B, arrives at the decoder ahead of time and sits in the queue,
ready to play. But if the packet that should be played immediately before it (Packet A) arrives late (or
never), the decoder skips Packet B sitting in the queue. In other words, the time to play for the associated
packet has passed, and it is either not at the decoder or arrives after any associated later content has
already played. Usually this means some type of video artifact also occurs (a replayed frame or video
blocking artifacts).
You could think of a skipped packet as a packet that the decoder drops, except that it doesn’t tell the
encoder. The decoder sends a positive ACK, even though the packet is “lost” from the point of view of
the decoder. It might drop an entire frame because it is corrupted, but the encoder doesn't know it. This
is because when things are going badly, the last thing you want to do is to increase the overhead traffic.
A packet has a certain time-to-live, and if it doesn't play before that time then it is skipped.
You might see the number of skipped packets increasing on the decoder, without a corresponding effect
on the number of dropped packets seen on the encoder. If the number of skipped packets on the
decoder increases slowly, you should increase the SRT Latency. If the number of skipped packets on the
decoder increments in large jumps, the best thing to do is lower your video bitrate, or increase your
Bandwidth Overhead % if you have available capacity.
You can control the latency, the bit rate, and the overhead percentage on the source device. You can also configure the latency on the destination device; the two devices settle on the larger of the two latency values.
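For example, if the encoder is configured for 250 ms of latency and the decoder for 120 ms, the connection uses 250 ms.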
How do I decide whether to boost the latency or lower the bit rate?
You have to decide what is most important for you: quality or latency. If low latency is more important to
you, then you may see that you are not using your bandwidth as effectively, and image quality may not
be optimal. If you want more quality, you need to use the maximum available bandwidth, with minimal
overhead, and therefore more latency. An SRT transmission needs either bandwidth or time. So if you are
using your maximum bandwidth at a higher bitrate because you want higher quality, you’ll have to allow
more time to recover from faults.
If the delay/latency setting is too low, you will see blue lines (Send Buffer) climbing up past the orange
line (Latency) in the graph on the encoder. This will be reflected at the destination as visible artifacts,
corresponding to skipped packets on the decoder.
If I see my Send Buffer repeatedly spiking by one or two seconds, by how much should I increase my
latency?
In early SRT versions, there was little tolerance. The Send Buffer values had to remain below the Latency
to obtain good results. In more recent SRT versions, the Send Buffer can occasionally go over one second without packets being dropped. However, this can cause a problem at the receiving end: while the packets are shown as “delivered” on the encoder, they may not actually arrive at the decoder.
Can I use the KulaByte Dynamic Stream Shaping feature to dynamically adjust the bit rate of an SRT
stream based on the available bandwidth? Could a Makito X Decoder handle such a stream?
KulaByte’s DSS functionality is supported only on RTMP/HLS streams. It is not supported with SRT.
To maintain an SRT connection, control packets must be sent at an interval of 10 ms (max). When a
destination device receives a media packet, it acknowledges the reception by returning an “ACK” control
packet. If the interval between “ACK” packets is greater than 10 ms, “keep alive” packets are sent to keep
the connection open.
Unlike TCP, if an SRT connection is broken, the source device may be unaware for up to several seconds that the destination is not receiving packets, until the connection is re-established.
Why do I have to specify an Outbound NAT Source Port on my source device’s firewall?
Since UDP is not session based, some firewalls will randomize the source UDP port. Let’s say your source is a Makito X Encoder behind Firewall A. If you send from a random port on the Makito X to port 20000 on the destination, Firewall A may not preserve that source port on its outbound interface; the assumption is that UDP is not bidirectional. Firewall behaviour differs from one vendor to the next, so this is tricky. Return traffic has to come back on the same port it was sent from: if you sent from X.X.53.36 port 20000, the return traffic has to go back to X.X.53.36 port 20000. This is why the Outbound NAT Source Port setting exists; it accounts for firewalls that randomize the source port on the outbound interface.
In some cases, Rendezvous mode may be required. Rendezvous mode sets the port to be the same on both sides; firewalls will almost always allow traffic returned on the same port. Cisco firewalls have a “UDP fixup” setting that tells them to treat UDP packets as if they were session based.
It is important to consider both network (NAT) and port (PAT) address translation. It is possible, for
example, for public port 20000 on a firewall to be mapped to 2500 on an internal address. Port Address
Translation (PAT), or port rewriting, can actually be made to work in cases where the mapping is fixed.
However, many firewalls randomize the source port through “packet mangling.”
1. When the sender receives the loss report, the lost packet is scheduled for retransmission (which
has a higher priority than regular transmission) at the next possible opportunity. But this is not
instantaneous. There may be a "sleep" delay as required by the bandwidth limit. Earlier regular data
might already be underway in the network, or in the system network buffer. So the retransmitted
packet is only sent as quickly as the application can react upon receiving the loss report.
2. https://www.satellitetoday.com/telecom/2009/09/01/minimizing-latency-in-satellite-networks/
References
Secure Reliable Transport (SRT) Protocol Technical Overview
https://www.haivision.com/resources/white-paper/srt-protocol-technical-overview/
Videos
SRT Alliance Videos
https://www.srtalliance.org/videos/
https://www.haivision.com/resources/webinars/srt-tuesday/
Haivision and Wowza Form SRT Alliance Open Source Low Latency Video Streaming Initiative [1:46]
https://www.youtube.com/watch?v=rJCzkXhoMEE&t=44s
https://www.youtube.com/watch?v=68Ug8s4JkVs&feature=youtu.be
Why you should learn about the SRT protocol and the SRT alliance [5:42]
https://www.youtube.com/watch?v=067v65JiCEg
How to setup SRT and Wowza Streaming Engine and Makito @ IBC 2017 [1:59]
https://www.youtube.com/watch?v=ykLu8h0yp6c
https://www.youtube.com/watch?v=I6G2iaTMuQo
https://www.youtube.com/watch?v=7d7bYB11kqA
https://www.youtube.com/watch?v=pNk_plbvy8Q&feature=emb_title
https://www.youtube.com/watch?v=SV_Nl43bXjQ
https://www.youtube.com/watch?v=QExiajdC0IY&feature=youtu.be&t=2966
https://www.youtube.com/watch?v=PuGn2cjIt04
Bandwidth Overhead: The portion of the total bandwidth of an SRT stream that is required for the exchange of control and recovery packets. This value is usually calculated as a percentage of the base (TS) output bitrate of an encoder (video + audio + metadata + ancillary data). The sum of the base output rate plus the bandwidth overhead determines the maximum bandwidth that can be used by the SRT protocol.
Network Address Translation (NAT): A functionality that remaps one IP address to another. Network Address Translation allows a device, such as a router or firewall, to redirect traffic between the Internet and a private network.
Packet Loss: Packet loss occurs when data packets fail to reach their destination. It can be caused by a number of factors including signal degradation over a network, channel congestion, corruption, faulty networking hardware, faulty network drivers, or normal routing routines, such as Dynamic Source Routing (DSR) in ad hoc networks. Packet loss is measured as a percentage of packets lost with respect to packets sent.
Packet Delay Variation (PDV): The difference in the end-to-end, one-way delay between selected packets in a flow, with any lost packets being ignored. [RFC 3393]
Virtual Private Network (VPN): A mechanism for connecting to a private network over a public network like the Internet. Most commonly, a VPN is used to provide a secure, encrypted tunnel through which a remote (authorized) user can access a company network.
Getting Help
General Support
North America (Toll-Free)
1 (877) 224-5445
International
1 (514) 334-5445
and choose from the following:
Sales - 1, Cloud Services - 3, Support - 4