VBrick Rev and DME Design Guide
Q3 2016
Copyright © 2016 VBrick Systems, Inc. All rights reserved.
This publication contains confidential, proprietary, and trade secret information. No part of this document may be copied,
photocopied, reproduced, translated, or reduced to any machine-readable or electronic format without prior written permission from
VBrick Systems, Inc. Information in this document is subject to change without notice and VBrick assumes no responsibility or
liability for any errors or inaccuracies. VBrick, Rev, and the VBrick logo are trademarks or registered trademarks of VBrick Systems,
Inc. in the United States and other countries. All other products or services mentioned in this document are identified by the
trademarks, service marks, or product names as designated by the companies who market those products. Inquiries should be
made directly to those companies. This document may also have links to third-party web pages that are beyond the control of
VBrick. The presence of such links does not imply that VBrick endorses or recommends the content of any third-party web pages.
VBrick acknowledges the use of third-party open source software and licenses in some VBrick products. This freely available source
code is posted at http://www.vbrick.com/opensource
About VBrick
VBrick believes in the power of video to transform the workplace. Its Rev® cloud video platform removes the technology and pricing
restraints that have held businesses back from tapping video’s clear advantage to persuade, inform and compel people, wherever they
are.
VBrick pioneered the next generation of enterprise video through its Rev® cloud-native platform. Named the market leader in
Enterprise Video Webcasting by industry analysts Frost and Sullivan, VBrick’s platform allows organizations to use video
ubiquitously by converting it into bandwidth-efficient streams that can be securely viewed through a web browser from any
connected device. Because it is built to leverage any cloud platform, Rev lets organizations reach audiences in the tens of thousands, compared with
a few hundred using traditional web conferencing services. VBrick Rev enables organizations to centrally integrate all of their video
sources, including video conferencing and unified communications, while delivering a dynamic, consumer-grade experience for
employees.
PREFACE
INTRODUCTION & CORE APPLICATIONS
   INTRODUCTION
   EXECUTIVE WEBCAST
   VIDEO-ON-DEMAND
   UNIFIED COLLABORATION INTEGRATION
KEY COMPONENTS
   REV
   DISTRIBUTED MEDIA ENGINE / DME
   TCS
   CISCO CLOUD CMR
REV
   ARCHITECTURE
   RUNTIME
   MONGODB
   ELASTIC SEARCH
   VIDEO STORAGE
   LOAD BALANCING
   HIGH AVAILABILITY
   CAPABILITIES
      Consumer-grade Interface
      Video on Demand Portal
      Transcoding
      Self-service Webcasting
      Reporting and Analytics
      Security
      CDN Integration & Device Control
      Cloud VC Recording
Executive Webcast
CEOs realize the power of personal video-based communications to inspire, motivate and forge
a common culture across their increasingly global organizations. However, web conferencing,
event services and homegrown solutions often deliver poor-quality video to only a fraction of
employees. VBrick's next-generation enterprise video platform gives executives the quality and
reach they demand across their own networks.
The VBrick solution supports high-quality HD video delivered seamlessly over the corporate
network, using a variety of ports and protocols, from adaptive streaming to IP multicast. This is
integrated with a secure user experience portal that supports robust user interaction, including
panel-moderated Q&A, polls, chat, slides, etc. The integration with external CDNs
such as Akamai ensures the ability to deliver this experience on and off the corporate network.
Video-on-Demand
VBrick enables organizations to centrally manage huge libraries of video assets through a
system of intuitive, multi-level workflow management features. Admins control user
permissioning at the individual video level to ensure the right audiences have access to the right
content. Drag and drop menus enable admins and authorized users to easily upload large
caches of captured video assets from end user devices, including native upload from iOS and
Android devices, which streamlines the ability for users to engage in the organization’s video
initiative through user-generated content. Users can also take advantage of menus that enable
automated, batch upload of any MP4 video file.
Natively indexed content metadata allows for comprehensive search and reporting capabilities,
and Rev’s integration with the Cisco TelePresence Content Server (TCS), Cisco Acano, and
Cisco WebEx allows VBrick Rev to be the comprehensive video-on-demand source for all
video content.
Key Components
Rev
The VBrick Rev video management platform is the industry’s first cloud-native enterprise video
platform (i.e., a fully distributed architecture that can be leveraged on, or across, any number of
cloud platform providers). As such, VBrick Rev brings a level of performance
across all services (authentication, transcoding, workflow, etc.), scalability (Rev uses all
available virtual computing resources as a single pool), elasticity (Rev dynamically accesses
available computing resources for whatever service is needed at the time, such as web
services during the beginning of a webcast) and redundancy (Rev is fully redundant at the data
store, file store and runtime services level) – all capabilities that are not possible with
server-based platforms. This generational architectural advancement enables Rev to rapidly support
any number of clients (multi-tenancy), each with any number of viewers, even during peak load
times such as mass audience live webcasts.
This bandwidth-friendly eCDN is a distinct VBrick advantage, often cited by customers as the
reason for switching from web-based video services, which quickly fail when too many viewers
at a corporate office each pull down their own unicast stream and swamp available bandwidth.
This proves out most dramatically during live webcasts – when thousands of employees log in
within a 10-minute window. Multicasting can save substantial network bandwidth when multiple
clients are accessing the same stream.
The DME product is available in three sizes as a fully managed virtual appliance using a
VMware OVA format. Optional Cisco UCS hardware is available to match each virtual appliance
size.
TCS
With the Cisco TelePresence Content Server (TCS), your organization can record and stream
high-quality video and content for live and on-demand access. You can also distribute your
content, live or recorded, to any PC or portable media device, or to the VBrick Rev enterprise
video portal.
Based on industry standards, the TelePresence Content Server interoperates with Cisco and
third-party H.323 and Session Initiation Protocol (SIP)-based video endpoints and multiparty
bridges. The Cisco TelePresence Server and Cisco TelePresence Multipoint Control Units
(MCUs) can connect to it, as well, to enable live and on-demand video streaming. The TCS is
also tightly integrated with Cisco TelePresence Management Suite for scheduling your
recordings.
Recordings from Cisco CMR can easily be delivered to the VBrick Rev video portal for viewing,
sharing, distribution and inclusion in a single repository of online video content.
Rev
Architecture
As previously noted, the Rev video management and webcasting platform can be deployed as a
software product hosted by virtual machines, or as a Software-as-a-Service offering. The
architectural information contained within this section is applicable to both offerings, but is most
relevant in practice to the on-premise / private cloud offering, as the cloud subscription
inherently includes underlying capacity for the purchased users.
The Rev application comprises the following components:
• Rev Runtime
• MongoDB
• Elastic Search
• Video Storage
Each of these components can be deployed in a redundant, highly available manner (as is done
inherently in the Rev Cloud offering), and they collectively form the overall Rev application.
The Rev Runtime layer provides the following direct functions:
• Web application
• Security and access control
• Media management
• Transcoding
• Logging
• Workflow
• Authorization
• Message Bus & Clustering
In addition to the direct functions, the runtime layer is also the interface to the persistency layers
associated with the system including MongoDB, ElasticSearch, and the Video Storage layer.
For on-premise applications, the runtime layer is hosted via Windows 2012 R2 physical or
virtual servers, and can be clustered for high availability and/or functionally distributed so that
dedicated nodes handle individual functions (core services, transcode services, security services).
The runtime layer is inherently stateless.
MongoDB
The MongoDB layer is the primary persistency layer within the Rev ecosystem. It contains all
metadata associated to the system and its contents including:
• System state
• Local users & authentication information
• Remote (LDAP/SSO) users (metadata only)
• Video metadata:
   • GUIDs
   • Titles
   • Descriptions
   • Access Control
   • Categories
   • Keywords
   • Tags
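To make the shape of this metadata concrete, the following Python sketch assembles a document of the kind listed above. The field names and values are illustrative assumptions only; Rev’s actual MongoDB schema is internal to the product and is not documented in this guide.

# Illustrative only: field names are hypothetical and do not reflect Rev's
# actual MongoDB schema.
from datetime import datetime, timezone
import uuid

video_metadata = {
    "guid": str(uuid.uuid4()),                  # unique identifier for the video
    "title": "Q3 All-Hands Webcast",
    "description": "Quarterly update from the executive team.",
    "accessControl": {"public": False, "groups": ["All Employees"]},
    "categories": ["Town Halls"],
    "keywords": ["executive", "webcast"],
    "tags": ["q3", "2016"],
    "uploadedAt": datetime.now(timezone.utc).isoformat(),
}
print(video_metadata["guid"], video_metadata["title"])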
For on-premise applications, the MongoDB layer is contained within a Linux virtual or physical
machine. The VBrick-provided ISO installers leverage Ubuntu Linux by default. Red Hat
Enterprise Linux leveraging customer-provided licenses is a supported configuration, albeit
considered a custom installation.
As with the runtime layer, the MongoDB layer can be installed in a single-node or multi-node
installation to allow for high availability.
Elastic Search
The Elastic Search layer indexes the data available in the MongoDB layer and provides
searching capabilities. It is a persistency layer in that it provides critical services to the Rev
runtime, both for video access, browsing, and playback and for actual video searching. Unlike
MongoDB, however, no state information is stored here, and if necessary the Elastic Search
information can be rebuilt directly from the MongoDB layer.
For on-premise applications, the Elastic Search layer is contained within a Linux virtual or
physical machine. The VBrick-provided ISO installers leverage Ubuntu Linux by default. Red
Hat Enterprise Linux leveraging customer-provided licenses is a supported configuration, albeit
considered a custom installation.
As with the runtime layer, the Elastic Search layer can be installed in a single-node or multi-node
installation to allow for high availability.
Video Storage
For on-premise applications, customers must provide video storage to the Rev runtime in a
format that can be mounted as a Windows 2012 Server drive letter or UNC path. This storage
can range from simple hardware disks, to Network Attached Storage (NAS), to redundant
Storage Area Network (SAN) storage, as long as it can be mounted by Windows via SMB. The
same network drive should be mounted on all runtime servers and should be redundant and/or
regularly backed up. (See the sizing considerations section for drive size and performance
requirements.)
Load Balancing
For cloud applications load balancing is inherent to the service. (See cloud sections for
additional positioning information).
For on-premise applications, customers have a choice of load balancing. The VBrick-provided
ISO installer includes the ability to lay down an additional Ubuntu virtual machine that
includes an HA-Proxy load balancer. For on-premise installs expecting fewer than 5,000
concurrent users, the included load balancer will be sufficient, although it does represent a
single point of failure.
For on-premise applications requiring concurrency greater than 5,000 users, or for customers
desiring a solution without a single point of failure, customers can leverage an external load
balancer such as an F5 or similar. The only requirement is that this load balancer support
WebSockets. Sticky sessions are not required.
In either scenario, the load balancer is used to proxy initial connections to the Rev Runtime web
service. Work performed within the Rev Runtime and between the Rev Runtime and the
persistency layers is automatically load balanced already.
High Availability
For cloud applications, high availability is inherent to the service. (See cloud sections for
additional positioning information).
[Figure: a single physical server hosting the Rev Runtime, MongoDB, and Elastic layers]
In the above example, a single physical server hosts the Rev Runtime and the MongoDB and
Elastic persistency layers. A load balancer, external or internal, is not required, and video
storage can be as simple as block-level storage assigned to the Rev Runtime (though it could
still be a NAS/SAN mount point, if desired). There is no redundancy built into this system,
although it will be perfectly functional for a few thousand users (see the sizing section for more
information).
Customers desiring an installation with either additional concurrency capacity or some basic
failover capability can opt for an architecture that replicates these same components
across three physical machines. This configuration can be installed and configured using the
VBrick-provided ISO installers.
In this example, three physical machines, such as Cisco UCS hardware, each host a Hypervisor
layer, such as VMWare ESXi. Each service layer has been spread across at least two physical
machines for redundancy purposes. In this basic redundant configuration, the first Rev Runtime
server can serve as the video file host for the other via a standard Windows SMB share.
Alternatively, both Rev runtime servers can access the same NAS/SAN mount point via SMB.
The MongoDB and Elastic persistency layers are spread across two VMs on separate physical
chassis, with the third machine hosting an arbiter service that is used in the event of a failover.
The load balancing in this configuration is provided via an HA-Proxy virtual machine, which - as
noted above - is a single point of failure.
Enterprise customers will want a configuration that removes the single points of
failure of the HA-Proxy and the shared drive attached to the Rev Runtime server, so the
configuration is modified as follows:
[Figure: enterprise configuration with Rev #1 and Rev #2, Elastic #1 and Elastic #2, MongoDB nodes, and an arbiter]
An enterprise-class load balancer, such as an F5, is used to proxy inbound connections to the
Rev runtime servers, and a file store via NAS or SAN is used to provide the video file storage.
This is the minimum recommended configuration for enterprise deployments.
This configuration can also be scaled linearly to provide for additional concurrency needs. (See
sizing section for more information on quantities required). Additional physical and virtual
machines can be added as needed.
Customers who opt for more complex virtual deployments leveraging virtualization products that
abstract the physical layer will still need the minimum number of virtual machines as depicted in
this configuration, but will not need to consciously place them on physical servers, as indicated,
if the virtualization layer already provides hardware-level redundancy.
Consumer-grade Interface
Rev’s interface delights viewers and increases viewer engagement. Rev’s streamlined, modern
user interface uses the latest web technologies to deliver an experience behind the firewall that
meets employee expectations formed from their experiences using popular consumer sites,
such as Netflix, YouTube and Vimeo. Rev uses HTML5, CSS3, and AngularJS to create a
streamlined, dynamic UI, and the platform’s use of WebSockets (which keeps the connection
between the client and server in an open state) delivers the “no refresh” experience that is
universal on mainstream consumer websites. Rev’s design language uses the concept of video
sliders, which enable end users to play featured videos right in the slider. This concept of sliders
is repeated throughout Rev for video on demand (VOD), live IPTV content, and upcoming
webcasts and live events.
Rev enables organizations to centrally manage huge libraries of video assets through a system
of intuitive, multi-level workflow management features. Admins control user permissioning at the
individual video level to ensure the right audiences have access to the right content. Admins
and authorized users can use drag and drop menus to easily upload large caches of captured
video assets from end user devices, including native upload from iOS and Android devices,
which streamlines the ability for users to engage in the organization’s video initiative through
user-generated content. Users can also take advantage of menus that enable automated, batch
upload of any video file (from any camera, even consumer) that is in an MP4 format.
Transcoding is built into Rev natively. Videos uploaded or recorded in Rev are automatically
transcoded into the format that the administrator has pre-selected. Rev simplifies transcoding
for administrators by enabling pre-set transcoding profiles. Customers can select from a handful
of pre-defined presets or create as many custom transcoding profiles as needed, including
adaptive bit rate formats, such as HLS. Behind the scenes, Rev will match the end
user’s network location, device and other data with the versions optimized for that person’s
environment – from smartphones on poor connections to large-screen displays using the HQ
WAN.
Self-service Webcasting
Rev’s self-service webcasting workflows enable any organization to become their own internal
live webcasting platform. Customers using Rev comment about how easy it is to use VBrick
Rev’s Presenter interface. Rev abstracts many of the complex network distribution, encoding
and permissioning steps required by older platforms. Admins can schedule a webcast using the
system’s calendar, and VBrick Rev reserves all required capture sources.
Rev provides a range of video, viewer and system analytics and reporting – all in real time.
Using the cloud native architecture of Rev, users do not have to wait for summary reports at the
conclusion of an event, and each new, on-demand view automatically increments the reporting.
Rev tracks key metrics useful for content creators of on-demand video, including views over
time, video viewing completion rates per video, viewer engagement (which graphs viewer drop-off
over a video’s timeline), and a breakdown of viewer device types and browsers. Embedded
views are also included in these metrics.
Security
Rev is the interface to enterprise-grade security and authentication offerings. Rev can directly
connect to Active Directory servers using the LDAP protocol to integrate with corporate
credentialing systems. Rev also supports Single Sign-On workflows via SAML 2.0.
Rev integrates directly with both enterprise content delivery networks created by products such
as the Rev DME, as well as external content delivery networks such as Akamai for both live
streaming and distribution of on-demand content. Rev’s device communication protocols allow
caching and source devices to be in constant communication with Rev, thus supporting a ‘single
pane of glass’ view of your entire video distribution network.
Cloud VC Recording
All Rev Cloud subscriptions include access to Rev’s native video conferencing recording
capability. Using this capability, Rev Cloud can dial the publicly available SIP address of any
compatible VC end point or bridge. Rev will turn this call into a high-quality recording including
both video and content-share which can then be made available as a VOD asset. Please see
Rev online documentation for the latest information regarding compatible VC end points.
For example, you can input RTP and TS (transport streams) into the DME and output those
same streams as RTMP (Flash) or HLS (for mobile devices and desktops). The DME also
provides video content caching, storage, and serving to ensure that stored content is delivered
from a DME as close to the end user as possible.
Architecture
The Distributed Media Engine is deployed as a virtual appliance and delivered from VBrick as
either an ESXi-based OVA file or a Hyper-V compatible virtual image. The underlying OS is a
highly customized and secured Linux installation – no direct shell access is provided, as this is a
hardened virtual appliance.
The virtualized version of the DME runs in either a VMware vSphere ESXi environment (ESXi
5.1 Update 2 & 3, ESXi 5.5, or ESXi 5.5 Update 1 & 2) or a Hyper-V environment (Windows
Server 2012, 2012 R2, or beyond). ESXi 6.0 support is coming soon.
The DME is available in three software levels: Small (7530), Medium (7550) and Large (7570).
Each has different virtual hardware requirements and specific streaming capacity capabilities
(see the sizing section for more information). License upgrades from small and medium DMEs
are available to the larger sizes.
Live Streaming
The Distributed Media Engine includes several live streaming servers, which allow the ingestion
and output of live streaming video. The use of these servers allows a DME to serve as a live
stream reflection device, receiving a single stream from a source such as a TCS or Encoder and
relaying it to another DME, many DMEs, or many clients. The DME also has the capability to
transform streams within the live streaming server in a variety of ways (see further sections).
The table below lists the supported live stream inputs and outputs, their default ports, and notes on their use.

Stream: RTMP Push
Default Port: 1935
Notes: This is the preferred method for providing stream input to the DME. In this scenario the DME input is a live stream push from an RTMP transmitter. Common examples of sources that produce the RTMP live stream push include H.264 encoders, VB9000, another DME, Cisco TCS, and a Flash Media Live Encoder.

Stream: RTMP Out
Default Port: 1935
Notes: Live stream content can be served via unicast RTMP. Note that the port generally will not have to be defined in the URL provided the default port 1935 is used. You can play the stream in a Flash player using a URL similar to the following: rtmp://server:port/application/publishing_point. For live streams the publishing point is the stream name and the application is typically “live”. For stored files the publishing point is the file name and the application is “vod”. No explicit configuration of this option is required.

Stream: TS via RTSP
Default Port: 5544
Notes: You can serve available live streams and stored files via unicast RTSP/TS. Note that the port must be explicitly identified in the URL. The port required is the Multi-Protocol server port - default 5544. The Multi-Protocol server on the DME serves live or stored content using the RTSP/RTP protocol. You can play the stream in StreamPlayer, QuickTime, or VLC using a URL similar to this: rtsp://server:port/. Since the Multi-Protocol server uses a non-standard RTSP port (default 5544), the port number is required in the URL.

Stream: RTP Out
Default Port: As configured
Notes: There are two use cases for serving RTSP. Out-4 should be used for optimal stream stability, but if many simultaneous users are expected, the equivalent Out-3 is preferred (see “Configure a DME Stream” in the DME Admin Guide). There are three possible protocols used for RTP serving: UDP, TCP using RTSP interleaved, and TCP using HTTP tunneling. Out-4 supports all three of these options while Out-3 does not support HTTP tunneling. This difference may determine which RTSP/RTP server to utilize. The RTP server on the DME serves live or stored content using the RTSP/RTP protocol. You can play the stream in StreamPlayer, QuickTime, or VLC using a URL similar to this: rtsp://server:port/
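The URL patterns noted above can be expressed programmatically. The Python sketch below assembles playback URLs using the default ports from the table; the hostname, stream name, and file name are placeholders rather than values from an actual deployment.

# Sketch of the playback URL patterns described in the table above. Ports are
# the DME defaults (1935 for RTMP, 5544 for the Multi-Protocol server).

def rtmp_live_url(server: str, stream_name: str, port: int = 1935) -> str:
    # For live streams the application is typically "live" and the publishing
    # point is the stream name.
    return f"rtmp://{server}:{port}/live/{stream_name}"

def rtmp_vod_url(server: str, file_name: str, port: int = 1935) -> str:
    # For stored files the application is "vod" and the publishing point is
    # the file name.
    return f"rtmp://{server}:{port}/vod/{file_name}"

def rtsp_url(server: str, stream_name: str, port: int = 5544) -> str:
    # The Multi-Protocol server uses a non-standard RTSP port, so the port
    # must always be included in the URL.
    return f"rtsp://{server}:{port}/{stream_name}"

print(rtmp_live_url("dme.example.com", "allhands"))
print(rtmp_vod_url("dme.example.com", "training.mp4"))
print(rtsp_url("dme.example.com", "allhands"))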
HTTP Streaming
The DME includes an HTTP streaming server for serving live and on-demand HTTP-based video
streams to end clients. An HLS (HTTP Live Streaming) stream is essentially a set of transport
stream files made from an input H.264 stream, together with a playlist, so that it can be played
on Apple iPhone/iPad devices, Android devices, and Mac/PC desktop/laptop computers.
The HLS playlist can be generated either from a single input stream or from multiple input
streams. Multiple streams are useful in varying bandwidth environments. If you need to create
an adaptive playlist that allows the player to switch between multiple rate streams to adapt to
the fluctuating bandwidth, you need to create multiple HLS output streams - all with the same
Master Playlist Name. The playlist generated can vary depending on the configuration. Since
the segments must be generated on an IDR (Key Frame) boundary, the source must be
producing IDR frames at a regular interval in the stream. It is helpful to know how often IDR
frames are being inserted into the stream from the source, and it is a good idea to set a Minimum
Segment Length that is a multiple of the IDR interval. Larger segment sizes increase
latency.
The default settings will create a latency of about 30 seconds (a common latency for HLS). This
is probably optimal in terms of IDR frame interval/segment sizes. You can reduce latency by
forcing the incoming IDR interval to 11, and setting the minimum segment length to 1, but this
will make the source, the DME, and the client work much harder than they may need to.
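As a rough illustration of the trade-off described above, the following Python sketch estimates startup latency assuming a player that buffers about three segments before playback begins. That buffering behavior is a common player default, not a DME specification.

# Segment length should be a multiple of the source's IDR (key frame)
# interval; latency grows with segment length and the number of segments a
# player buffers before starting playback.

def hls_latency_estimate(idr_interval_s: float,
                         segments_per_idr: int,
                         buffered_segments: int = 3) -> float:
    segment_length_s = idr_interval_s * segments_per_idr
    return segment_length_s * buffered_segments

# e.g., a 4-second IDR interval with segments spanning two IDR periods yields
# roughly 8-second segments and ~24 seconds of startup latency.
print(hls_latency_estimate(idr_interval_s=4, segments_per_idr=2))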
It is important to note that while a DME can convert an incoming live stream (Unicast or
Multicast) as listed in the input table to a HTTP based stream (see Transmuxing section), it is
unable to use an HTTP stream as a source for bit-based streaming protocols.
If a valid SSL certificate is installed on the DME and an FQDN is set, then the DME will by default
serve HLS streams via HTTPS and the FQDN.
The DME’s HTTP streaming server is also used as the source for streaming HLS video on
demand files. (See VOD distribution section).
Transmuxing
Transmuxing is the process whereby a digital bit stream is converted from one file format or
streaming protocol to another—without changing the compression method (as opposed to
transcoding, which actually changes the compression method). The DME transmuxes streams;
it does not transcode streams. An example of transmuxing is when a unicast RTP or transport
stream is repackaged and served as RTMP or HLS without re-encoding.
Transrating
Transrating is the process where a digital bit stream is converted from one bit rate to another
without changing the compression. An example of transrating is when a high bit rate stream is
converted into multiple, lower bit rate streams that can be delivered to mobile
devices via HLS. Note that the DME by default does not change the resolution of the source
stream, although the receiving device will generally display the stream at its preferred
resolution.
Recording
The DME includes a stream recording server that can, upon command, write existing video
streams to disk. If this recording is initiated by a connected Rev cluster (as it is in most cases),
once recording is completed, it is automatically uploaded to the Rev cluster via API and
associated with the user who initiated the recording.
EdgeIngest
EdgeIngest allows admins to easily bulk ingest content into the VBrick Rev system. To do
this, the admin generates a metadata file for each media file to upload (JSON formatted as
described below) and then places the files into a specific directory within the DME. The DME
takes over from there and copies the content up to Rev. This is a simple and handy method for
uploading Video on Demand (VOD) content. This feature is limited to VBrick Rev.
Two files are needed for each EdgeIngest video upload: the video file itself in .mp4 format and a
corresponding metadata file in JSON format with the exact same name. The Rev API will use
these two files to upload the video to Rev; all other file types will be ignored. For
example, for a video named VirginiaVideo, two files will be supplied: VirginiaVideo.mp4 and
VirginiaVideo.json.
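The following Python sketch shows one way such a file pair could be staged. The JSON field names and the staging directory are assumptions for illustration only; the exact metadata schema and the DME ingest directory are described in the Rev and DME product documentation.

import json
from pathlib import Path

# Hypothetical local staging directory; the actual DME ingest directory is
# documented separately.
ingest_dir = Path("edgeingest")
ingest_dir.mkdir(exist_ok=True)

def write_metadata(video_path: Path, title: str, description: str) -> Path:
    # Assumed field names for illustration; consult the Rev documentation for
    # the exact schema the DME expects.
    metadata = {"title": title, "description": description}
    # VirginiaVideo.mp4 -> VirginiaVideo.json (same base name, .json extension)
    json_path = video_path.with_suffix(".json")
    json_path.write_text(json.dumps(metadata, indent=2))
    return json_path

write_metadata(ingest_dir / "VirginiaVideo.mp4",
               title="Virginia Video",
               description="Sample VOD asset staged for EdgeIngest.")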
Flash Multicast
VBrick has licensed Adobe Flash technology and has implemented the Adobe Flash Multicast
streaming protocol, RTMFP. This is natively integrated with the VBrick DME and requires no
additional purchase of an Adobe Media Server, nor does it require extra licenses. This feature
is in addition to the multicast capabilities described in the streaming protocol sections above.
The Flash Multicast protocol offers a number of inherent benefits and is the recommended
multicast streaming protocol for most deployments. First, the RTMFP protocol is encrypted on
the wire using AES
encryption and cannot be played without the encryption key contained within a manifest file.
Second, the RTMFP protocol can be played on Mac and PC computers without any proprietary
video players or plugins. For example, Google Chrome and Microsoft Edge have deprecated
support for legacy NPAPI plugins leaving no viable solution for proprietary multicast player
plugins, or forcing proprietary players to be run in Java applets externally. With Flash Multicast,
the native version of the Flash plugin that comes with these browsers can simply play multicast
streams out of the box. RTMFP provides the largest native browser support of any multicast
video streaming protocol.
In all cases where a DME is set to be a VOD playback device, Rev Zone logic is used to
direct a user playing back content to their local DME. This could be content that the DME
already has (pre-positioned) or content that the DME can fetch from a peer (Rev Mesh).
DMEs not configured for VOD delivery will never receive a VOD playback request from a Rev
user.
When viewers request content from a DME, the DME will first check locally for content. If the
content is not found, then the local DME will check the Rev Mesh (peer DMEs) for the content. If
the HTTP/HLS/RTMP content is within the Rev Mesh, the user will get the content and the DME
will cache the content. The requesting DME uses the peer DME with the fastest response time
as the source for the content.
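The lookup order can be summarized in a short sketch. The code below is a simplified model of the behavior described above, with stand-in data structures and response times; the actual DME logic is internal to the product.

def locate_content(local_cache: dict, peers: dict, video_id: str):
    # peers maps peer name -> (response_time_ms, set of cached video ids)
    # 1. Serve from the local cache when possible.
    if video_id in local_cache:
        return "local"

    # 2. Otherwise pick the peer holding the content with the fastest response.
    candidates = [(rtt, name) for name, (rtt, cached) in peers.items()
                  if video_id in cached]
    if not candidates:
        return None                        # cache miss across the entire mesh

    _, best_peer = min(candidates)
    local_cache[video_id] = best_peer      # record a local copy (fetch omitted)
    return best_peer

peers = {"dme-nyc": (12, {"townhall.mp4"}), "dme-lon": (85, {"townhall.mp4"})}
print(locate_content({}, peers, "townhall.mp4"))   # dme-nyc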
Starting with DME version 3.7, VBrick has implemented an automated process for removing
older content. When disk storage reaches a predefined threshold, content on the DME is
evaluated and deleted based on a modified LRU (least recently used) algorithm. This algorithm
identifies old content and removes it, but only if the content exists elsewhere within the Rev
Mesh, until disk usage falls back under the threshold.
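A minimal sketch of that eviction idea follows, with illustrative data structures and thresholds rather than the DME’s actual implementation.

def evict(local_videos: list, disk_used_gb: float, threshold_gb: float,
          exists_in_mesh) -> float:
    # local_videos: dicts with "id", "size_gb", and "last_access" keys.
    # exists_in_mesh: callable reporting whether a peer DME holds a copy.
    # Walk the least recently used items first, removing only those that are
    # also held elsewhere in the Rev Mesh, until usage drops below threshold.
    for video in sorted(local_videos, key=lambda v: v["last_access"]):
        if disk_used_gb <= threshold_gb:
            break
        if exists_in_mesh(video["id"]):
            local_videos.remove(video)
            disk_used_gb -= video["size_gb"]
    return disk_used_gb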
All DMEs will be included within the Rev Mesh and utilized for content location. As such,
reachability (the ability to connect) between the DMEs is a key issue for the Rev Mesh. The Rev
Mesh has limited usefulness if DMEs cannot reach one another.
When moving into the Rev Mesh, the following guidelines should be considered:
As noted above, when a DME incurs a cache miss and the user has streamed a VOD file from a
member of the Rev Mesh, the original DME will then cache that VOD file locally, so subsequent
users will have access to it directly. This is generally known as first-access caching.
• Communications
• Lectures
• Training Sessions
• Meetings
• Any critical events
Integration
There are two main use cases for a TCS integrated with Rev: Rev Webcast and Rev Video
On-Demand (VOD).
The Rev Webcast integration allows videoconferences streamed through TCS to be used for
Rev webcasts that can be watched by any authorized Rev user, anywhere on the network, on
any device.
For live webcast streaming to Rev viewers, a recording alias using a “VBrick Live” media server
is configured to send a live stream to a DME. Once configured, the TCS will automatically
start/stop the live stream to the DME whenever that recording alias connects to or disconnects
from a teleconference. When the DME receives the stream, all of its standard eCDN features
are available to automatically transmux and transrate the stream as needed. The origin DME then
makes it available to local Rev viewers and forwards it to other DMEs for system-wide live
stream distribution.
The Rev VOD integration automatically submits recordings created by TCS to Rev’s standard
“Add Video” workflow for content approval, transcoding, ingestion, and system-wide distribution
so the content is available for on-demand viewing by any authorized Rev user, anywhere on the
network, on any device.
To enable the integration, a TCS recording alias is configured with a “VBrick VOD” media server
which will FTP recordings to the DME where a built-in DME feature automatically uploads the
recordings to Rev’s API.
Requirements
TCS Integration Requires:
• Rev 7.5+
• DME 3.5.1+
• TCS 6.2.1+
Key Concepts
Video on Demand (VOD)
Video-on-Demand, commonly known as VOD or sometimes “YouTube for the enterprise”,
enables viewers to play back previously recorded video on a variety of devices. There are
several key components to enabling this video playback: physical transcoding and storage of
the video; browsing and access of the video via a search or portal interface; the video player,
which plays back the video; and the network delivery of the physical video bits from storage to
the video player. The Rev and DME solution encompasses all of these components, presenting
a unified solution for the capture, upload, search, browse, and playback of stored VOD on the
corporate network.
In Rev, users can access videos either as a named user or as an anonymous viewer (see the
licensing section for more information).
Live Events
Live Events are a core offering of the Rev and DME solution. These typically include video and
static graphical references, although they can also leverage composited live video and full
motion graphics. Along with displaying the content to participants, a number of vectors for
bi-directional communication are included, such as polls, chat, and moderated Q&A.
In Rev, users can access videos either as a named user or as an anonymous viewer. See
licensing section for more information.
External Streaming
External streaming of live and recorded video assets is a key use of the VBrick solution. There
are two key components to both live and recorded external streaming (beyond licensing
components, which are covered in a later section): network-level access to the portal, and
network delivery of the video asset in question.
Access to the Rev portal on a network level is required for streaming outside the firewall. For
customers using the Rev Cloud solution, this is of course inherent to the product; portal access
is available via HTTPS to authenticated and, if configured, unauthenticated users. On premise
Rev customers need to configure a reverse proxy to allow external access to the portal, or
deploy the portal in an appropriate DMZ.
Delivery of the video asset is a separate question beyond portal access. For Rev Cloud
customers, the solution includes an integration with Akamai for VOD and live content delivery.
This usage is calculated against a client’s standard bandwidth and storage allocation. On-
premise customers can integrate with their own Akamai accounts, or can deploy VBrick DMEs
in DMZs to provide external streaming access.
Supported video file formats include:
• MP4
• FLV
• F4V
• MKV
Rev’s Akamai integration for live streaming supports RTMP for live stream ingestion. (See the
DME section for live streaming protocol support on the DME).
Deployment Models
Cloud Only
For customers who wish to deliver video over the public internet to remote users and small- to
medium-sized offices with reasonably sized available bandwidth, a cloud-only deployment is a
compelling option. In a cloud-only deployment, a customer purchases Rev Cloud user licenses
or Public Webcasting hours and uses the included Akamai CDN for live and on-demand
streaming to all users.
In a cloud-only deployment, users access the Rev Cloud service over the public internet using
HTTPS over port 443. Video streaming is securely relayed from Akamai (which is integrated
with Rev for authorization and reporting) over HTTP ports 80 and 443. This defaults to HTTPS
HLS streaming over port 443. The only on-premise component of a cloud-only design is the
optional LDAP connector. (See a later section for additional information; this application sits
on a customer-provided Windows 2012 virtual machine and is used to enable directory
synchronization.) Rev Cloud can leverage SAML 2.0, enabling Single Sign-On workflows for
authorization. Internet users access the portal and video streaming via the same mechanisms.
A cloud-only architecture is a great starting point for clients of all sizes. It allows users to get
started extremely quickly, taking advantage of the cloud’s economies of scale for
CPU- and storage-intensive tasks, such as transcoding and video management. The native
integration with Akamai for both live and on-demand streaming allows robust video delivery
worldwide, and adaptive streaming technologies ensure that users on lower-quality
connections can still watch the video content.
It is important to note that in a cloud-only deployment, the bandwidth-optimizing features of the
enterprise Content Delivery Network (eCDN) formed by the VBrick Distributed Media
Engine are not applicable. However, a customer can easily start with a cloud-only deployment
and add VBrick Rev DMEs to it later as bandwidth needs change.
It is also important to note that Cisco TCS integration requires at least one active DME to serve
as the ingestion point for live and on-demand content. This would be the smallest possible
Cloud Hybrid deployment (see the next section for more information). Cisco Spark and Cisco
WebEx integration are fully supported in cloud-only deployments. For cloud-only live events,
an AkamaiHD-compatible video source, such as a VBrick DME and Cisco TCS pair or an
appropriate software or hardware encoder, is required.
A final note on cloud-only deployments is that while Rev Cloud inherently includes
generous allocations of storage and bandwidth, cloud-only customers may need to purchase
additional allocations depending on usage. (See the licensing section for more information.)
Cloud Hybrid
For customers who wish to take advantage of the scalability of the cloud for centralized tasks -
such as video management, transcoding, security, reporting, etc. - yet still need to optimize
bandwidth usage at medium and large offices, a cloud hybrid design is the ideal choice. In
this case, a customer purchases named user and/or public webcast access licenses for the Rev
Cloud service, as well as one or more Distributed Media Engines to operate behind the firewall.
As in a Cloud-only deployment, the Rev Cloud components include Akamai integration with
included storage and bandwidth for streaming to users outside the corporate firewall. Unlike a
cloud-only deployment, however, Rev’s integrated zoning logic allows users at a site with a
Sample Architecture
From an architectural perspective, a Cloud Hybrid deployment layers the additional complexity
and benefit of the Distributed Media Engines on top of a cloud deployment. These DMEs are
deployed as virtual machines (optionally on Cisco UCS hardware) at key customer locations,
such as datacenters or offices. The DMEs “phone home” by initiating outbound port 443
connections back to the cloud, enabling tight integration between the cloud and the eCDN.
Video on Demand content can be pre-positioned to DMEs and live content can be reflected
between DMEs. Rev’s
integrated zoning capabilities allow users to automatically receive the best available stream for
their network location and device type. For example, wired network users in a main office might
As with cloud-only deployments, cloud hybrid deployments can optionally integrate with Active
Directory and Single Sign-On via SAML 2.0. With an on-premise DME, Cisco TCS integration
is fully supported.
Recommendations
Cloud Hybrid deployments are what allow a customer to truly scale their video usage. All of the
ease of use and distribution benefits of a cloud- only infrastructure exist, with the added benefit
of eCDN integration.
Specific recommendations for cloud hybrid architectures generally depend on the scale
required. Customers evaluating enterprise-wide deployments will want to closely follow DME
sizing guidelines provided in later sections, while customers starting to scale just beyond a
cloud-only deployment may only have a single DME located in their primary datacenter or main
office. This DME could be used only as a TCS ingestion point or also as a video storage and
reflection device.
The key take away of a cloud hybrid deployment is that it can grow as a customer’s needs grow
as well. Customers can start out with a small number of Cloud users and a single DME and
scale in lockstep with their organization’s needs.
On Premise
While cloud architectures have many inherent advantages, some customers may still opt for
fully on-premise deployments of both the Rev management platform and the DME caching
platform. In this case, a customer would deploy the entire application stack on their own
(virtual) hardware behind the firewall.
In fully on-premise installations, the Rev platform is typically deployed in a customer datacenter
on top of virtual or Cisco UCS hardware. This takes the place of the cloud application stack, and
users at customer sites will access the Rev platform over HTTP(S) on ports 80 or 443. (Unlike
Rev Cloud, which is generally deployed on 443 only, on-premise customers can choose to
implement Rev on either 80 or 443.)
As in a cloud hybrid deployment, DMEs are deployed at key locations. However, in a fully
on-premise deployment, the ‘phone-home’ functionality is used to communicate back to the specific
customer’s Rev deployment in the central datacenter. HTTP(S) communication is still used (as
defined by the customer).
In addition to DME sizing considerations, a customer must size the Rev Runtime cluster to
match the expected concurrency from a user perspective. As such, it is more difficult to grow a
Rev and DME deployment organically in an exclusively on-premise environment. Customers
can still add DMEs quite easily (as in a cloud hybrid deployment), and adding capacity to a Rev
cluster post-deployment is possible, although it does have some dependencies (see the
installation and maintenance guides for more information).
Unlike the Rev Cloud options, no external CDN integration is included in the purchase price. As
such, customers who want to offer external streaming must both make the Rev portal available
to external networks and provide external video streaming. This is typically accomplished
with a reverse proxy for Rev and a dedicated DME in the DMZ for external video streaming.
(See the external access section for more information.)
Customers must also store their entire video library as part of the on premise Rev cluster. This
is typically provided by network level storage such as a SAN or NAS, mounted via SMB.
When connected to the external Akamai CDN through Rev’s native integration (included
with all cloud subscriptions), the delivery of this video is similarly elastic, as it is carried over
Akamai’s private, bandwidth-optimized network and seamlessly delivered from hundreds of
points of presence around the world, while simultaneously supporting millions of concurrent
connections.
A customer/partner only needs to specify the number of named users, and VBrick will take care
of the rest.
For small deployments, the first choice is between a highly available and non-highly available
system (see the ‘High Availability’ section for more information regarding this choice). Other
factors include:
Server Specifications
In either case, the Rev Runtime nodes need to mount an NFS-compatible network drive letter to
serve as the master video repository. To allow for multiple transcoded copies of a given video
file, VBrick recommends 3 GB of drive space, on average, per hour of expected video-on-demand
content. For non-redundant deployments, the Rev Runtime can host this repository directly.
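As a worked example of that guideline, the arithmetic is straightforward; the 500-hour library below is an assumed figure for illustration.

def vod_storage_gb(hours_of_content: float, gb_per_hour: float = 3.0) -> float:
    # 3 GB of storage, on average, per hour of expected video-on-demand content.
    return hours_of_content * gb_per_hour

print(vod_storage_gb(500))   # 1500.0 GB, i.e. roughly 1.5 TB for 500 hours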
For larger deployments, please use the VBrick-provided sizing calculator to determine the
number and type of virtual machines required. That being said, the following table illustrates
some typical break points. Note that each individual VM is per the specifications in the above
table. Enterprise grade storage (SSD or SAS+RAID) and dedicated server CPU cores are
required.
Expected Concurrent Users    Rev VMs    MongoDB VMs    Elastic VMs
Up to 10,000                 2          2              2
10,000-15,000                3          2              2
15,000-25,000                5          2              3
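The break points above can be captured in a simple lookup, shown below as a sketch. Anything outside these ranges should be sized with the VBrick-provided sizing calculator rather than this table.

def rev_cluster_size(concurrent_users: int) -> dict:
    # VM counts taken from the break points in the table above.
    if concurrent_users <= 10_000:
        return {"rev_vms": 2, "mongodb_vms": 2, "elastic_vms": 2}
    if concurrent_users <= 15_000:
        return {"rev_vms": 3, "mongodb_vms": 2, "elastic_vms": 2}
    if concurrent_users <= 25_000:
        return {"rev_vms": 5, "mongodb_vms": 2, "elastic_vms": 3}
    raise ValueError("Above 25,000 concurrent users, use the VBrick sizing calculator.")

print(rev_cluster_size(12_000))   # {'rev_vms': 3, 'mongodb_vms': 2, 'elastic_vms': 2}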
While ROM analysis should not be used to calculate final production designs, it is helpful when
determining scope of potential work. Many customers can easily provide information similar to
the following:
ROM analysis simply leverages the user counts in these locations combined with DME
specifications. While in some cases this can result in an accurate BOM, in others this does not
capture the full picture.
When precise scoping is required, the most helpful resource from a customer is a network
diagram and/or a list of sites with users, available bandwidth (link speed minus utilization) and
connection information. With this information, a detailed network-level analysis can be
performed and the results will be much more accurate.
An actual computation of available and required bandwidth in transit to each site is key to
network-level DME analysis, as opposed to simply looking at the number of users at each site. A
good rule of thumb is to use 1 Mbps of bandwidth for every user at a site or, if the number of
concurrent users is known, 2 Mbps of bandwidth for every expected concurrent user. Most
enterprise video streams range from 1-2 Mbps and can provide a reasonably high-quality 720p
experience.
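That rule of thumb can be applied per site, as in the sketch below; the figures in the example call mirror the Corporate HQ row in the table that follows.

from typing import Optional

def required_mbps(users: int, concurrent_users: Optional[int] = None) -> int:
    # 1 Mbps per user, or 2 Mbps per expected concurrent user when known.
    return concurrent_users * 2 if concurrent_users is not None else users * 1

def needs_local_dme(users: int, available_mbps: int,
                    concurrent_users: Optional[int] = None) -> bool:
    # A site whose required bandwidth exceeds its available bandwidth is a
    # candidate for a local DME rather than delivery across the WAN link.
    return required_mbps(users, concurrent_users) > available_mbps

# 3,000 users behind a 1 Gbps link need roughly 3 Gbps, so a local DME is needed.
print(needs_local_dme(users=3000, available_mbps=1000))   # True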
In a network-level analysis, a customer may provide a network diagram similar to the following:
Location        Users    Bandwidth Available    Bandwidth Required (Users * 1 Mbps)
Corporate HQ    3000     1 Gbps                 3 Gbps
However, for the offices in Paris, TX, and the Midwest, our available bandwidth is actually more
than the required bandwidth. This allows us potentially to place DMEs upstream from these
offices. For the Paris office, the 200Mbit link allows us potentially to service the 75 users from
the Medium DME currently proposed for the London office. Rather than placing small DMEs in
each of the TX office and the Midwest office, we can instead place a single Medium DME in the
TX datacenter and serve both of these smaller offices from the datacenter DME directly. Thus,
our final BOM for this customer looks like:
It is important to note that this analysis is applicable regardless of the deployment model for the
Rev Management platform. Rev On-Premise and Rev Cloud will have exactly the same DME
topology in this example to support streaming to these 5800 users. It is also important to note
While some customers operate a more traditional hub & spoke style network with fixed and
specific interconnections between offices and datacenters, other customers operate an MPLS
cloud-style topology, wherein most offices and datacenters are connected to an MPLS cloud
that is operated by a third-party provider. An example customer topology may look like the following:
The analysis of this network is similar to that of a hub-and-spoke topology. We are generally
comparing available WAN bandwidth at remote sites to required bandwidth. As in the
prior example, we will use assumptions of 1 Mbps per stream, 0% pipe utilization, and 100% user
participation. This is a good starting point, but if more accurate assumptions are
available, they should be used.
Location    Users    Bandwidth Available    Bandwidth Required (Users * 1 Mbps)
Office A    75       100 Mbps               75 Mbps
As one can see in the table above, based on these assumptions, Offices A, B, and F have more
bandwidth available than is required and we can host DMEs upstream from them. Offices C, D,
and G have more bandwidth required than is available, so they will need a local DME. For
A/B/F, Datacenter 1 is the logical point for a DME as this can support both office F via the site-
to-site VPN as well as offices A & B via the MPLS cloud. This would also be a useful distribution
node for any required video backhaul or integrations. This analysis brings our final BOM to:
Office G    50    1x Small
The first two examples have focused on exclusively private network delivery. While both of
these examples are equally applicable in both an on- premise Rev and a Rev Cloud
environment, a Rev Cloud deployment offers an additional option where delivery via Akamai’s
external CDN is possible. The example below illustrates a dual-homed, hub-and-spoke
topology wherein a customer has private connections back to a datacenter and public internet
connections at each office. It is important to note that while this example displays a
dual-connection scenario, this analysis is equally valid in a single-link environment.
Location       Users    Internet Bandwidth Available    WAN Bandwidth Available    Bandwidth Required (Users * 1 Mbps)
Los Angeles    2000     100 Mbps                        250 Mbps                   2000 Mbps
As the above analysis shows, Los Angeles, Dallas and Boston will clearly need DMEs as both
their Internet and WAN bandwidth is less than their required bandwidth to support all users.
Looking only at WAN bandwidth, Atlanta and Virginia would also require DMEs; however, if the
customer is a Rev Cloud customer and is open to delivering video via a public CDN, then both
Atlanta and Virginia are candidates for cloud delivery without a DME. Our final BOM, therefore,
looks like this:
IP multicast over WAN and LAN links can greatly reduce the network footprint required to serve
a given number of users (see multicast section for more information). From a sizing perspective,
a single DME can serve an extremely large number of multicast users, subject only to network
limitations of join requests. While a small DME is theoretically equally capable of originating
multicast streams, it is generally a best practice to deploy large DMEs for multi-site multicast
deployments.
In this example, without multicast enabled, we would be forced to deploy DMEs at every remote
site except for Las Vegas, including multiple large DMEs in Seattle and New York. Multicast, of
course, greatly reduces that footprint. Rather than focus on specific capacities of WAN
links, as we have in prior examples, it is more important in this case to establish the type of
delivery in the first pass.
After we have established the method of delivery, we can look at specific network capacity and
determine where DMEs are required. Portland is similar to prior examples; we do not have the
WAN bandwidth to support local users, so a local Small DME is required. Las Vegas has
sufficient WAN bandwidth, so we know we can locate a DME in the CA datacenter to serve Las
Vegas users. Seattle has multicast enabled on the local LAN, so a small DME here can serve all
2500 users - although without redundancy. Finally, NYC, Richmond and Miami can all be served
via our multicast-enabled MPLS cloud, which includes the CA datacenter. It therefore makes
sense to originate the WAN multicast from the CA datacenter and also serve Las Vegas
users via unicast. Final BOM:
An important consideration for both LAN and WAN multicast, which is not shown in the example
above, is the impact of wireless / Wi-Fi connections. While state-of-the-art wireless access
points that can replicate multicast to Wi-Fi laptops and desktops do exist in the marketplace,
these are not ubiquitously deployed, even in enterprises that fully embrace multicast.
Video-on-demand sizing is typically a secondary concern. When clients have mixed VOD and
live use cases, a distribution setup capable of handling live video loads is usually capable of
distributing VOD to a similarly sized user population. However, if a high level of VOD concurrency
is expected and/or a client has very little bandwidth available to a comparatively high population,
then a number of VOD sizing requirements should be considered.
The first decision point related to VOD sizing is whether content should be proactively pre-
positioned to a given site. The DME’s VOD caching capabilities allow a local unit to either
proactively cache VOD files, which are uploaded to Rev, or to reactively cache VOD files upon
their first access. Pre-positioning has a larger up-front bandwidth requirement, whereas reactive
caching has a higher bandwidth requirement at the time of the first access. A general best
practice is to pre-position VOD files to DMEs in datacenters and regional hubs, while using
reactive caching at smaller, remote sites. However, there are exceptions to this best practice as
noted in following sections.
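The following sketch restates the caching best practice above as a simple lookup. The site categories and function name are illustrative assumptions; actual placement decisions should follow the exceptions noted in the surrounding sections.

    # Minimal sketch of the VOD caching best practice described above; the site
    # categories are illustrative assumptions, not VBrick-defined values.

    def vod_caching_mode(site_type):
        """Map a site category to the generally recommended DME VOD caching mode."""
        if site_type in ("datacenter", "regional_hub"):
            return "pre-position"   # push content ahead of time; larger up-front bandwidth
        return "reactive"           # cache on first access; bandwidth cost paid at first view

    for site in ("datacenter", "regional_hub", "small_remote"):
        print(site, "->", vod_caching_mode(site))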
Imagine a manufacturing-focused customer with two major offices staffed with knowledge
workers, as well as a number of manufacturing facilities nationwide. This customer wants to use
Rev and DMEs to distribute training content both to the knowledge workers in their main
corporate offices and to employees at their manufacturing facilities. The customer has provided the following network diagram:
Location        DME         Type of VOD Delivery
CA Datacenter   1x Medium   Pre-position
VBrick supports interoperability with WAN-optimization technologies that support dynamic
caching of HTTP(S)-served video, such as HLS and HDS. This includes, but is not limited to,
Cisco WAAS and Akamai Connect (Cisco iWAN). In a supported configuration, WAN
optimization technologies can perform first-byte caching of an HLS live or on-demand video
stream. With these technologies, the first user requests content (e.g., an HLS stream), which is
proxied by the WAN optimization technology. The content is then fetched, cached, and provided
to the first viewer. The content can originate from Rev or a DME within the VBrick environment.
The next and subsequent users to request the stream are then served a cached copy from the
WAN optimization device. This scheme supports subsequent chunks of the same stream and/or
other streams for all the viewers.
Interoperability testing with VBrick Rev, Rev DME and Cisco WAAS shows that Cisco WAAS
with Akamai Connect can successfully cache videos from a VBrick DME located at a central
location, and serve this live or on-demand content to additional users at the remote location.
While both DME and WAAS have the ability to cache and serve live and on-demand video, each
has core strengths that can be leveraged in different parts of the network for maximum effect.
The VBrick DME’s ability to integrate natively with the VBrick Rev video portal, especially in a
Cloud Hybrid deployment model, allows all videos to be pre-positioned ahead of
time to one or multiple locations throughout the network. The combination of the Rev video
portal running in the cloud and the VBrick DME running on-premise makes this Cloud/Hybrid
architecture possible. The DME also serves as the central integration point for acquiring live and
on-demand video from Cisco Telepresence Content Server (TCS), allowing seamless
interoperability with Cisco’s wide range of TelePresence endpoints.
Cisco WAAS provides a broad range of acceleration technologies to speed up email, file, web,
software-as-a-service (SaaS), video, and VDI applications. This broad range of acceleration
technologies facilitates reduced bandwidth consumption.
The strengths of the VBrick DME make it a natural fit for large campus and datacenter locations,
providing the Cloud/Hybrid deployment capabilities and integration into Cisco Telepresence
infrastructure components. The acceleration technologies provided by Cisco WAAS (including,
but not limited to, video caching) are best leveraged at bandwidth-constrained locations, such as
branch offices, retail locations, and other sites where video caching is not the only driver.
From a solution sizing perspective, there are therefore two factors to consider:
Regarding the first item, WAN optimization technologies generally support HLS adaptive
streaming for live and on-demand, as described above. The DME's more video-centric features,
such as transmuxing, transrating, bit-level streaming protocol support, and multicast
capabilities, are generally not present in generic WAN optimization technologies and, in certain
cases, this alone can drive a decision. Regarding scale, users should consult their WAN
optimization vendor's sizing guidance.

Location    Users   Bandwidth Available   Bandwidth Required (Users * 1Mbps)
San Jose    1500    50 Mbps               1500 Mbps
As such, we will need to take advantage of the existing iWAN deployment or deploy a new DME
at all of the locations. Chicago and Charlotte have a small enough user count that the existing
iWAN device will be sufficient for video streaming. Austin has a small enough user count that it
could leverage the iWAN device as well; however, if the customer wishes to take
advantage of the LAN multicasting capabilities in the Austin office, they will need a local DME
there to do so. San Jose and Baltimore are large enough sites that they will require their own
DME. Finally, a central DME in the datacenter is required to serve as the streaming origin for
the iWAN sites. The final BOM looks like:
Video Storage
There are two components to consider when sizing a Rev and DME solution for video-on-demand
storage. Rev is the authoritative video library, containing all videos uploaded to the
system. As such, it must contain enough storage for all of the content. DMEs may contain all,
some, or no part of the video library. Thanks to the Rev Mesh topology used by the DMEs,
storage sizing is less important at the edge of the network. We still recommend the customer
have one or several core DMEs with storage space sufficient for a large portion of the library.
From a Rev perspective, two variables come into play: the cumulative length of videos uploaded
to the platform and the bitrates of the transcoding profiles. By default, Rev ships with a single
adaptive HLS profile that contains the following sub-resolutions:
As such, a single video is encoded at an effective storage rate of 3928 kbps. On an hourly
basis, this works out to approximately 1.68 GB per hour of stored video. The default 720p fixed-bitrate
transcode consumes roughly 1 GB/hour, and the default 1080p fixed-bitrate transcode roughly 2 GB/hour.
At core DMEs and at Rev, we recommend that customers have between 2 GB and 3 GB of
storage per hour of video.
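The storage arithmetic above can be reproduced as follows. The guide's 1.68 GB/hour figure is recovered when kbps and GB are interpreted in 1024-based units; this rounding convention is an assumption about the source, and the 500-hour planning figure below is purely illustrative.

    # Worked example of the storage math above (1024-based unit convention assumed).
    aggregate_kbps = 3928                       # sum of the default adaptive HLS renditions
    bits_per_hour  = aggregate_kbps * 1024 * 3600
    gb_per_hour    = bits_per_hour / 8 / 1024**3
    print(round(gb_per_hour, 2), "GB per hour of stored video")   # prints 1.69 (~1.68 as cited)

    # Planning figure: 2-3 GB of storage per hour of video at Rev and core DMEs.
    hours_of_video = 500
    print("Plan for roughly", hours_of_video * 2, "to", hours_of_video * 3, "GB")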
Network Requirements
Device Communication
The VBrick environment uses firewall-friendly HTTP(S) and WebSocket protocols to
communicate between the Rev server cluster (in the cloud or on-premise) and devices
located inside the organization or enterprise firewall. In a cloud scenario, where the Rev video
NOTE: Rev never initiates connections with devices or LDAP connectors. All connections are
initiated outbound from the VBrick device, such as an encoder or LDAP/AD connector, to Rev.
For further details, please see the Logical Connections diagram in this document’s Addendum.
Rev provides device control and an integrated management system to make webcasting and
video management possible. Each device must contact Rev to initiate the conversation; this
ensures devices communicate outbound from behind the firewall, maximizing security.
To accomplish this, a security key and the device's unique MAC address are used to initiate the
device's communication with Rev.
2. On the VBrick device, provide the API key and the fully qualified domain name (FQDN)
or IP address of the Rev account.
3. In Rev, add the device via MAC address and the device will communicate immediately
with Rev.
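The sketch below illustrates the outbound-only registration pattern described above: the device dials out over HTTPS, presenting its API key, and Rev matches it against the MAC address added by the administrator. The endpoint path, field names, FQDN, and key values are hypothetical placeholders for illustration only, not the documented Rev device API.

    # Illustrative sketch only: the URL path and JSON field names are hypothetical
    # placeholders, not Rev's documented API. It shows the outbound-only pattern above.
    import requests

    REV_FQDN   = "customer.rev.vbrick.com"     # FQDN or IP of the Rev account (placeholder)
    API_KEY    = "example-device-api-key"      # API key configured on the VBrick device (placeholder)
    DEVICE_MAC = "00:07:DF:12:34:56"           # MAC address added in Rev (placeholder)

    resp = requests.post(                      # outbound only; Rev never dials in to the device
        f"https://{REV_FQDN}/api/v1/devices/register",   # hypothetical path
        json={"apiKey": API_KEY, "macAddress": DEVICE_MAC},
        timeout=10,
    )
    print(resp.status_code)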
A decentralized proxy infrastructure such that major offices/regions have separate, unique
public IP addresses (for example, the Boston office has IP 1.2.3.4 and the New
York office has IP 5.6.7.8), so that zoning trees can be built on public IPs rather than private
ones).
Use of the DME-based location service. This requires one or more VBrick DMEs to be
centrally located, accessible by all users, and installed with a valid SSL certificate. With
the DME location server, there are no requirements on the proxy infrastructure other
than the communication requirements outlined above.
Device          OS                                     Browsers
PC              Windows 10, 8.1, 8, 7                  IE9-IE11, Firefox 27+, Chrome 33+, Edge (Windows 10 only)
Mac             V10.10 (Yosemite), V10.9 (Mavericks)   Safari 7+, Firefox 27+, Chrome 33+
iPhone, iPad    iOS 8.0+                               Native Browser
Guaranteed bandwidth (CBWFQ) requirements depend on the encoding format and rate
of the video stream(s) as required by the solution deployment.
In the access layer of the data center switching network, consider upgrading targeted
server cluster ports to 10 Gigabit Ethernet (10GE). This provides sufficient speed and
low-latency for storage and retrieval needed for streaming intensive applications.
(<60ms)
Although having multicast enabled on all network segments is recommended, it is not
absolutely required.
Packet loss must be no more than 0.05% for an optimal quality of experience.
Latency should be no more than 4-5 seconds (depending on the use case and the video
application's buffering capabilities; for instance, HLS video has a much higher buffering
requirement due to the technology).
The QoS Baseline recommendation for broadcast video packet marking (whether unicast or
multicast) is PHB CS5, DSCP 40. The QoS Baseline recommendation for Multimedia
Streaming packet marking (whether unicast or multicast) is PHB AF31, DSCP 26.
Edge or Branch routers may not require provisioning for VBrick video traffic on their
WAN/VPN edges (in the direction of Branch-to-Campus).
Non-organizational video content (video outside the VBrick environment, or video that is
strictly entertainment-oriented in nature, such as personal movies) may be marked as
Scavenger (DSCP CS1) and assigned a minimal bandwidth (CBWFQ) percentage.
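For convenience, the markings cited above can be kept as a simple lookup table, as in the Python sketch below. The class labels are taken from the text; the structure itself is only an illustrative aid, not a configuration artifact.

    # Quick-reference lookup of the packet markings cited in this section.
    QOS_MARKINGS = {
        "Broadcast Video (unicast or multicast)":      {"phb": "CS5",  "dscp": 40},
        "Multimedia Streaming (unicast or multicast)": {"phb": "AF31", "dscp": 26},
        "Non-organizational / entertainment video":    {"phb": "CS1",  "dscp": 8},   # Scavenger
    }

    for traffic_class, marking in QOS_MARKINGS.items():
        print(f"{traffic_class}: PHB {marking['phb']}, DSCP {marking['dscp']}")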
Consider current bandwidth utilization and add forecasts for media applications, especially for
video-oriented media applications like VBrick's, as well as video conferencing applications.
Because video is in a relatively early stage of adoption, use aggressive estimates of possible
bandwidth consumption. Consider bandwidth of different entry and transit points in the network.
What bandwidth is required at network access ports both in the campus as well as branch
offices? What are the likely media streams needing transport across the WAN?
It is important to consider all types of media applications. For example, how many streaming
video connections are necessary for training and communications? If the CEO requests an all-
hands webcast event, how many concurrent viewers will need to be supported? These
typically flow from a central point, such as the data center, outward to employees in campus and
branch offices. As another example, how many IP video surveillance cameras will exist on the
network?
These network video traffic flows are typically from many sources at the edges of the network
inward toward central monitoring and storage locations. Map out the media applications in use,
considering both managed and un-managed applications. Understand the bandwidth required
by each stream and endpoint, as well as the direction(s) in which the streams will flow.
Mapping those onto the network can lead to key bandwidth upgrade decisions at critical places
in the network architecture, including campus switching as well as the WAN.
Burst is another critical bandwidth-related concern. Most individuals think of bandwidth in terms
of bits per second (i.e., how much traffic is sent over a one second interval); however, when
provisioning bandwidth, burst must also be taken into account. Burst is defined as the amount of
traffic (generally measured in Bytes) transmitted per millisecond that exceeds the per-second
average. Consider an IPTV HD broadcast stream which could consume as much bandwidth as
15 Megabits per second, equating to an average per millisecond rate of 1,875 Bytes (15 Mbps ÷
1,000 milliseconds ÷ 8 bits per Byte). This IPTV stream operates at 30 frames per second,
however, so its packets arrive in bursts around each video frame rather than evenly spaced
across the second; therefore, all switch and router interfaces in the path must have adequate
burst tolerance. IP video is known in networking terms for being very 'bursty', so adequate
bandwidth overhead must be allowed for in the bandwidth strategy when designing for video.
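The burst arithmetic above can be checked as follows. The per-frame figure is the simple average at 30 frames per second; actual per-frame sizes vary, with I-frames typically bursting well above the average.

    # Worked example of the burst arithmetic above for a 15 Mbps IPTV HD stream.
    stream_mbps = 15
    bytes_per_ms = stream_mbps * 1_000_000 / 1000 / 8                        # 1,875 Bytes per millisecond
    frames_per_second = 30
    avg_bytes_per_frame = stream_mbps * 1_000_000 / 8 / frames_per_second    # 62,500 Bytes per frame

    print(f"Average rate: {bytes_per_ms:,.0f} Bytes/ms")
    print(f"Average frame size: {avg_bytes_per_frame:,.0f} Bytes every ~33 ms")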
Packet Loss
Successfully delivering network video application traffic, reliably and at the service levels
required by each application, is mission-critical in today’s business environment. This is
especially true for IPTV broadcast or live streaming video. For instance, consider the loss
sensitivities of VoIP compared to high-definition media applications, such as HD video.
For a voice call, a packet loss percentage of even 1% can be effectively concealed by VoIP
codecs; however, the loss of two consecutive VoIP packets will cause an audible “click” or “pop”
to be heard by the receiver.
In stark contrast, however, video-oriented media applications generally have a much greater
sensitivity to packet loss, especially HD video applications, as these utilize highly-efficient
compression techniques, such as H.264. As a result, a tremendous amount of visual information
is represented by relatively few packets which, if lost, immediately become visually apparent
in the form of screen pixelization.
With HD video applications such as VBrick's, end users can notice the loss of even one
packet in 10,000. The packet loss target for video-ready campus and data center networks is
0.05%; on WAN and branch networks, loss should still be targeted at 0.05%, but convergence
targets will be higher depending on topologies, service providers, and related factors.
Therefore, packet loss is one of the important delivery tolerances that must be managed in the
VBrick solution in order to deliver a high-quality experience to the end user.
Network latency can be broken down further into fixed and variable components:
Serialization (fixed)
Propagation (fixed)
Queuing (variable)
Serialization refers to the time it takes to convert a Layer 2 frame into Layer 1 electrical or
optical pulses onto the transmission media. Therefore, serialization delay is fixed and is a
function of the line rate (i.e., the clock speed of the link).
For example, a 45 Mbps DS3 circuit would require 266 µs to serialize a 1500- byte Ethernet
frame onto the wire. At the circuit speeds required for video networks (generally speaking DS3
or higher), serialization delay is not a significant factor in the overall latency budget. The most
significant network factor in meeting the latency targets for video is propagation delay, which
can account for over 95% of the network latency budget. Propagation delay is also a fixed
component and is a function of the physical distance that the signals have to travel between the
originating endpoint and the receiving endpoint. The gating factor for propagation delay is the
speed of light: 300,000 km/s (186,000 miles per second) in a vacuum. Roughly speaking, the speed of light
in an optical fiber is about two-thirds of the speed of light in a vacuum. Thus, the propagation delay
works out to be approximately 4-6 µs per km (or 6.4-9.6 µs per mile).
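The two fixed components can be checked with the figures cited above: serialization of a 1500-byte frame on a 45 Mbps DS3, and propagation over fiber at roughly two-thirds the vacuum speed of light. The 200,000 km/s figure is the usual rule-of-thumb value consistent with the 4-6 µs/km range in the text.

    # Worked example of the fixed latency components above.
    frame_bytes = 1500
    ds3_bps = 45_000_000
    serialization_us = frame_bytes * 8 / ds3_bps * 1e6
    print(f"Serialization delay: {serialization_us:.1f} us")              # ~266.7 us

    fiber_km_per_s = 200_000                        # ~2/3 of 300,000 km/s in a vacuum
    propagation_us_per_km = 1e6 / fiber_km_per_s
    print(f"Propagation delay: ~{propagation_us_per_km:.0f} us per km")   # ~5 us/km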
Nonetheless, it should be noted that overall quality does not significantly degrade for either
voice or video.
The final network latency component to be considered is queuing delay, which is variable.
Variance in network latency is also known as jitter. If the average network latency in a network
is 100 ms, for example, and packets are arriving between 95 ms and 105 ms, then the peak-to-
peak jitter is defined as 10 ms. The primary cause of jitter is queuing delay, which is a function
of whether a network node is congested or not, and if it is, what scheduling policies (if any) have
been configured to manage congestion.
For interactive and streaming media applications, packets that are excessively late (due to
network jitter) are no better than packets that have been lost. Media endpoints usually have a
limited amount of playout-buffering capacity to offset jitter. However, in general, it is
recommended that jitter for real- time interactive media and streaming media applications not
exceed 10 ms peak-to-peak. Since the majority of factors contributing to the latency budget are
fixed, careful attention has to be given to queuing delay, as this is the only latency/jitter factor
that is directly under the network administrator’s control.
Unlike video conferencing, streaming video applications such as VBrick's have more lenient
QoS requirements because they are delay-insensitive (the video can take several seconds to
cue up and the end audience will typically not know the difference) and largely jitter-insensitive
(due to application buffering). However, streaming video may contain valuable
content, such as IPTV, e-learning applications or multicast company-wide meetings, and
therefore may require service guarantees in traversing the network that are somewhat different
than other video applications. Even though latency of the overall stream is not a major issue,
the actual video packets are very time-sensitive. Even 1% packet loss, or packets delivered out
of order, greatly affects the end user's quality of experience. QoS network settings are the
primary tool currently used to ensure that bandwidth is used as efficiently as possible and that
the VBrick video experience is the best possible.
Traffic Class             PHB    DSCP
VoIP Telephony            EF     46
Multimedia Conferencing   AF41   34
Real-Time Interactive     CS4    32
Scavenger                 CS1    8
Rev    RabbitMQ    amqp: 4369/tcp, 5762/tcp, 25672/tcp    Internal clustering methodology
RTP Multicast
With these industry changes, Flash video is the only method of delivering IP multicast to many
browsers such as Google Chrome and is indeed a compelling option for near-plugin-less
delivery to all major browsers.
Player Support
As discussed in the video players section, RTMFP is played via the Flash video player on all
compatible devices, whereas RTP Multicast is delivered via the VBrick player on Windows
browsers and Apple QuickTime on Mac browsers. HTML5 currently does not support direct
ingestion of multicast without a Java Applet or other proprietary plugin to act as an intermediate
layer. It is important to note that some competing products do claim a ‘HTML5 Multicast’
solution; however, these are all actually leveraging an intermediate layer, such as a Java
Applet.
Unicast Fallback
As of the spring 2016 release, Rev supports native fallback from RTMFP multicast to any other
unicast stream within the zone. This functionality works as follows:
A user on a multicast-capable device (Mac/PC) joins a webcast with the Flash player
available. They are presented with the Flash player and an RTMFP video stream.
The flash player attempts to play the video stream. If an “empty buffer” condition is
detected after 10 seconds, the user’s web browser automatically de-loads the Flash
player and migrates to the stream with the next highest priority.
This fallback is inclusive of the HTML5 player and will be performed seamlessly.
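The fallback behavior above can be summarized as a priority walk: if the RTMFP multicast buffer stays empty for roughly 10 seconds, the player is unloaded and the next-highest-priority stream is tried, ultimately reaching unicast and the HTML5 player. The stream names, priority list, and callback in the sketch below are illustrative assumptions, not Rev's actual client code.

    # Minimal sketch of the multicast-to-unicast fallback described above.
    EMPTY_BUFFER_TIMEOUT_S = 10

    def select_stream(streams, buffer_filled):
        """streams: list ordered by priority; buffer_filled: probe that reports playback success."""
        for stream in streams:
            if buffer_filled(stream, timeout_s=EMPTY_BUFFER_TIMEOUT_S):
                return stream          # playback succeeded; stay on this stream
        return None                    # nothing playable in this zone

    streams_by_priority = ["rtmfp-multicast", "hls-unicast-dme", "hls-unicast-cloud"]
    # demo: pretend the multicast buffer stays empty, so the player falls back to unicast
    print(select_stream(streams_by_priority,
                        buffer_filled=lambda s, timeout_s: s != "rtmfp-multicast"))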
IP multicast delivers application source traffic to multiple receivers without burdening the source
or the receivers while using a minimum of network bandwidth. Multicast packets are replicated
in the network at the point where paths diverge by Cisco routers enabled with Protocol
Independent Multicast (PIM) and other supporting multicast protocols, resulting in the most
efficient delivery of data to multiple receivers.
Many alternatives to IP multicast require that the source send more than one copy of the data.
Some, such as application-level multicast, require the source to send an individual copy to each
receiver. Even low-bandwidth applications can benefit from using Cisco IP multicast when there
are thousands of receivers. High-bandwidth applications, such as MPEG video, may require a
large portion of the available network bandwidth for a single stream. In these applications, IP
multicast is the only way to send to more than one receiver simultaneously. Figure 1 shows how
IP multicast is used to deliver data from one source to many interested recipients.
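The bandwidth advantage can be quantified with a back-of-the-envelope comparison: unicast requires one copy of the stream per receiver from the source, whereas multicast sends a single copy that the PIM-enabled routers replicate where paths diverge. The viewer count and bitrate below are illustrative assumptions.

    # Back-of-the-envelope comparison of source load for unicast vs. multicast delivery.
    viewers = 5000
    stream_mbps = 1.0

    unicast_source_mbps = viewers * stream_mbps      # one copy per receiver from the source
    multicast_source_mbps = stream_mbps              # one copy, replicated in the network by PIM

    print(f"Unicast from the source: {unicast_source_mbps:,.0f} Mbps")
    print(f"Multicast from the source: {multicast_source_mbps:,.0f} Mbps")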
For more in-depth information on IP Multicast, its concepts, and configuration, see the following
links on cisco.com:
http://www.cisco.com/c/en/us/td/docs/ios/solutions_docs/ip_multicast/White_papers/mcst_ovr.ht
ml#wp1015614
http://www.cisco.com/c/en/us/tech/ip/ip-multicast/tech-configuration-examples-list.html
Streaming video starts with a source, such as a VBrick encoder or Cisco TelePresence Content Server,
which first encodes live video into an IP format to be delivered across the IP network. Often, the first
device to encode the video will produce a unicast stream, and rely on a downstream device to take that
unicast stream and convert it to a multicast stream and multicast capable protocol.
The underlying IP network must be capable of supporting multicast distribution. The network must be
capable of transmitting the multicast traffic from the point of origination to the client device that will
decode and view the video. If multicast routing is not possible from the point of origination to the client
device, alternative methods of delivery must be used (i.e. using a unicast CDN, or using unicast to
traverse non-multicast capable networks with another multicast origination point closer to the client
device).
Not all streaming video protocols are capable of delivering video via multicast. While there are a wide
range of streaming video protocols, only a select few are multicast capable. We will focus on the three
major multicast capable protocols below:
Windows Media Video (WMV) – WMV is a Microsoft streaming video protocol that is
capable of using multicast to deliver video inside the enterprise network. This protocol
was popular in the past for its compatibility with Microsoft client devices and video
players (Windows Media Plug-in, Silverlight) along with its multicast capabilities.
However, Microsoft has discontinued development of this technology, and it is no longer
supported in the latest Microsoft Server products. As such, it should be viewed as a
legacy option.
Real Time Media Flow Protocol (RTMFP) – Adobe developed this protocol alongside
traditional Flash video streaming (RTMP) to allow transmission of live streaming video in
multicast. The advantages of this protocol include being compatible with commonly
deployed Adobe Flash players (not needing another specialized video player installed on
the client device). This Flash-based, multicast-capable streaming protocol will be the
preferred mechanism for leveraging multicast to deliver live streaming video inside the
Enterprise.
Video-on-demand streaming
Multicast can help in the distribution of streaming video, but the underlying assumption of
multicast is that there is a single source that will communicate with a large number of remote
devices at the same time. This model provides an excellent fit for streaming live video sources
to large numbers of viewers. However, video on-demand (VoD) streaming of recorded content
does not fit this model.
VoD viewing necessarily assumes that different viewers will request different content at different
times. Because of this wide range of source content and time based requests, multicast is not
appropriate for delivering this content. Each viewer will receive their requested recorded content
via Unicast.
While advanced enterprise wireless network deployments (such as those based on Cisco
wireless controllers and access points) can support IP multicast distribution, it is often the case
that support for multicast has not been enabled in all wireless network environments.
In this scenario, you must provide for clients on these non-multicast capable network segments
to access the live streaming video via unicast (see discussion of multicast to unicast failover
below, as well as VBrick Rev zoning concepts to exclude these wireless networks from
attempting to join a multicast stream).
Many Wide Area Network (WAN) environments do not support multicast across the WAN. In
some cases, these limitations can be overcome by tunneling multicast traffic across non-
multicast capable network connections. In other situations, it is preferable to deploy unicast-
based eCDN components to the remote sites to allow for unicast streaming of live video, as the
eCDN deployment can also help overcome other limitations in addition to the lack of
WAN multicast support.
All of the most popular mobile devices on the market today (iOS, Android, etc.) do NOT support
multicast in any fashion. This limitation of the mobile device means that provisions must be
made to allow mobile devices to access the live streaming content in unicast, and is another
reason to ensure that unicast-based eCDN functionality is deployed in addition to, or instead of,
multicast based video streaming.
Public – Assets or events marked public are excluded from user-based authentication
requirements. For video-on-demand assets, a public designation will allow the video
asset to be embedded in external web pages without authentication (such as on a
corporate web site), as well as provide a page containing the video which can be shared
via the standard sharing tab. If the Guest Video Portal (see below) is enabled, then
Public VOD assets will appear in the portal. For live events marked public, event hosts
can select a shared password for all participants to use, or allow anyone to join without a
password. In both cases, guest users will be required to enter an appropriate Display
Name, and a syntactically correct, though not validated, e-mail address.
All Users – The All Users designation used for both VOD and Live Events restricts the
asset/event to only authenticated users in the system. Users can be authenticated via
any of the options presented below, but must have a login to Rev in order to access the
asset or event. Furthermore, users cannot be filtered from the set that can authenticate;
if a user has a valid Rev login, they can access this asset.
Private – The private designation further limits access to the asset or
event. When this option is selected, the asset owner or event host will be presented with
a multi-select search box to select which users, groups, or teams have access to the
event or asset. This multi-select search box allows the host/admin to select any
combination of local, LDAP, or SSO users or groups along with local teams (see below
for more information).
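The three access levels above can be summarized as a simple authorization check, as in the sketch below. The data model, field names, and function are illustrative assumptions, not Rev's actual authorization code.

    # Simplified sketch of the Public / All Users / Private access levels described above.
    def can_view(asset, user):
        """asset: dict with 'access' and optional 'allowed' set; user: dict or None (guest)."""
        if asset["access"] == "Public":
            return True                                        # no authentication required
        if user is None:
            return False                                       # All Users / Private require a Rev login
        if asset["access"] == "All Users":
            return True                                        # any authenticated Rev user
        # Private: only the selected users, groups, or teams may view
        selected = asset.get("allowed", set())
        return bool(selected & ({user["id"]} | set(user.get("groups", []))))

    video = {"access": "Private", "allowed": {"team-training", "user-42"}}
    print(can_view(video, {"id": "user-7", "groups": ["team-training"]}))   # True
    print(can_view(video, None))                                            # False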
Rev both includes its own internal user repository and can synchronize users and groups with
Active Directory via LDAP. LDAP-created users can additionally authenticate through LDAP,
and all users, regardless of source, can be configured for SSO authentication through SAML
2.0. The table below summarizes:
User Source    LDAP Authentication    SSO (SAML 2.0)
LDAP User      X                      X
Rev’s built-in teams capability exists on top of both local and LDAP groups and users. Teams
can contain a combination of local and LDAP users and groups and can be used for asset/event
permissioning alongside them.
The Active Directory LDAP Connector is not required if Rev is deployed on-premise and
connected to an on-premise Active Directory server.
The Active Directory LDAP Connector server's purpose is to provide synchronization between
the client's Active Directory server, which is typically behind the firewall, and instances of Rev in the
VBrick cloud. The connector application pulls specific limited authentication group and user
information from the LDAP server and then pushes it to Rev. Authentication information from
Rev is then passed down to the AD connector, which allows AD to issue a ‘success’ or ‘failure’
command back to Rev. In this way, there is no import of LDAP/Active Directory passwords or
other sensitive information into Rev.
Access to the LDAP server is required on user-configurable ports, typically 389 or 636 for LDAP and
LDAPS respectively.
4 vCPU, 8 GB RAM, and 250 GB of storage is the recommended configuration for most
deployments.
For VOD content, the administrator has the option of provisioning a guest portal. By default, this
URL will be the URL of the Rev tenant, followed by /#/guest. This portal contains a per-category
listing of all videos within the system that are marked ‘public.’ Any user who can functionally
access this URL can browse the available categories and videos, select a video and play it
back. These unauthenticated users can share the video via e-mail or link with other users but
cannot comment or rate the video. Public playbacks will be included in the administrative stats,
but without a username attached to the play. Authenticated users can optionally sign in at the
guest portal to access more content and functionality. A sample guest portal:
Named user licenses are assigned to specific users dynamically upon their first login to the
system. This allows an organization to purchase a number of users smaller than their full
employee count, while still importing the full employee directory. As more employees log into
Rev for the first time, they will convert from unlicensed users to licensed users and the available
license count will decrement:
One thousand named user licenses are included with the purchase of a Rev starter pack, with
tiered pricing that includes additional discounts for volume purchases of users. Per whole
increment of 1000 users of Rev Cloud, 250 GB of video storage is included (up to 5 TB total), and
2500 hours per year of anonymous access and VC recording. An additional PID for 1TB of
storage is available. Rev On-Premise does not include any cloud storage or bandwidth
allocation, but does include 5000 hours per year of anonymous VOD and Webcast access per
1000 users purchased.
For user based customers, cloud and on-premise video delivery to named users is unlimited,
subject to acceptable use policies. Anonymous access, including cloud and on-premise video
delivery is metered against the included hours allocation. User based customers can purchase
additional cloud access hours for additional anonymous streaming usage.
Cloud Access Hours is an additional licensing option for webcasts and VOD to both
authenticated and unauthenticated users on a consumption basis. Available only for Cloud
deployments, the Cloud Access Hour option is sold on a ‘viewer-hour’ basis. The viewer-hour
includes both the software licensing charge for accessing the service or software, as well as
For both user based and hour based cloud customers, cloud VC recording is consumed on an
hours basis against the included or separately purchased hours. Note that 1 hour of VC recording
will consume two viewer hours of usage (one stream in and one stream out).
All customers can purchase DMEs on a per-instance basis in the small, medium, and large
levels described above.
Encryption
Rev Cloud inherently includes robust encryption at rest and in motion (see the next section for
more information). Rev On-Premise can be similarly configured to communicate only through
HTTPS. SSL termination can be performed either at the load balancer level or at the Rev
Application server level. Encryption at rest can be achieved via disk or block-level encryption of
the metadata servers, or can be achieved by leveraging an S3-compatible object store with
native encryption.
DME supports block-level encryption at the hypervisor level for data at rest, and as of the DME
3.10 release, supports the easy configuration of HTTPS delivery of HLS content (note that a
valid SSL certificate is required for this configuration).
Cloud Positioning
Advantages
Rev was architected from the lowest level to take advantage of the increased scalability and
security of the cloud. As such, the standard VBrick solution architecture is inherently a hybrid
one with Rev in the Cloud and DME on premise. While fully on-premise and fully cloud options
are additionally supported, only the hybrid configuration takes advantage of the cloud for
Rev Cloud is an inherently elastic service. Whereas on-premise installations require a fixed
amount of hardware resources for a given load, and are thus constrained by the amount
provisioned, Rev Cloud can dynamically adjust and reallocate resources as needed. As such, a
customer may want to host a 1,000-person webcast on one day and a 10,000-person webcast
the next; with Rev Cloud, no configuration changes are needed.
Rev Cloud also provides a number of management benefits. Most Rev Cloud orders
are provisioned within 24 hours, and Rev Cloud includes upgrades to the latest release
automatically. This frees up client administrators to focus on more pressing tasks.
Rev Cloud is updated on an 8-week delivery cycle so features are delivered to end users much
more quickly and without IT staff being required to perform on-premise upgrades. Rev Cloud
also exclusively offers access to the Cloud VC recording features.
Finally, Rev Cloud includes a number of inherent technical benefits, including the CDN
integration and security capabilities outlined below.
CDN Integration
Rev Cloud includes a native integration with Akamai for delivery of live and on-demand content.
Rev Cloud customers' existing bandwidth allocation includes delivery of content via Akamai, if
configured. From a live perspective, VBrick will provision Akamai publishing points for
customers to use for live events upon request. This way, a customer can configure a
presentation profile that will deliver content to internal users via DMEs and content to external
users via Akamai, with no user input required.
From an on- demand perspective, Akamai VOD caching is configured by default for all Rev
Cloud customers. For viewers in the default zone, Akamai will serve VOD asset requests as
needed. This is done securely as follows:
Akamai receives the playback request and separately queries Rev to determine if the
playback request is authorized (this prevents a malicious user from sniffing the
playback URL and providing it to others).
If Rev authorizes Akamai to provide the playback, Akamai first checks its local cache to
determine if the asset is available, streaming it directly to the user if so.
If the asset is not available in the local cache, Akamai requests it from Rev, caches it
locally and streams it to the user over the private Akamai network. This asset is then
cached for the next request.
Security
Application Security
VBrick’s Cloud Rev platform exclusively leverages HTTPS technology for all user, admin, and
device communication. This provides commercial-grade encryption of metadata and
administrative content in motion, both over private networks and the global internet. Rather than
relying on a redirection to HTTPS for certain higher-risk functions, VBrick’s Cloud Rev servers
simply operate only over HTTPS/443, providing a seamlessly secure end-user and
administrative experience.
This additionally holds true for device control and directory integration. Devices such as the
VBrick DME communicate with the Rev Cloud in both a firewall-friendly and secure manner.
Each device inside a customer’s network, or on the public internet, makes an outbound HTTPS
call to the specified Rev Cloud DNS name, presenting a customer-configured API key.
API keys can be issued on a per-device basis to allow ease of management and revocation.
The Rev Cloud servers then authenticate the API key and the expected MAC address of the
device, providing several layers of authentication. At all times the communication channel is
protected via HTTPS and thus invisible to packet sniffing. This same secure device control
mechanism is used to protect directory integration including Active Directory, LDAP, or SAML
2.0.
For customers who want encryption in motion for both on-demand and live video streaming,
VBrick fully supports encrypted streaming technologies. At the customer's option, either certain
locations or an entire deployment can be locked down to provide streaming using only the
HLS protocol over HTTPS.
HLS is an adaptive live and on-demand streaming protocol that allows a video client to switch
seamlessly between lower and higher quality versions as network conditions deteriorate
and improve. As an HTTP-based protocol, it can be run over HTTPS for a
secure streaming experience. The VBrick DME, working in close communication with Rev
Cloud, can be configured to provide only this video delivery option, ensuring that all live
and on-demand streams are delivered over an encrypted channel. By embedding this secure stream in a
secure HTTPS page as described above, we ensure stream hijacking is not available to anyone
observing the transmission on the network. All Akamai delivery of VOD assets to Rev Cloud
customers is similarly encrypted via HTTPS.
Finally, all video files stored in the Rev Cloud are protected by industry-leading AES-256
encryption at rest at all times.
VBrick has adopted the ISO 27001 standard for operating a secure information security
program. Additionally, VBrick is committed to operating under the controls of the FedRAMP
program, which uses controls that are a subset of the NIST SP 800-53 Revision 4 standard.
Not only do these standards have industry-wide recognition and acceptance, but they also
provide an externally verifiable framework for operating our Cloud service and its supporting
services securely.
VBrick operates its Cloud service using best practices for secured production environments.
Some of the steps we take to secure the Cloud service and your data are:
DMZ, application, and data layers protected by separate firewalls with a deny-all, permit-by-
exception model of access
Even the best security program is only as good as its execution. VBrick’s Infrastructure and
Operations security is subjected to the following validation in order to assure our customers of
our commitment to security:
Third-party vulnerability scans at least quarterly, and with every major update to the
application or infrastructure.
Internal audit team to regularly assure compliance with VBrick’s stated information
security policies and processes.
Third-party assessment of our compliance with standards such as FedRAMP and ISO
27001. This allows us to provide current and prospective customers an objective
assessment of VBrick's security program and compliance.
VBrick’s cloud infrastructure provider, Amazon Web Services, is the worldwide leading Cloud
infrastructure provider, and VBrick is able to leverage their capabilities to secure the Rev Cloud
platform.
AWS maintains state-of-the-art security of their datacenter premises and maintains practices
intended to ensure maximum physical security of those premises. They have physical and
environmental security capabilities that meet or exceed the capabilities of other major providers.
AWS has implemented a world-class network infrastructure that is carefully monitored and
managed. This capability includes Distributed Denial of Service (DDoS) monitoring and
protection, encrypted communications, and support for network Security Groups and Access
Control Lists.
The IT infrastructure that AWS provides to VBrick is designed and managed in alignment with
best security practices and compliance programs for a variety of IT security standards, such as
ISO 27001, FedRAMP, PCI DSS Level 1, and SSAE 16.
Regardless of the choice of Cloud or On-Premise Rev, the top-level part number that needs to
be ordered is R-VBRICK-USER-SP.
When selecting options for this top-level part number, the CCW user will be presented with
several options, as seen below.
When ordering Cloud User licenses, users should select the Cloud User Tiers option seen
above, then configure the desired number of cloud user licenses for the given application. Cloud
User Tiers have different price points depending on the number of annual user subscriptions
ordered. The different tiers and corresponding part numbers are listed below (1,000-2,500
users, 5,000-10,000 users, 10,000-20,000 users, and 30,000+ users).
Additional Storage
VBrick Rev Cloud User licenses include a Right to Use allotment of storage per 1,000 users. For
each additional 1,000 users, the customer is allotted 250 GB of video-on-demand storage in the
cloud. For example, a 5,000-user deployment would include 1.25 TB of VOD storage.
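The entitlement arithmetic can be sketched as follows: 250 GB of VOD storage per whole increment of 1,000 named users, capped at the 5 TB total noted in the licensing section. The function name and cap parameter are illustrative.

    # Worked example of the cloud VOD storage entitlement described above.
    def included_vod_storage_gb(named_users, gb_per_1000=250, cap_gb=5000):
        increments = named_users // 1000           # whole increments of 1,000 users only
        return min(increments * gb_per_1000, cap_gb)

    print(included_vod_storage_gb(5000))           # 1250 GB = 1.25 TB, as in the example above
    print(included_vod_storage_gb(30000))          # capped at 5000 GB (5 TB total)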
As described in the licensing section above, Cloud Access Hours represent an alternative
licensing model wherein authenticated and unauthenticated users can access the platform on a
consumption basis. All streaming, both VOD and Live, from both the Cloud and DMEs will
consume a viewer hour.
Cloud Access Hour licenses are sold in increments of 10,000, 50,000, and 100,000 viewer
hours. These licenses expire one year after the purchase date. These Cloud Access Hour
licenses may be purchased together with, or separate from, named user Cloud user licenses.
When purchased with named user Cloud licenses, these hours provide additional anonymous
usage entitlements. The graphic below shows the options available in CCW when selecting the
Rev Cloud Access Hour sub item.
When ordering the on- premise user licenses, select the number of users required for the
application, and order the corresponding quantity in the appropriate tier (5,000-10,000, 10,000-
20,000, or 30,000+). In addition, there is a specific Education user license available to give
access to students in K-12 or Higher Education organizations at a reduced price.
However, at the time of writing of this document, Cisco Commerce Workspace requires the
quantity of on- premise user licenses to match the quantity of user maintenance SKUs ordered.
To get around this requirement, order the first year of maintenance part numbers in the line item
with the user licenses, then enter a separate line item for R-VBRICK-USER-SP containing only
the additional years of maintenance part numbers.
When ordering VBrick DME software, start with the top level part number R-VBRICK-DME-SP.
If a perpetual software license model is desired, select the sub item Distributed Media Engine
(DME). In this sub-item, you can select the type and quantity of DMEs required for the
application, as well as the software maintenance required by the perpetual software licensing
model. If multiple years of maintenance are required, the user can multiply the quantity of DMEs
times the number of years of maintenance desired. For example if the user desires two Medium
DMEs with three years of maintenance, they should order quantity 2 of the DME-M part number
for the software license, along with quantity 6 of the associated DME-M-MNT part number for
the software maintenance. Below shows the available part numbers for ordering DME and
associated maintenance in a perpetual software license model.
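The maintenance quantity rule above reduces to a simple multiplication, as sketched below; the function name is illustrative.

    # Worked example of the perpetual-license maintenance quantities described above:
    # maintenance SKU quantity = number of DMEs x years of maintenance desired.
    def maintenance_quantity(dme_count, years):
        return dme_count * years

    print(maintenance_quantity(dme_count=2, years=3))   # 2x DME-M with 3 years -> qty 6 of DME-M-MNT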
Below are the options available for ordering DME as an annual Subscription through Cisco
Commerce Workspace.
It is important to note that on the Cisco price list, Rev and DME are sold as software-only part
numbers, meaning server hardware, hypervisor licensing, and operating system licensing
are not included by default and must be purchased separately or provided by the customer via
existing compute resources and software licensing.
Each component of the VBrick solution requires a different model of server when purchased
using the pre-configured server options on the Cisco price list.
When the VBrick Rev application is deployed in a Cloud or Cloud/Hybrid design, there is no
requirement for physical servers for the Rev application itself; a Cloud/Hybrid design, however,
does require physical servers for the DME components inside the enterprise network.
Top Level PID VBRICK REV Solutions Plus User Tier Offers
The recommended hardware to run the VBrick DME comes in three different configurations,
corresponding to the Small/Medium/Large licensing and capacity of the DME software license.
The table below shows the part numbers for the recommended Cisco UCS hardware used to
run the VBrick DME.
The Cisco UCS hardware part numbers above do not include hypervisor licensing, nor operating
system licensing to run the VBrick Rev and DME components.
Both VBrick Rev and DME run as virtual machines and require a VMWare hypervisor installed
and licensed on the Cisco UCS hardware in order to host the VBrick Rev or DME virtual
machines. Many organizations have an existing license agreement with VMWare to provided
the required VMWare standard or higher license required for each physical Cisco UCS server.
Note that the VMWare licensing is based on the physical CPU count of the server. Cisco’s
recommended UCS server configurations for Rev and DME-Large contain two physical CPUs,
and thus require quantity two of the VMWare license. Cisco recommended UCS hardware
configurations for DME-Medium and DME-Small contain a single physical CPU and require
quantity 1 of the above license.
When the above part number is ordered, support services must be added to the configuration as
well. By default, one year of service is configured, but this length can be adjusted in Cisco
Commerce Workspace as seen below.
The VBrick DME ships as a self-contained virtual appliance with a hardened Linux-based
operating system and software application. As such, there is no consideration required for the
operating system licensing when deploying a VBrick DME.
When considering the requirements for the Microsoft Windows Server 2012 operating system
licensing, it should be noted that most organizations will have an existing agreement with
Microsoft for Windows Server licensing and will not require the operating system license to be
included as part of the Bill-of-Materials from Cisco/VBrick. In the scenario where the customer
requires a Microsoft Server license for the Rev Runtime virtual machines, VBrick can provide it
upon request.
When considering the Linux operating system for the Rev Elastic Search and Mongo virtual
machines, the recommended deployment model uses an Ubuntu based operating system for
which the Rev installation process is optimized, and does not require a specific license for the
operating system.
Note that while Red Hat Linux is supported for Rev Elastic Search and Mongo components,
Ubuntu is the recommended operating system. Only in the event that a customer requires a Red
Hat Linux operating system to be deployed (instead of Ubuntu), and has no existing licensing
contract with Red Hat, would an additional operating system license be required. In this case,
the Cisco part number for a Red Hat Linux license is shown in the table below. Quantity one of this
part number is required per physical Rev server in this scenario.
RHEL-2S2V-1A= Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 1-Yr Support Req
Example 1 – Cloud/Hybrid for 5000 users, 5 DMEs, and Cloud Access Hours
In this scenario, the customer has 5000 users, needs live and VoD content distributed to five
different locations, and wants to allow for additional anonymous usage with approximately
10,000 viewer hours per year for the anonymous usage.
It has been determined that the customer needs one DME-Large, 2 DME-Medium, and 2 DME-
Small. The customer does not have existing servers on which to run the DMEs, so they will
purchase the recommended Cisco UCS server hardware for the DMEs. The customer has an
existing licensing agreement with VMWare, and does not need any additional VMware licenses
for the DME hardware.
The customer wants to sign up for the Rev subscription for three years, and wants the DME
components to be billed as a subscription as well, so that all costs (both Rev and DME) will be
annualized Opex, thus requiring three years of DME subscription access for the five DMEs.
Line Item    Name                 Description                                                   Quantity
1.0          R-VBRICK-USER-SP     Solutions Plus for VBRICK REV User Tiers - Top Level          1
1.1          CL-USER-5-10K        Cloud Rev User Tier 5000-9999                                 15000
1.2          SP-PRODUCTS-TERMS    Buyer Acceptance of SolutionsPlus Terms and Conditions        1
1.3          VBRICK-PAK           VBRICK PAK for REV and DME                                    1
1.4          VBRICK-RTU           VBrick User and DME Right to Use                              1
1.5          EXT-WEBCAST-10000    Rev Subscription Public Webcast Access 10000 annual hours     1
2.0          R-VBRICK-DME-SP      Solutions Plus for VBRICK REV DME - Top Level                 1
2.1          DME-S-SUB            DME Small Subscription (Annual)                               3
2.2          DME-M-SUB            DME Medium Subscription (Annual)                              6
2.3          DME-L-SUB            DME Large Subscription (Annual)                               6
2.4          SP-PRODUCTS-TERMS    Buyer Acceptance of SolutionsPlus Terms and Conditions        1
2.5          VBRICK-PAK           VBRICK PAK for REV and DME                                    1
In this scenario, the customer requires the entire solution to be deployed on-premise for
regulatory reasons. The Rev application must be highly available and must tolerate the
failure of a single physical server.
The customer has one HQ site with 1500 users and needs a large DME at this location. They
also have 10 smaller sites with 200-500 users each, where a DME Medium is required. In
total, 5000 user licenses are required.
The customer wants to purchase Cisco UCS hardware on which to run the DME software. The
customer does not have VMWare licenses available, and they must be supplied as part of the
Bill of Materials. The customer wants all components to be purchased as a perpetual license for
the software, and wants three years of service included.