QoE - QoS Measurement Framework Approach To QoE Engineering
MRN PG Report
QoE/QoS Measurement Framework
Approach to QoE Engineering
V1.1
Table of Contents
Step 2: Determine QoS metrics and other factors that impact QoE
Defining HRX (hypothetical reference connections)
Step 3: Determine QoE to QoS relationship and QoE Models
Overview of the QoE standardization activities
Video QoE Models
Full Reference Video QoE Models
No Reference QoE Models
Interactive Video Conferencing QoE Models
Audio QoE Models
Gaming QoE Model
Quantitative Timeliness Agreement (QTA)
Step 4: Measurements, monitoring, and telemetry methodology and specifications
Distributed QoS Measurement Points
Industry Initiatives Towards Network-as-a-Service
Survey of QoS Measurements
Step 5: Apply QoE-QoS Framework to selected Use Cases
Short Form Video - use case summary
3D Volumetric Video Telepresence - use case summary
Conclusions
References
This technical report results from a collaborative effort within TIP’s Metaverse Ready
Network Project Group.
The main objective of this project group is to create and develop a systematic
approach and corresponding guiding principles for designing and optimizing
multimedia networks and emerging Metaverse immersive applications to deliver an
enhanced Quality of Experience (QoE) to ensure customer and end-user satisfaction.
Xinli Hou
Connectivity Technologies and Ecosystems Manager, Meta Platforms Inc
xinlihou@[Link]
Chris Murphy
Regional CTO, EMEA, VIAVI Solutions
[Link]@[Link]
Kafi Hassan
Technology Development Strategist, T-Mobile USA
[Link]@[Link]
Javed Rahman
Technology Development Strategist, T-Mobile USA
[Link]@[Link]
Minqi Wang
Research Engineer, Orange Innovation (France)
[Link]@[Link]
Kevin Smith
Distinguished Engineer, Vodafone Networks
[Link]@[Link]
Gavin Young
Head Of Fixed Access Centre Of Excellence, Vodafone Networks
gavin.young2@[Link]
Mayur Channegowda
Broadband Architect, Vodafone Networks
[Link]@[Link]
We propose a five-step top-down model that moves away from the traditional single-
service, network-centric design cycle techniques commonly used in today's networks.
Quality of Experience (QoE) refers to a system's overall performance from the user's
perspective when using a service or application. It reflects how effectively the system
enables users to achieve their goals. QoE focuses on end-user satisfaction, including
sound quality, video quality, and interaction speed, with a strong emphasis on the
application layer. It is distinct from Quality of Service (QoS), which concerns
network-level metrics such as throughput, latency, and packet loss.
For many applications, efforts have been made to translate subjective measures of user
experience into objective metrics (e.g., ITU-G.107). These translations help define
objective requirements, including the metrics and targets that networks should meet.
Using the QoE requirements to guide network engineering and design has two
significant benefits:
1. Network design targets are grounded in user needs and experience for the
services carried, making them as attractive as possible to potential users;
2. We avoid over-engineering or under-engineering the network, ensuring the
provider can deliver high-quality content without wasting resources.
We must first break down applications into their modalities and attributes to define
QoE metrics. Then, we categorize these metrics based on their relevance to specific
applications. Let’s explore the first important step: the QoE metrics, which we divide
into three subsections.
To define QoE metrics, we need a perceptual model that addresses the key factors
influencing end-user appreciation and satisfaction with a service.
System-related influencing factors are the technical aspects that engineers and
architects are generally more familiar with. They are associated with media capture,
transmission, networking, coding, storage, rendering, reproduction/display, and the
communication of information from content production to the user. Ease of use, often
referred to as User Experience (UX), is also part of this category but has human-related
aspects.
Figure 4: We have organized and identified the most important QoE factors that affect a wide
variety of applications. In an ideal world, we would have a model that accounts for all of them;
however, this is not practical, and we need to select a subset relevant to our use case.
As we define product use cases, it would be beneficial to examine the various trade-offs
and constraints related to the human, system, and context influencing factors.
Figure 5: QoE Metrics per application categories with relevant industry standards references
For instance, in video applications, video fidelity is a key aspect of the Quality of
Experience (QoE), which includes both temporal (motion artifacts) and spatial
(compression artifacts) factors. In newer services like short-form video, where content
typically lasts less than 45 seconds, timeliness and interactivity have become more
important than video fidelity, according to recent research [1].
In gaming applications, trade-offs are often made between temporal factors (video
frame rate), spatial factors (video fidelity), and interactivity (response to commands).
The type of game (context) also influences the encoding complexity and sensitivity to
latency and video distortion. Players of fast-paced games prefer to accept a reduction
in video or animation quality rather than tolerate high delays, as gameplay and player
success heavily depend on their ability to react quickly.
An important consideration for determining QoE metrics is the context, including the
scenario and environment where the application is used. This context influences end-
user expectations and is often overlooked. Factors like video types, game genres and
complexity, and the gaming platform (mobile smartphones versus dedicated
terminals) impact expectations and, consequently, the acceptability of QoE targets.
Additionally, AI-based inference engines are now widely used to provide better
recommendations aligned with user interests, particularly in short-form videos, which
makes measuring QoE objectively even more challenging.
In step 2, we will describe the method for identifying how QoE is affected by the
system level and, particularly, the most influential QoS factors.
Overall, Quality of Experience (QoE) is mainly influenced by three key metrics: (1)
Responsivity, (2) Media Fidelity, and (3) Availability. These metrics help derive Quality of
Service (QoS) factors for each major segment of the end-to-end path: client, network,
and server-side, as illustrated in Figure 6.
Figure 6: End-to-end QoS impairment factors divided by core system elements, including the
client device, network, and server side where applications and services are typically hosted.
Figure 6 highlights the impact of the client, network, and server segments on QoE and
overall user satisfaction. Each segment introduces specific Quality of Service (QoS)
impairments. The management of these impairments depends on the ownership of each
segment, reinforcing the need for standardized end-to-end guidelines provided by TIP
and this working group.
Figure 7 illustrates a typical end-to-end reference connection for an end user accessing a
service on the Internet. It highlights the relevant QoS factors that impact QoE, which should be
considered in planning and optimizing to ensure service delivery quality.
Content application providers (CAPs) control certain factors, illustrated in the blue
bubbles in Figure 7, including application, network portion, and user software stack.
Communication service providers (CSPs) also manage factors, shown in the orange
bubbles, such as throttling or shaping video traffic on cellular networks. Achieving
complete end-to-end (E2E) visibility of the ecosystem is very challenging for any one
provider. Therefore, there is a strong desire to share information among the various
participants in the end-to-end ecosystem; see the IETF SADCDN work [22].
Figure 8 shows a high-level overview of the HRX approach applied in planning the virtual
reality (VR) service, focusing on the motion-to-photon latency budget allocation. To ensure the
overall network meets the QoE targets, an impairment budget planning exercise is used to
determine allowances for each network segment and the nodes within those segments. Note:
Time warping is a reprojection technique that maps the previously rendered frame to the
correct position based on the latest head orientation information and runs in parallel with the
synchronous rendering process. It translates the image by a certain number of pixels based on
changes in head position between the start of rendering and the initiation of the time warp
operation.
Figure 8 illustrates an example of how to apply HRX in the product planning phase,
where QoE engineering must be conducted, such as the example delay budget
highlighted in yellow boxes. This process defines the end-to-end (E2E) tolerable delays,
allocates them to each network segment where QoE could be significantly impacted, and
outlines the network requirements necessary to achieve specific QoE targets.
Additionally, it aids in understanding the trade-offs between delay and QoE impacts.
The goal is to translate the QoE requirements into lower-layer definitions (application
and network layer) to define our QoS requirements. These include metrics and targets,
as well as guidelines and rules for traffic engineering, allowable operating ranges, and
so forth. The QoS requirements are used to engineer the network so that the services
carried will meet their QoE targets.
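The impairment-budget exercise described above can be sketched as a simple check that per-segment allowances sum to within the end-to-end target. The segment names, the 20 ms motion-to-photon target, and all allowance values below are illustrative assumptions, not figures from this report.

```python
# Hypothetical HRX-style delay budget check for a VR motion-to-photon
# (MTP) target. Segment names and allowances are illustrative only.

MTP_TARGET_MS = 20.0  # commonly cited VR comfort target (assumed here)

# Per-segment impairment allowances (milliseconds)
budget_ms = {
    "sensor_sampling": 2.0,
    "device_processing": 4.0,
    "radio_access": 4.0,
    "transport_core": 3.0,
    "edge_rendering": 5.0,
    "display_refresh": 2.0,
}

def check_budget(budget, target_ms):
    """Return the allocated total and remaining headroom vs. the target."""
    total = sum(budget.values())
    return total, target_ms - total

total, headroom = check_budget(budget_ms, MTP_TARGET_MS)
print(f"allocated {total:.1f} ms, headroom {headroom:.1f} ms")
```

If the headroom goes negative, some segment's allowance (or the codec/rendering pipeline it represents) must be renegotiated before the QoE target is achievable.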
Industry and academic researchers have been actively pursuing the topic of Quality of
Experience (QoE) for several decades. In the early 2000s, there was a particular focus on
services such as VoIP, IPTV, and various over-the-top (OTT) services. QoE research
remains a vibrant field, originating from the telecommunication and multimedia
engineering domains, that aims to understand, measure, and design the quality of
experience for multimedia technologies. A summary of the numerous standardization
and implementation guidelines from this work is presented in Tables 1-5.
The IEEE has a working group standard that focuses on human factors for AR/VR and
Metaverse-related applications. The IEEE Standard 3333.1.3 [2], titled “Deep Learning-
Based Assessment of Visual Experience Based on Human Factors,” identifies factors
contributing to a user’s perceptual experience, including human, system, and context
factors. The standard specifically investigates how to estimate the mechanisms of
human visual perception. The assessment of human visual perception is divided into
two subgroups: perceptual quality and VR cybersickness. To measure Quality of
Experience (QoE), the standard uses two evaluation methods: deep learning models
that consider human factors for various QoE assessments and a subjective test
methodology with a content database. For the subjective test methodology, the
standard developed an immersive VR content database to evaluate cybersickness and
the sense of presence. This VR content database is available for free download and use
in scientific research [2].
4. 3GPP TR 26.929 (2020) QoE parameters and metrics relevant to the Virtual
Reality (VR) user experience.
5. 3GPP TR 26.909 version 17.0.0 Release 17 (2022) Study on improved
streaming Quality of Experience (QoE) reporting in 3GPP services and
networks.
6. Also, it introduces an outline for QoE/QoS issues of XR-based services, the
delivery of XR in the 5G system, and an architectural model of 5G media
streaming defined in 3GPP TS 26.501 (2020)
7. 3GPP TR 26.998 version 17.0.0 Release 17 (2022) LTE; 5G; Support of 5G
glass-type Augmented Reality / Mixed Reality (AR/MR) devices
6. Others
Additionally, other Quality of Experience (QoE) related work is performed by other
industry groups, such as:
• Virtual Reality Industry Forum (VRIF)
• Moving Picture Experts Group (MPEG)
• Khronos Group the OpenXR™ Specification ([Link])
• World Wide Web Consortium (W3C)
• ITU-T SG16/Q8 Immersive Live Experiences
• ETSI Technical Committee (TC) Human Factors (HF)
• WiFi Alliance XR
Tables 1-5 below present an initial list of various standards organizations that focus on
Quality of Experience (QoE) in real-time applications, including the metaverse, voice
and video, gaming, planning aspects, and telemetry. This list is not exhaustive, and we
welcome feedback on other industry standards, areas of interest, or any anticipated
issues, such as licensing terms and royalties. The list also includes ongoing work related
to QoE in Metaverse AR/VR/XR.
Metaverse AR/VR/XR
Human and system factors and metrics affecting the user-perceived experience of
virtual reality (VR) and augmented reality (AR) services.
Service quality monitoring requirements.
Latency and synchronization aspects including motion-to-photon latency, motion-
to-sound latency, A/V synchronization.
Subjective test methodologies to evaluate aspects of QoE for 360 videos viewed in
head-mounted display.
Measurement methods for spatial audio telemeeting systems.
QoS networking level performance requirements.
Model for multimedia Quality of Service (QoS) categories from an end-user
viewpoint
Metaverse QoE requirements development.
Video
Objective parametric quality assessment model to predict the impact of audio and
video media encodings and observed IP network impairments on QoE in
multimedia streaming applications.
Measurement approaches, diagnostic analysis and KPIs/KQIs for video-based
services, including video, audio quality estimation and quality integration.
Methodology to conduct subjective quality assessment of multi-party telemeeting
Audio
The E-model (ITU-T G.107) offers a standard method for the prediction and planning of
telecom networks. It is an analytical tool for estimating end-to-end VoIP conversation
quality across networks and considers a wide range of impairments, including codec
type, packet loss, delay, and echo. It is useful in transmission planning tools to assess
VoIP audio performance, establish benchmark networks for comparison, and
compare design alternatives.
1. ITU-T G.107 (2016) – The E-model: a computational model for use in transmission planning
2. ITU-T G.109 (1999) – Definition of categories of speech transmission quality
3. ITU-T P.1305 (2016) – Effect of delays on telemeeting quality
4. ITU-T P.1310 (2017) - Spatial audio meetings quality evaluation
5. ITU-T G.114 (2003) – General Recommendations on the transmission quality for an entire international
telephone connection
The model is a network planning tool which can be used by various stakeholders
for purposes such as resource allocation and the configuration of IP-network
transmission settings, such as the selection of resolution and bitrates, under the
assumption that the network is subject to packet loss, throughput limitations, and latency.
1. ITU-T G.1072 (2020) – Opinion model predicting gaming quality of experience for cloud gaming services
2. ITU-T G.1032 (2017) – Influence factors on gaming quality of experience
3. IEEE P2948/P2949 – Recommended practice for the evaluation of cloud gaming user experiences
1. ITU-T [Link] – Diagnostic assessment of QoS and QoE for adaptive video streaming sessions
2. Broadband Forum PEAT – Performance, Experience and Application Testing
3. Broadband Forum QED – Quality Experience Delivered
4. BBF TR-452.2 – Quality Attenuation Measurements using Active Test Protocols
5. IETF IOAM – In-Situ flow and on-path telemetry
6. MEF 23.2 (2016) – Carrier Ethernet Class of Service
7. ITU-T Y.1541 (2011) Network performance objectives for IP-based services
8. ITU-T GSTR-5G QoE (2022) – Quality of experience (QoE) requirements for real-time multimedia services over
5G networks
9. ITU-T [Link]-5G (2024) – QoE factors for new services in 5G network
10. ITU-T [Link] – Computational model used as a QoE/QoS monitor to assess video telephony services
11. ITU-T J.1631 Functional requirements of E2E network platforms to enhance the delivery of cloud-VR services
To define QoE requirements for services, user perception models of quality are often
used. Several approaches to QoE modeling exist, and these are typically divided into
three broad categories:
• subjective
• objective
• hybrid
Subjective models rely on human test panels and are costly to run at scale; objective
models, on the other hand, can be digitally automated and deployed at scale. We will
focus the next part of the discussion on these models.
Parametric QoE models are commonly used for product design, live network quality
monitoring, and product development planning. They help assess the impact of
interventions on Quality of Experience (QoE). Telecom operators and vendors are major
users in the planning phase of deploying network services for multimedia options such
as VoIP and video streaming.
Figure 10a: Compilation of no-reference video QoE models from academia, private industry, and
industry standards
So far, industry and academic research have focused on either the video fidelity (left
side) or the timeliness and interactivity (right side) aspects of Figure 10. An ideal
composite QoE metric would incorporate both aspects, but such a metric is not currently
widely available. The QoS factors listed in Figure 10 are an example of what influences
these QoE models.
Figure 10b: Composite short-form video QoE model incorporating both the video fidelity aspect
and timeliness for loading/rebuffering
There are a few challenges stemming from the lack of reference video metrics. These
issues are being addressed but have not yet been resolved, namely:
• There are no sufficiently accepted consolidated composite metrics in the
industry that include both the fidelity aspect and timeliness for short-form video
services [18].
• There are no recommendations or standards that define common testing
methodologies.
• Existing no-reference models are still limited in their accuracy and not yet
universally adopted.
Active research is ongoing to understand and model the impact of loading time [1] [17]
and rebuffering in short-form video applications. The big challenge is in the no-reference
models [18] [19], where more research is needed, and industry standardization
and adoption of commonly accepted QoE models are required.
MOS standards for interactive audio-video applications have yet to be defined, but
researchers are beginning to show interest in this area. Technically, the research paper
[8] reveals the differing impacts of various protocols used for video streaming (DASH)
compared to protocols for real-time services (WebRTC). Subjectively, the study [9] uses
algorithms to analyze facial and speech features to assess the MOS of audio-visual
conversations. Some conferencing application providers have researched user needs to
prioritize the performance of specific modalities, such as audio and screen sharing,
over others, like video quality [10]. We believe that by using the MOS indicator, a more
optimized strategy can be applied across all network segments to enhance user
experience.
R = Ro - Is - Id - Ie + A
Ro = the basic signal-to-noise ratio, based on send and receive loudness, electrical noise, and
background noise
Is = the simultaneous impairment factor, e.g., loudness, sidetone, and quantizing distortion
Id = the impairment from delay factors, e.g., talker echo, listener echo, and all delays (packetization,
de-jitter, etc.)
Ie = the equipment impairment factor for special equipment, e.g., codecs, loss concealment algorithm,
loss distribution, and bursts (determined subjectively for each codec, for each packet loss percentage)
A = the advantage factor, an adjustment for the advantage of access, e.g., mobile devices
Proper control of these parameters ensures satisfactory end-user voice quality and
therefore provides good QoE.
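The rating factor R is conventionally converted to an estimated MOS using the mapping given in ITU-T G.107. A minimal sketch of both steps follows; the default Ro of about 93.2 and the R-to-MOS formula are from G.107, while the example impairment values are assumptions for illustration.

```python
# Sketch of the G.107 E-model rating and the standard R-to-MOS mapping.
# The ~93.2 default Ro and the conversion formula follow ITU-T G.107;
# the example impairment values below are illustrative assumptions.

def e_model_r(ro=93.2, i_s=0.0, i_d=0.0, i_e=0.0, a=0.0):
    """Transmission rating factor R = Ro - Is - Id - Ie + A."""
    return ro - i_s - i_d - i_e + a

def r_to_mos(r):
    """ITU-T G.107 mapping from the R factor to an estimated MOS."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# Defaults (no impairments) give the well-known best-case MOS of ~4.41.
print(round(r_to_mos(e_model_r()), 2))  # -> 4.41

# Example: assumed codec impairment Ie=10 and delay impairment Id=8.
print(round(r_to_mos(e_model_r(i_e=10, i_d=8)), 2))
```

In planning practice, each network design alternative yields an R value, and the resulting MOS estimate is compared against the QoE target for the service.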
Figure 11: ITU-T G.107 QoE vs. QoS (user sensitivity to latency). The parameters used in the model
are shown here; the model has an interesting feature that accounts for the user profile in terms of
sensitivity to delay, e.g., a business conversation vs. a casual one.
For gaming applications, cloud gaming requires more stringent overall network
performance, namely bandwidth, latency (critical), and packet loss control, because
rendering is partially done in the cloud. The game type impacts encoding complexity
and sensitivity to delay and frame losses.
Application outcomes depend on the latency and loss of the packets transporting the
application. A cumulative distribution function (CDF) can capture these in a unified way.
Expressing application requirements (of the network) in this way is known as a
“Quantitative Timeliness Agreement” or QTA.
Thresholds on the CDF can be useful for expressing network capability (end-to-end and
per link), application requirements, and even Service Level Agreements (SLAs).
However, the exact “threshold” (e.g., 99%, 99.5%, 99.9% …, etc.) varies by application, for
example, control plane vs. user/data plane traffic.
Figure 13: “Quantitative Timeliness Agreement” (QTA) example. Source: Broadband Forum MR-452.2
In the QTA example in Figure 13, the blue line indicates that 50% of packets should
arrive within 5 ms and 95% within 10 ms, with a packet loss rate of 0.5%. The black line
depicts the measured network performance as a cumulative distribution function
(CDF). The timeliness requirement is satisfied since the measured CDF is to the left of
(i.e., better than) the specified requirements CDF. If it were not satisfied, the application’s
outcome would be at risk of not meeting the quality of experience (QoE) expectation.
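The check described above can be sketched as code: given measured per-packet latencies and a loss count, verify each percentile bound of the QTA. The function name and the synthetic sample data are illustrative; the thresholds are the ones from Figure 13.

```python
# Minimal sketch of checking measured latencies against the Figure 13
# QTA: 50% of packets within 5 ms, 95% within 10 ms, loss <= 0.5%.

def meets_qta(latencies_ms, lost_count, qta):
    """qta: percentile/latency bounds plus a maximum loss rate."""
    total = len(latencies_ms) + lost_count
    if lost_count / total > qta["max_loss"]:
        return False
    ordered = sorted(latencies_ms)
    for quantile, bound_ms in qta["points"]:
        idx = int(quantile * total)
        # A lost packet counts as arriving later than any bound.
        if idx >= len(ordered) or ordered[idx] > bound_ms:
            return False
    return True

qta = {"points": [(0.50, 5.0), (0.95, 10.0)], "max_loss": 0.005}

good = [3.0] * 600 + [8.0] * 395   # 995 delivered packets (synthetic)
print(meets_qta(good, lost_count=5, qta=qta))  # -> True
```

Note that lost packets are folded into the CDF as "infinitely late" arrivals, which matches the way the QTA treats loss as the tail of the latency distribution.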
Table 6 presents a set of QoS metrics from developers working with Vodafone for
online gaming.
Table 7 below illustrates how developers’ requirements are represented as QTAs for
Video on Demand.
Quality of Outcome (QoO) quantifies the gap between application requirements and
actual measured network performance within the QTA. This allows application
developers to understand the quality users can expect during a network session. If
needed, they can adapt application behavior to optimize user experience based on
network constraints. Rather than using calculus to calculate the area between required
and measured performance, QoO approximates this by analyzing key percentiles in the
CDF. It measures how close the performance is to a threshold that ensures a great
application outcome versus one that results in a poor outcome. Ultimately, it simplifies
this into a percentage that quantifies the probability of a successful application
outcome and, therefore, the user experience (QoE).
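The percentile-based approximation described above can be illustrated with a simplified score: for each percentile, measure where the observed latency sits between a "perfect" threshold and an "unusable" threshold, then take the worst percentile as the session score. The thresholds and the linear interpolation below are illustrative assumptions; the normative QoO definition is in the BBF and IETF documents cited next.

```python
# Simplified illustration of the QoO idea (not the normative formula):
# score each percentile between a "perfect" and an "unusable" latency
# threshold, then report the worst percentile as a 0-100 percentage.

def qoo_score(measured, perfect, unusable):
    """measured/perfect/unusable: dicts of percentile -> latency in ms."""
    scores = []
    for p, m in measured.items():
        lo, hi = perfect[p], unusable[p]
        if m <= lo:
            scores.append(100.0)   # at or better than the perfect bound
        elif m >= hi:
            scores.append(0.0)     # at or worse than the unusable bound
        else:
            scores.append(100.0 * (hi - m) / (hi - lo))
    return min(scores)

perfect = {50: 5.0, 95: 10.0}    # requirement for a great outcome (assumed)
unusable = {50: 25.0, 95: 50.0}  # beyond this the outcome fails (assumed)
measured = {50: 5.0, 95: 20.0}

print(qoo_score(measured, perfect, unusable))  # -> 75.0
```

Collapsing the worst percentile into a single percentage is what lets an application developer treat QoO as "probability of a successful outcome" for the session.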
QTA and QoO are detailed in the following Broadband Forum (BBF) and Internet
Engineering Task Force (IETF) references:
BBF: [Link]
IETF: [Link]
[Link]
Networks are becoming more complex. Disaggregation in the Radio Access Network
means that more components can suffer impairments that impact QoE. The networks
are also more dynamic, with network functions and service delivery components
spinning up in different locations depending on demand and conditions. New radio
technologies mean that radio resource control can fail in new ways. These
technological evolutions have many benefits, but they also come with the challenge of
detecting problems, pinpointing them when they occur, and knowing how to fix them.
Another trend is the increased richness and immersion of many Metaverse services.
These services are becoming more susceptible to temporary issues; even a slight delay
can greatly impact the Quality of Service (QoS) and Quality of Experience (QoE) of
specific Metaverse applications. Traditional network monitoring collects Key
Performance Indicators (KPIs) over time, which can mask temporary impairments that
affect QoE during these collection intervals. More detailed measurement aggregation
can help in detecting these transient issues. Furthermore, impairments must be
quickly identified to ensure fast resolutions and minimal impact on Service Level
Agreements (SLAs). Consequently, lower latency in collecting and analyzing QoS
measurements is likely necessary. The increased data needed for swiftly identifying
issues can lead to high costs associated with generating, managing, and analyzing this
data. Possible solutions include distributed QoS analysis to identify key service metrics,
along with anomaly detection to trigger detailed QoS measurements only when
necessary for diagnostics.
Figure 14: types of measurements and measurement points mapped onto a mobile network
architecture.
Figure 14 illustrates the types of measurements and endpoints mapped onto a mobile
architecture. The measurements may relate to a specific service user, a group of service
users (such as users of a network slice), a flow between two endpoints, or aggregated
data on a connection. Measurements can be taken at specific points in the network,
such as the mobile endpoint device, the radio link, the RAN network functions, the
transport layer, the core network, or the application server. They may focus on a specific
point in the network or involve coordinated measurements between two or more
locations (for instance, using the Two-Way Active Measurement Protocol (TWAMP)).
Figure 15: QoS impairment factors mapped to a Metaverse architecture highlight the main
contributors, including end-user devices (capture and replay), network, and edge/cloud
computing. Telemetry at both the application and network levels is becoming a strategic and
essential metric for traffic, customer retention, operations, troubleshooting, and capacity
planning. Transport protocols like QUIC and BBR impact traffic volume efficiency.
As a result, rich immersive services can be built on top of NaaP, ensuring that the QoS
characteristics defined by the target QoE for that service are met. This area is evolving
with the GSMA Open Gateway initiative, which develops and publishes APIs through
the Linux Foundation CAMARA open-source project. This effort produces a set of APIs
that network operators can use for network interaction. For instance, the CAMARA
Connectivity Insights subproject enables developers to request performance-related
information about a network's capability to meet specific SLAs through a standardized
API.
As the industry converges on NaaP APIs, shaping the resulting APIs can lead to more
successful delivery of Metaverse services. For example, a network API that exposes the
QoO for a particular application requirement would quantify the probability of a
successful application outcome in terms of the target QoE for that network
connection. Based on this, a decision could be made on whether to offer that service in
general or on a specific occasion.
The active measurement method tracks the behavior of applications and end-users in
real time to determine network quality. It involves injecting test traffic at various
network points to emulate user or application traffic and measure its
performance. Because test traffic mimics service traffic, active testing is ideal for
providing a real-time view of end-to-end performance concerning latency (delay), jitter
(delay variation), and packet loss. It helps segment the network, providing an end-to-
end view, and validating and reporting on varying network path characteristics.
Examples of active probing include Ping, Traceroute, TWAMP Light, STAMP, IRTT,
varying latency under load tests, and simulating real traffic.
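The active-probing idea can be sketched with a minimal UDP round-trip prober in the spirit of TWAMP Light / STAMP. A local loopback reflector stands in for the far-end responder; the packet count, timeout, and the use of RTT standard deviation as "jitter" are illustrative simplifications, not a protocol implementation.

```python
# Minimal active-probe sketch: timestamped UDP test packets are echoed
# back and the sender derives round-trip latency, jitter, and loss.
import socket
import statistics
import struct
import threading
import time

def run_reflector(sock, count):
    """Echo each received probe back to its sender."""
    for _ in range(count):
        data, addr = sock.recvfrom(64)
        sock.sendto(data, addr)

def probe(n=20):
    reflector = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    reflector.bind(("127.0.0.1", 0))
    threading.Thread(target=run_reflector, args=(reflector, n),
                     daemon=True).start()

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.settimeout(0.5)
    rtts, lost = [], 0
    for seq in range(n):
        sent = time.monotonic()
        sender.sendto(struct.pack("!I", seq), reflector.getsockname())
        try:
            sender.recvfrom(64)  # late/out-of-order replies not handled
            rtts.append((time.monotonic() - sent) * 1000.0)  # ms
        except socket.timeout:
            lost += 1
    jitter = statistics.pstdev(rtts) if rtts else None
    return rtts, jitter, lost / n

rtts, jitter_ms, loss = probe()
print(f"mean RTT {statistics.mean(rtts):.3f} ms, "
      f"jitter {jitter_ms:.3f} ms, loss {loss:.1%}")
```

Real deployments replace the loopback reflector with a remote TWAMP/STAMP responder and use protocol-defined timestamps so that one-way delays can also be derived.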
Passive measurements involve capturing and observing live traffic between hosts
without injecting additional test traffic into the network. ITU-T Y.1540, 3GPP TS 26.234,
ITU-T P.1203, and IETF RFC 6703 are some notable application QoE measurement
standards that provide guidelines for active and passive monitoring.
Figure 17: BBF TR 452 Quality Attenuation: The G (Geographic) component is related to
propagation delay, which is determined by physical distance and the speed of light. The S
(Serialization) component arises from clocking packets in and out of network nodes. The V
(Variable) component is due to queuing, buffering, and scheduling, which are impacted by
network load.
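The G/S/V split can be approximated from measured delays: the minimum delay per packet size estimates the G + S floor, the size-dependent slope of that floor estimates S, and whatever remains is attributed to V. The sketch below uses synthetic two-size samples; production implementations use many packet sizes and robust fitting.

```python
# Sketch of splitting measured delays into TR-452-style components:
# G (size-independent propagation intercept), S (serialization slope
# proportional to packet size), and V (queuing/scheduling remainder).

def decompose_gsv(samples):
    """samples: list of (packet_size_bytes, delay_ms) measurements."""
    # Minimum delay per packet size approximates the G + S(size) floor,
    # i.e., the delay observed when there is no queuing.
    floor = {}
    for size, delay in samples:
        floor[size] = min(delay, floor.get(size, float("inf")))
    sizes = sorted(floor)
    s_small, s_large = sizes[0], sizes[-1]
    slope = (floor[s_large] - floor[s_small]) / (s_large - s_small)
    g = floor[s_small] - slope * s_small          # size-independent part
    v = [delay - (g + slope * size)               # per-sample queuing part
         for size, delay in samples]
    return g, slope, v

# Synthetic samples: (size in bytes, one-way delay in ms)
samples = [(100, 2.1), (100, 2.6), (1500, 3.5), (1500, 4.4)]
g, s_per_byte, v = decompose_gsv(samples)
print(g, s_per_byte, v)
```

Because V isolates the load-dependent part of quality attenuation, tracking it separately from G and S makes congestion-induced QoE risk visible even when average delay looks acceptable.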
Unlike traditional video streaming (YouTube™, Netflix™, etc.), the new features
brought by short-form video (SFV) services impact QoE to varying degrees:
• Short video contents are pre-loaded according to the recommendation
algorithm. An optimized pre-loading strategy should be applied to satisfy the
user experience with a reasonable loading time while not over-preloading
content, which causes network congestion and thus impacts QoE.
• End-users frequently scroll or swipe their screens in a short period, even if they
haven’t finished watching the entire video. Therefore, Quality of Experience
(QoE) models should consider the length of the video, as users will be more
sensitive to initial loading times and buffering events when the video’s content is
brief.
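The pre-loading trade-off above can be sketched as a budget-limited policy: pre-load only the first chunk of the next few recommended items, capped by a byte budget, balancing fast start-up against wasted downloads when the user swipes away. The function name, chunk sizes, and budget are assumed values for illustration.

```python
# Illustrative pre-loading policy sketch for short-form video: pre-load
# first chunks of upcoming recommendations under a byte budget.

def plan_preload(recommended, first_chunk_bytes, budget_bytes, max_items=3):
    """recommended: video ids in ranking order. Returns ids to pre-load."""
    plan, spent = [], 0
    for vid in recommended[:max_items]:
        cost = first_chunk_bytes[vid]
        if spent + cost > budget_bytes:
            break  # stop rather than congest the link with speculation
        plan.append(vid)
        spent += cost
    return plan

chunks = {"a": 400_000, "b": 600_000, "c": 500_000, "d": 300_000}
print(plan_preload(["a", "b", "c", "d"], chunks, budget_bytes=1_200_000))
# -> ['a', 'b'] (adding 'c' would exceed the 1.2 MB budget)
```

In practice the budget itself would adapt to measured network conditions and to how quickly the user tends to swipe, which is exactly the QoE tension the bullet above describes.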
| Category | Metric | Definition / Measurement |
|---|---|---|
| Audio quality | Audio MOS | Codec type, bit rate; PESQ, POLQA, E-model R factor (ITU-T G.107) |
| Timeliness | Click-to-play time (CTPT) | Measured on the client: the interval between the time a user clicks a video and the time the video starts to play on the screen |
| Timeliness | Play success rate n (PSRn) | Percentage of SFV views with a CTPT of less than n seconds |
| Timeliness | Stalls | Measured on the client side per viewing session by some combination of: (1) number of stalls (longer than xxx ms); (2) total time of stalls (milliseconds); (3) mean time between stalls during a session |
| Temporal quality | Fluidity | Measured on the client by the number of frames per second |
| Temporal quality | Synchronicity | Measured on the client side per viewing session by some combination of: (1) number of audio/video out-of-sync events; (2) total time of audio/video out-of-sync; (3) mean time between audio/video out-of-sync events |
| Spatial quality | Video fidelity | No reference: under development. Full reference: PSNR, SSIM, or VMAF |
| Context** | Client device, location | Display resolution and audio fidelity, mobile or stationary, network type |
| Human factors | User rating | Users are asked, during or after their viewing, to rate their satisfaction on a scale of 1-5 |
| Human factors | Content interest | |

Table 8: QoE metrics under consideration for short-form video
** The context may impact the target values of the QoE metrics (i.e., what counts as an acceptable/good/excellent QoE) but is not itself a QoE metric.
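The timeliness rows of Table 8 can be computed directly from client-side session logs. A minimal sketch follows; the log format, the n = 1 s threshold, and the sample values are illustrative assumptions.

```python
# Sketch of deriving Table 8 timeliness metrics from client-side logs:
# click-to-play time (CTPT), play success rate PSRn, and stall stats.

def psr_n(ctpt_seconds, n):
    """Share of views whose click-to-play time is under n seconds."""
    return sum(1 for t in ctpt_seconds if t < n) / len(ctpt_seconds)

def stall_stats(stall_durations_ms, session_ms):
    """Stall count, total stall time, and mean time between stalls."""
    count = len(stall_durations_ms)
    total = sum(stall_durations_ms)
    mean_between = session_ms / (count + 1) if count else session_ms
    return count, total, mean_between

ctpt = [0.4, 0.8, 1.6, 0.5]           # seconds, one value per view
print(psr_n(ctpt, n=1))                # -> 0.75
print(stall_stats([250, 400], session_ms=30_000))  # -> (2, 650, 10000.0)
```

Aggregating these per-session values across the user base gives the network-level distributions that the monitoring pipeline in step 4 would track over time.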
Short-form video requires substantial backend effort, including streaming and the AI
inference engine. This setup ensures smooth playback and provides the content that
users enjoy, helping to maintain their attention and retention. The network is crucial for
these aspects, and monitoring at both the application and network levels has become
a strategic metric for traffic analysis, customer retention, operations, troubleshooting,
and capacity planning. Furthermore, transport protocols like QUIC and BBR affect
traffic volume efficiency and should also be considered when delivering these services.
Enabled by rich media such as volumetric video data and network performance levels
that allow for scalability, these digital spaces are filled with content that can be fully
interacted with. In contrast, 2D video conferencing struggles with disconnects, such as
a lack of detail or difficulty in achieving proper perspective, making it challenging to
capture a narrative's essence. This often leads to inadequate productivity and
emotional fulfillment.
The following tables present the initial Quality of Experience (QoE) requirements for
this implementation. These requirements cover system components and various QoE
indicators aimed at positively influencing user experience. The proposed values mainly
focus on providing satisfactory experience levels. Advanced values, where proposed,
are based on industry trends and reflect highly desirable QoE levels. Testing and data
collection will further identify or validate these advanced values.
Table 9: QoE Influencing Requirements for the “Consumption” Stage of Volumetric Video-based
Live and Real-Time Telepresence Use Case. Source: T-Mobile Research
Transmitting volumetric video content requires large amounts of data. Live streaming
this content therefore needs a network capable of handling the large bandwidth
necessary for multiple streaming objects within a scene. Fast and efficient
compression algorithms are essential, and the network should also provide low latency
to enable rapid interaction.
Conclusions
Delivering an improved product user experience (QoE) will ensure customer
satisfaction and market success while enhancing the ecosystem and value proposition
for all partners. For any multimedia service to succeed, it is essential to plan from an
end-to-end perspective to identify the critical QoE and QoS factors that influence
success. A proposed top-down, user-centric design approach includes an end-to-end
system that establishes perceptually based QoE targets and maps the corresponding
QoS impacts specific to different use cases. Additionally, it features standardized and
practical methods for ongoing measurement and monitoring of user impact on quality
aspects.
Using QoE requirements to guide network engineering and design offers two key
benefits: (1) network design targets focus on user needs and experiences, making
services more appealing to potential users; (2) it prevents over- or under-engineering,
enabling providers to deliver high-quality content efficiently. QoE is also connected to
application and network layer attributes and their associated QoS through a
framework utilizing HRX models, which we reviewed in detail within the top-down
framework.
References
[1] QUTY: Towards Better Understanding and Optimization of Short Video
Quality. H Zhang, Y Ban, Z Guo, Z Xu, Q Ma, Y Wang, X Zhang, Proceedings of
the 14th Conference on ACM Multimedia Systems, 2023
[2] [Link]
[3] QUALINET White Paper on Definitions of Immersive Media Experience (IMEx),
Perkis, A., Timmerer, C., et al., European Network on Quality of Experience in
Multimedia Systems and Services (QUALINET)
[4] ITU-T P.1301 Subjective quality evaluation of audio and audiovisual multiparty
telemeetings (2017)
[5] ITU-T P.1305 Effect of delays on telemeeting quality (2016)
[6] ITU-T P.1310 Spatial audio meetings quality evaluation (2017)
[7] ITU-T P.1312 Method for the measurement of the communication effectiveness
of multiparty telemeetings (2016)
[8] Y. Maehara and T. Nunome, “WebRTC-Based Multi-View Video and Audio
Transmission and its QoE,” 2019 International Conference on Information
Networking (ICOIN), Jan. 2019, pp. 181-186.
[9] G. Bingöl, S. Porcu, A. Floris, and L. Atzori, “QoE Estimation of WebRTC-based
Audio-visual Conversations from Facial and Speech Features,” ACM Trans.
Multimedia Comput. Commun. Appl., p. 3638251
[10] Optimizing Performance of Zoom in Low Bandwidth Environments
[11] UVQ - Measuring YouTube's Perceptual Video Quality, 2022 Yilin Wang, Staff
Software Engineer, YouTube and Feng Yang, Senior Staff Software Engineer,
Google Research
[12] Two-level approach for no-reference consumer video quality assessment, J
Korhonen, IEEE Transactions on Image Processing, 2019
[13] Efficient Measurement of Quality at Scale in Facebook Video Ecosystem, SPIE
Optics + Photonics, Meta Research 2020
[14] Quantifying the value of 5G and edge cloud on QoE for AR/VR. B Krogfoss, J
Duran, P Perez. 2020. [Link]
[15] A survey on QoE-oriented VR video streaming: Some research issues and
challenges. J Ruan, D Xie - Electronics, 2021 - [Link]
[16] Quality of experience in telemeetings and videoconferencing: a
comprehensive survey. J Skowronek, A Raake, GH Berndtsson, OS
Rummukainen, P Usai, SNB Gunkel. 2022 [Link]
[17] On additive and multiplicative QoS-QoE models for multiple QoS
parameters. T. Hossfeld, L. Skorin-Kapov, P. E. Heegaard. 2016.
[Link]
[18] Why no reference metrics for image and video quality lack accuracy and