2011 IEEE International Symposium on Multimedia, 2011
This paper addresses the challenge of assessing and modeling Quality of Experience (QoE) for online video services that are based on TCP streaming. We present a dedicated QoE model for YouTube that takes into account the key influence factors (such as stalling events caused by network bottlenecks) that shape quality perception of this service. As a second contribution, we propose a generic subjective QoE assessment methodology for multimedia applications (like online video) that is based on crowdsourcing, a highly cost-efficient, fast, and flexible way of conducting user experiments. We demonstrate how our approach successfully leverages the inherent strengths of crowdsourcing while addressing critical aspects such as the reliability of the experimental data obtained. Our results suggest that crowdsourcing is a highly effective QoE assessment method not only for online video, but also for a wide range of other current and future Internet applications.
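Models of this kind typically map the number and duration of stalling events to a mean opinion score (MOS) via an exponential decay. Below is a minimal Python sketch of such a mapping; the functional form follows the common exponential (IQX-style) hypothesis for stalling-driven QoE, and the coefficients are illustrative placeholders, not the values fitted in this paper.

```python
import math

def stalling_mos(num_stalls: int, stall_len_s: float,
                 a: float = 3.5, b: float = 0.15, c: float = 0.19) -> float:
    """Map stalling events to a 1..5 MOS with an exponential decay.

    The exponential form follows the widely used IQX-style hypothesis for
    stalling-driven QoE; the coefficients a, b, c are illustrative
    placeholders, not the values fitted in the paper.
    """
    mos = a * math.exp(-(b * stall_len_s + c) * num_stalls) + 1.5
    return max(1.0, min(5.0, mos))

# Example: two 3-second stalls during playback
print(round(stalling_mos(2, 3.0), 2))
```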
Proceedings of the 17th ACM international conference on Multimedia, 2009
Until recently, QoE (Quality of Experience) experiments had to be conducted in academic laboratories; however, with the advent of ubiquitous Internet access, it is now possible to ask an Internet crowd to conduct experiments on their personal computers. Since such a crowd can be quite large, crowdsourcing enables researchers to conduct experiments with a more diverse set of participants at a lower economic cost than would be possible under laboratory conditions. However, because participants carry out experiments without supervision, they may give erroneous feedback perfunctorily, carelessly, or dishonestly, even if they receive a reward for each experiment. In this paper, we propose a crowdsourceable framework to quantify the QoE of multimedia content. The advantages of our framework over traditional MOS ratings are: 1) it enables crowdsourcing because it supports systematic verification of participants' inputs; 2) the rating procedure is simpler than that of MOS, so there is less burden on participants; and 3) it derives interval-scale scores that enable subsequent quantitative analysis and QoE provisioning. We conducted four case studies, which demonstrated that, with our framework, researchers can outsource their QoE evaluation experiments to an Internet crowd without risking the quality of the results; and at the same time, obtain a higher level of participant diversity at a lower monetary cost.
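The paired-comparison data collected by such a framework is commonly converted to interval-scale scores with a model such as Bradley-Terry. The Python sketch below shows a generic Bradley-Terry estimator, assuming a matrix of pairwise win counts as input; it illustrates how interval-scale scores can be derived and is not the authors' exact procedure.

```python
import numpy as np

def bradley_terry(wins: np.ndarray, n_iter: int = 200) -> np.ndarray:
    """Derive interval-scale quality scores from paired-comparison counts.

    wins[i, j] = number of times stimulus i was preferred over stimulus j.
    Uses the standard Bradley-Terry minorization-maximization update; the
    log of the fitted worth parameters forms an interval scale. Generic
    estimator sketch, not the authors' exact procedure.
    """
    n = wins.shape[0]
    total = wins + wins.T                # comparisons per pair
    p = np.ones(n)                       # worth parameters
    for _ in range(n_iter):
        for i in range(n):
            mask = np.arange(n) != i
            denom = (total[i, mask] / (p[i] + p[mask])).sum()
            p[i] = wins[i].sum() / denom
        p /= p.sum()                     # fix the scale ambiguity
    return np.log(p)

# Example: three test conditions, 20 comparisons per pair
wins = np.array([[0, 15, 18],
                 [5,  0, 12],
                 [2,  8,  0]])
print(bradley_terry(wins))
```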
Computer Science and Information Systems
This paper presents a novel web-based crowdsourcing platform for the assessment of the subjective and objective quality of experience (QoE) of the video service in the cloud-server environment. The user has the option to enter subjective QoE data for the video service by filling out a web questionnaire. The objective QoE data of the cloud server, network condition, and the user device is automatically captured by the crowdsourcing platform. Our proposed system collects both objective and subjective QoE simultaneously in real time. The paper presents the key technologies used in the development of the platform and describes the functional requirements and design ideas of the system in detail. The system collects comprehensive data in real time, providing a valuable reference for enhancing the quality of the user experience. The system is tested in a real-time environment and the test results are given in terms of system performance. The crowdsourcing platform has new features of real-time ...
Proceedings of the 3rd workshop on Mobile video delivery - MoViD '10, 2010
The scope of this paper is the interdisciplinary measurement and modeling of Quality of Experience (QoE) related to mobile YouTube video streaming in a Living Lab environment. The paper introduces the implementation of a QoE measurement framework on the Android platform and discusses results from a first study using this framework. In this respect, a multi-dimensional QoE prediction model consisting of both objective and subjective parameters is presented. In this model, the test users' evaluations of the content, picture quality, sound quality, fluidness, and loading speed of streamed videos are taken into account and related to a set of objective parameters. To our knowledge, this model is the first to include unlimited, realistic video content. We found that the content has the largest influence on the QoE of online recommended video content in a mobile context.
Multimedia Tools and Applications, 2016
International Journal of Advanced Trends in Computer Science and Engineering, 2021
This study aims to determine the user's satisfaction level with online streaming when using different web browsers. At the client layer, the assessment of the user's QoE is conducted by evaluating the performance of three web browsers (Google Chrome, Mozilla Firefox, and Internet Explorer). We performed the subjective test by conducting different experiments with users and asking them to assign ratings on the provided questionnaires; from those ratings, we calculated results in the form of a Mean Opinion Score.
2011
HTTP video streaming, such as Flash video, is widely deployed to deliver stored media. Owing to TCP's reliable service, the picture and sound quality would not be degraded by network impairments, such as high delay and packet loss. However, the network impairments can cause rebuffering events which would result in jerky playback and deform the video's temporal structure. These quality degradations could adversely affect users' quality of experience (QoE). In this paper, we investigate the relationship among three levels of quality of service (QoS) of HTTP video streaming: network QoS, application QoS, and user QoS (i.e., QoE). Our ultimate goal is to understand how the network QoS affects the QoE of HTTP video streaming. Our approach is to first characterize the correlation between the application and network QoS using analytical models and empirical evaluation. The second step is to perform subjective experiments to evaluate the relationship between application QoS and QoE. Our analysis reveals that the frequency of rebuffering is the main factor responsible for the variations in the QoE.
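A simple playout-buffer model makes the link between network QoS and application QoS concrete: rebuffering events occur whenever the buffer drains faster than it fills. The Python sketch below is an illustrative simulation under assumed parameters (a fixed video bitrate and a fixed restart threshold), not the analytical models used in the paper.

```python
def simulate_rebuffering(throughput_kbps, video_bitrate_kbps=800,
                         startup_buffer_s=2.0, step_s=1.0):
    """Count rebuffering events for non-adaptive HTTP streaming.

    throughput_kbps: per-second network throughput samples.
    A stall begins when the playout buffer empties; playback resumes once
    startup_buffer_s seconds of media are buffered again. Illustrative
    model only; parameter values are assumptions.
    """
    buffer_s = 0.0
    playing = False
    stalls = 0
    for tp in throughput_kbps:
        buffer_s += (tp / video_bitrate_kbps) * step_s  # media downloaded
        if playing:
            buffer_s -= step_s                          # media consumed
            if buffer_s <= 0:
                buffer_s = 0.0
                playing = False
                stalls += 1                             # rebuffering event
        elif buffer_s >= startup_buffer_s:
            playing = True
    return stalls

# Throughput (kbps) drops well below the 800 kbps bitrate for 30 s
trace = [1200] * 20 + [200] * 30 + [1200] * 10
print(simulate_rebuffering(trace))
```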
International Journal of …, 2010
Understanding how quality is perceived by the viewers of multimedia streaming services is essential for their management. Quality of Experience (QoE) is a subjective metric that quantifies the perceived quality and is therefore crucial in the process of optimizing the tradeoff between quality and resources. However, accurate estimation of QoE usually entails cumbersome subjective studies that are long and expensive to execute. This paper presents a QoE estimation methodology for developing Machine Learning prediction models based on initial restricted-size subjective tests. Experimental results on subjective data from streaming multimedia tests show that the Machine Learning models outperform other statistical methods, achieving accuracy greater than 90%. These models are suitable for real-time use due to their small computational complexity. Even though they have high accuracy, these models are static and cannot adapt to changes in the environment. To maintain the accuracy of the prediction models we have adopted Online Learning techniques that update the models on data from subjective viewer feedback. Overall, this method provides accurate and adaptive QoE prediction models that can become an indispensable component of a QoE-aware management service.
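As a rough illustration of combining an initial model with Online Learning updates, the following Python sketch trains a regressor on a handful of lab ratings and then adapts it incrementally as viewer feedback arrives. The feature set, the rating values, and the choice of scikit-learn's SGDRegressor are stand-ins of mine; the paper's Machine Learning models are more sophisticated.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

# Per-session features, e.g. [bitrate_kbps, stall_count, total_stall_s];
# targets are MOS ratings from an initial restricted-size subjective test.
# Features and values are illustrative, not the paper's.
X0 = np.array([[800, 0, 0.0], [600, 1, 2.0], [400, 2, 6.0], [100, 4, 12.0]])
y0 = np.array([4.5, 3.9, 3.0, 1.6])

scaler = StandardScaler().fit(X0)        # SGD needs standardized inputs
model = SGDRegressor(random_state=0)
model.fit(scaler.transform(X0), y0)      # static model from lab data

# Online Learning step: adapt as new viewer feedback arrives.
x_new, y_new = np.array([[600, 0, 0.0]]), np.array([4.2])
model.partial_fit(scaler.transform(x_new), y_new)

print(model.predict(scaler.transform(np.array([[700, 1, 3.0]]))))
```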
IEEE Transactions on Multimedia, 2014
Quality of Experience (QoE) in multimedia applications is closely linked to the end users' perception and therefore its assessment requires subjective user studies in order to evaluate the degree of delight or annoyance as experienced by the users. QoE crowdtesting refers to QoE assessment using crowdsourcing, where anonymous test subjects conduct subjective tests remotely in their preferred environment. The advantages of QoE crowdtesting lie not only in the reduced time and costs for the tests, but also in a large and diverse panel of international, geographically distributed users in realistic user settings. However, conceptual and technical challenges emerge due to the remote test settings. Key issues arising from QoE crowdtesting include the reliability of user ratings, the influence of incentives, payment schemes and the unknown environmental context of the tests on the results. In order to counter these issues, strategies and methods need to be developed, included in the test design, and also implemented in the actual test campaign, while statistical methods are required to identify reliable user ratings and to ensure high data quality. This contribution provides a collection of best practices addressing these issues based on our experience gained in a large set of conducted QoE crowdtesting studies. The focus of this article is in particular on the issue of reliability and we use video quality assessment as an example for the proposed best practices, showing that our recommended two-stage QoE crowdtesting design leads to more reliable results.
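One widely used reliability check in such campaigns compares each worker's ratings on hidden gold-standard items against the expected scores. The Python sketch below implements this single screen under assumed data structures; the best practices described in the paper combine several such checks within a two-stage design.

```python
import numpy as np

def reliable_workers(ratings, gold, min_corr=0.6):
    """Flag crowdworkers whose ratings track gold-standard items.

    ratings: {worker_id: [scores on gold-standard clips]}
    gold: expected scores for those clips (e.g. from a lab test).
    One simple reliability screen of the kind recommended for QoE
    crowdtesting; data structures and threshold are assumptions.
    """
    gold = np.asarray(gold, dtype=float)
    keep = {}
    for worker, scores in ratings.items():
        scores = np.asarray(scores, dtype=float)
        if scores.std() == 0:            # constant rater: no signal
            keep[worker] = False
            continue
        r = np.corrcoef(scores, gold)[0, 1]
        keep[worker] = bool(r >= min_corr)
    return keep

gold = [4.5, 3.0, 1.5]
ratings = {"w1": [5, 3, 1], "w2": [2, 2, 2], "w3": [1, 3, 5]}
print(reliable_workers(ratings, gold))   # w1 kept, w2/w3 rejected
```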
Multimedia streaming over HTTP has gained momentum with the approval of the MPEG-DASH standard and many research papers evaluated various aspects thereof, but mainly within controlled environments. However, the actual behaviour of a DASH client within real-world environments has not yet been evaluated. The aim of this paper is to compare the QoE performance of existing DASH-based Web clients within real-world environments using crowdsourcing. Therefore, we select Google's YouTube player and two open source implementations of the MPEG-DASH standard, namely the DASH-JS from Alpen-Adria-Universitaet Klagenfurt and the dash.js which is the official reference client of the DASH Industry Forum. Based on a predefined content configuration, which is comparable among the clients, we run a crowdsourcing campaign to determine the QoE of each implementation in order to determine the current state-of-the-art for MPEG-DASH systems within real-world environments. The gathered data and its analysis are presented in the paper, providing insights with respect to the QoE performance of current Web-based adaptive HTTP streaming systems.
2020
Although expensive, the most reliable measure of user perception is direct human input about the quality of a stimulus. In our previous studies, we observed some subjects getting bored and losing focus when rating many short video clips during subjective quality assessments. Moreover, psychological effects, i.e., user delight, frequency of watching online videos (experience), mood, etc., must not influence the user Mean Opinion Score (MOS) for determining the quality of the shown stimuli. In this paper, we investigate the impact of user delight, frequency of watching online video content (experience), and different mood levels on MOS for streamed video stimuli in various network conditions through subjective quality assessments. We observed a slight tendency towards better scores when the user likes the stimulus. However, our results show that if the subjective assessments are conducted by carefully following the guidelines, the users impartially rate the video stimuli solely based on the quality artifacts, irrespective of their delight towards the shown content. Although we observed an effect of user mood on MOS ratings for almost all the stimuli, the results suggest the need for a more detailed study, i.e., with a larger and more diverse set of subjects, to obtain statistical significance.
Quality-of-Experience (QoE) is a human-centric notion that produces the blueprint of human perception, feelings, needs, and intentions, while Quality-of-Service (QoS) is a technology-centric metric used to assess the performance of a multimedia application and/or network. To ensure superior video QoE, it is important to understand the relationship between QoE and QoS. To achieve this goal, we conducted a pilot subjective user study simulating a video streaming service over a broadband network with varying distortion scenarios, namely packet losses (0, 0.5, 1, 3, 7, and 15%), packet reorder (0, 1, 5, 10, 20, and 30%), and coding bit rates (100, 400, 600, and 800 Kbps). Users were asked to rate their experience using a subjective quantitative metric (termed Perceived Video Quality, PVQ) and qualitative indicators of "experience." Simulation results suggest a) an exponential relationship between PVQ and packet loss and between PVQ and packet reorder, and b) a logarithmic relationship between PVQ and video bit rate. Similar trends were observed with the qualitative indicators. Exploratory analysis with two objective video quality metrics suggests that trends similar to those obtained with the subjective ratings were obtained, particularly with a full-reference metric.
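The reported trends can be reproduced in form with a standard curve fit. The Python sketch below fits an exponential model to PVQ-versus-packet-loss data and a logarithmic model to PVQ-versus-bitrate data; the rating values used are hypothetical, chosen only to demonstrate the fitting procedure, and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ratings illustrating the reported trends; not the study's data.
loss_pct = np.array([0.0, 0.5, 1.0, 3.0, 7.0, 15.0])
pvq_loss = np.array([4.6, 4.1, 3.7, 2.9, 2.0, 1.3])

bitrate_kbps = np.array([100.0, 400.0, 600.0, 800.0])
pvq_rate = np.array([2.1, 3.4, 3.8, 4.1])

def exp_model(x, a, b, c):
    return a * np.exp(-b * x) + c        # exponential decay with packet loss

def log_model(x, a, b):
    return a * np.log(x) + b             # logarithmic growth with bit rate

p_loss, _ = curve_fit(exp_model, loss_pct, pvq_loss, p0=(3.5, 0.2, 1.0))
p_rate, _ = curve_fit(log_model, bitrate_kbps, pvq_rate)

print("PVQ(loss) ~ %.2f*exp(-%.2f*x) + %.2f" % tuple(p_loss))
print("PVQ(rate) ~ %.2f*ln(x) + %.2f" % tuple(p_rate))
```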
IEEE Transactions on Multimedia, 2013
Crowdsourcing has emerged in recent years as a potential strategy to enlist the general public to solve a wide variety of tasks. With the advent of ubiquitous Internet access, it is now feasible to ask an Internet crowd to conduct QoE (Quality of Experience) experiments on their personal computers in their own residences rather than in a laboratory. The considerable size of the Internet crowd allows researchers to crowdsource their experiments to a more diverse participant pool at a relatively low economic cost. However, as participants carry out experiments without supervision, the uncertainty of the quality of their experiment results is a challenging problem. In this paper, we propose a crowdsourceable framework to quantify the QoE of multimedia content. To overcome the aforementioned quality problem, we employ a paired comparison method in our framework. The advantages of our framework are: 1) trustworthiness due to the support for cheat detection; 2) a simpler rating procedure than that of the commonly-used but more difficult mean opinion score (MOS), which places less burden on participants; 3) economic feasibility since reliable QoE measures can be acquired with less effort compared with MOS; and 4) generalizability across a variety of multimedia content. We demonstrate the effectiveness and efficiency of the proposed framework by a comparison with MOS. Moreover, the results of four case studies support our assertion that the framework can provide reliable QoE evaluation at a lower cost.
2011
Increasing requirements on video quality seem to be essential when designing any video-oriented service. Methods for the user-centered design of services are fairly labor-intensive and must consider the resulting value of the user experience. However, user experience is a term that is currently very hard to define. There are different approaches to user experience assessment, which lack an ultimate method to predict expected user experience. In this article, we introduce a system that enables web service providers to measure the quality of service provided to end-users while playing online video content that is accessed via HTTP progressive streaming. This tool is also suitable for future educational purposes in the field of video quality evaluation.
2021
This paper aims to improve video streaming by leveraging a simple observation: users are more sensitive to low quality in certain parts of a video than in others. For instance, rebuffering during key moments of a sports video (e.g., before a goal is scored) is more annoying than rebuffering during normal gameplay. Such dynamic quality sensitivity, however, is rarely captured by current approaches, which predict QoE (quality-of-experience) using one-size-fits-all heuristics that are too simplistic to understand the nuances of video content. Instead of proposing yet another heuristic, we take a different approach: we run a separate crowdsourcing experiment for each video to derive users' quality sensitivity at different parts of the video. Of course, the cost of doing this at scale can be prohibitive, but we show that careful experiment design combined with a suite of pruning techniques can make the cost negligible compared to how much content providers invest in content generation.
2011
HTTP video streaming, employed by most of the video-sharing websites, allows users to control the video playback using, for example, pausing and switching the bit rate. These user-viewing activities can be used to mitigate the temporal structure impairments of the video quality. On the other hand, other activities, such as mouse movement, do not help reduce the impairment level. In this paper, we have performed subjective experiments to analyze user-viewing activities and correlate them with network path performance and user quality of experience. The results show that network measurement alone may miss important information about user dissatisfaction with the video quality. Moreover, video impairments can trigger user-viewing activities, notably pausing and reducing the screen size. By including the pause events into the prediction model, we can increase its explanatory power.
2014 IEEE International Conference on Communications (ICC), 2014
Since its introduction a few years ago, the concept of 'Crowdsourcing' has been heralded as a highly attractive alternative approach to evaluating the Quality of Experience (QoE) of networked multimedia services. The main reason is that, in comparison to traditional laboratory-based subjective quality testing, crowd-based QoE assessment over the Internet promises to be not only much more cost-effective (no lab facilities required, lower cost per subject) but also much faster in terms of shorter campaign setup and turnaround times.
Advances in Human- …
Managing multimedia network services in a User-centric manner delivers more quality to the users while maintaining a limited footprint on the network resources. For efficient User-centric management it is imperative to have a precise metric for perceived quality. Quality of Experience (QoE) is such a metric, which captures many different aspects that compose the perception of quality. The drawback of using QoE is that, due to its subjectiveness, accurate measurement necessitates the execution of cumbersome subjective studies. In this work we propose a method that uses Machine Learning techniques to build QoE prediction models based on limited subjective data. Using those models, we have developed an algorithm that generates remedies for improving the QoE of an observed multimedia stream. Selecting the optimal remedy is done by comparing the resource costs associated with each of them. Coupling the QoE estimation and the calculation of remedies produces a tool for effective implementation of a User-centric management loop for multimedia streaming services.
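The remedy-selection step can be pictured as a cost-constrained search over candidate actions scored by the QoE prediction model. The Python sketch below shows one such selection loop; the remedy names, the toy predictor, and the cost values are hypothetical, and the paper's remedy generation is more elaborate than this.

```python
def pick_remedy(remedies, predict_mos, stream, target_mos=3.5):
    """Pick the cheapest remedy whose predicted QoE reaches the target.

    remedies: list of (name, resource_cost, apply_fn) tuples, where
    apply_fn maps current stream features to the remedied features.
    predict_mos: a trained QoE prediction model. All names here are
    hypothetical stand-ins for the paper's components.
    """
    feasible = [(cost, name) for name, cost, apply_fn in remedies
                if predict_mos(apply_fn(dict(stream))) >= target_mos]
    return min(feasible)[1] if feasible else None

# Toy usage with a hand-made QoE predictor
predict = lambda s: 1.0 + 3.5 * min(s["bitrate"] / 800, 1.0) - 0.5 * s["loss"]
stream = {"bitrate": 400, "loss": 1.0}
remedies = [
    ("boost_bitrate", 3.0, lambda s: {**s, "bitrate": 800}),
    ("add_fec",       1.0, lambda s: {**s, "loss": 0.0}),
]
print(pick_remedy(remedies, predict, stream))  # cheapest remedy meeting 3.5
```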
2020
Crowdsourced testing is an increasingly popular way to study the quality of experience (QoE) of applications, such as video streaming and web. The diverse nature of the crowd provides a more realistic assessment environment than laboratory-based assessments allow. Because of the short life-span of crowdsourcing tasks, each subject spends a significant fraction of the experiment time just learning how it works. We propose a novel experiment design to conduct a longitudinal crowdsourcing study aimed at improving the efficiency of crowdsourced QoE assessments. On Amazon Mechanical Turk, we found that our design was 20% more cost-effective than crowdsourcing multiple one-off short experiments. Our results showed that subjects had a high level of revisit intent and continuously participated in our experiments. We replicated the video streaming QoE assessments in a traditional laboratory setting. Our study showed similar trends in the relationship between video bitrate and QoE, which conf...
Proceedings of the 7th …, 2009
Iraqi journal of science, 2018
Technological development in recent years has increased access speeds in Internet networks, allowing a huge number of users to watch videos online. Video streaming is an important type of real-time video session and one of the most popular applications in networking systems. Quality of Service (QoS) techniques give an indication of the effect of multimedia traffic on network performance, but these techniques do not reflect user perception. Using QoS and Quality of Experience (QoE) together can guarantee the distribution of video content according to both video content characteristics and the user experience. To measure users' perception of quality, the QoE metric is used. Here we explain what QoE and QoS mean and how they differ, list the techniques used to measure them, and then present a study of the literature on the different tools and measurement methodologies that have been proposed to measure or predict the QoE of video streaming services.
Proceedings of the 2016 ACM on Multimedia Conference - MM '16, 2016
Health care has a long history of adopting technology to save lives and improve the quality of living. Visual information is frequently applied for disease detection and assessment, and the established fields of computer vision and medical imaging provide essential tools. It is, however, a misconception that disease detection and assessment are provided exclusively by these fields and that they provide the solution for all challenges. Integration and analysis of data from several sources, real-time processing, and the assessment of usefulness for end-users are core competences of the multimedia community and are required for the successful improvement of health care systems. For the benefit of society, the multimedia community should recognize the challenges of the medical world that they are uniquely qualified to address. We have conducted initial investigations into two use cases surrounding diseases of the gastrointestinal (GI) tract, where the detection of abnormalities provides the largest chance of successful treatment if the initial observation of disease indicators occurs before the patient notices any symptoms. Although such detection is typically provided visually by applying an endoscope, we are facing a multitude of new multimedia challenges that differ between use cases. In real-time assistance for colonoscopy, we combine sensor information about camera position and direction to aid in detecting, investigate means for providing support to doctors in unobtrusive ways, and assist in reporting. In the area of large-scale capsular endoscopy, we investigate questions of scalability, performance and energy efficiency for the recording phase, and combine video summarization and retrieval questions for analysis.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Models that predict satisfaction with a service over time need to consider the impact of emotions and remembered quality of experience in predicting overall attitudes towards a service. However, prior research on subjective quality of experience has typically focused on experiments conducted in a single session or over a short period of time. Thus, there is a gap between our understanding of instantaneous quality of experience and long-term judgments, such as overall satisfaction, likelihood to recommend, and likelihood to churn. The goal of the study reported here was to carry out a longitudinal study that would provide initial insights into how experiences of service quality over time are accumulated into memories that then drive longer-term attitudes about the service. Our longitudinal study was carried out over a period of roughly 4 weeks with around 3 sessions per week. To facilitate the study, an online service was constructed that would let participants search through YouTube ...
Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments
ITU-T standards and work are frequently referred to in this introduction, as much of the initial and ongoing work on QoE is carried out within ITU-T Study Group 12.
Proceedings of the 26th International Workshop on Network and Operating Systems Support for Digital Audio and Video, 2016
There is a growing interest in video performance measurements with emphasis on user experience, and several initiatives have been taken to conduct active testing of real video services. A deeper understanding of the variations in media bit rate and their influence on the performance of video playback is needed in order to design better measurements. In this paper, we analyze a dataset of YouTube videos from various genres. We show statistically that most YouTube videos can be represented sufficiently well by the first 1 to 3 minutes of the video. This eliminates the need for running longer tests when network conditions are stable, as in the case of fixed networks. We test our observation in an active testing environment that measures video metrics, and recommend, based on the results, that such tests should run for at least one minute; however, a duration of 3 minutes will help achieve better and more stable results.
ACM SIGMETRICS Performance Evaluation Review, 2013
YouTube is changing the way operators manage network performance monitoring. In this paper we introduce YOUQMON, a novel on-line monitoring system for assessing the Quality of Experience (QoE) undergone by HSPA/3G customers watching YouTube videos, using network-layer measurements only. YOUQMON combines passive traffic analysis techniques to detect stalling events in YouTube video streams, with a QoE model to map stallings into a Mean Opinion Score reflecting the end-user experience. We evaluate the stalling detection performance of YOUQMON with hundreds of YouTube video streams, and present results showing the feasibility of performing real-time YouTube QoE monitoring in an operational mobile broadband network.
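The stalling-detection idea can be approximated from passive measurements alone: the playable media time implied by the bytes received so far is compared against the playback position. The Python sketch below is a simplified reconstruction of that idea under assumed constant media bitrate and buffering thresholds, not YOUQMON's actual detection algorithm.

```python
def detect_stalls(cum_bytes, media_bitrate_bps, startup_s=2, rebuffer_s=2.0):
    """Infer stalling events of a video stream from passive byte counts.

    cum_bytes: cumulative bytes received, sampled once per second.
    Playable media time is approximated as received_bits / bitrate; when
    the playback position would overtake it, the player must stall.
    Simplified reconstruction; thresholds and constant bitrate are assumptions.
    """
    stalls = 0
    stalled = False
    position_s = 0.0                      # playback position in media time
    for t, rx in enumerate(cum_bytes):
        if t < startup_s:
            continue                      # initial buffering, not a stall
        playable_s = rx * 8 / media_bitrate_bps
        if stalled:
            if playable_s >= position_s + rebuffer_s:
                stalled = False           # enough media re-buffered
        elif position_s + 1.0 <= playable_s:
            position_s += 1.0             # playback advances one second
        else:
            stalls += 1                   # buffer ran dry: stalling event
            stalled = True
    return stalls

# 1 Mbps stream; throughput halves between t=10 s and t=20 s
rates = [125_000] * 10 + [62_500] * 10 + [125_000] * 20   # bytes per second
cum = [sum(rates[:i + 1]) for i in range(len(rates))]
print(detect_stalls(cum, media_bitrate_bps=1_000_000))    # -> 1
```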
Proceedings of the 21st ACM Internet Measurement Conference, 2021
We consider the problem of inferring the latency sensitivity of user activity in the context of interactive online services. Our method relies on natural experiments, i.e., leveraging the variation in user-experienced latency seen in the normal course. At its core, our technique, dubbed AutoSens, compares the distribution of latency of the user actions actually performed with the underlying distribution of latency independent of whether users choose to perform any action. This then yields a normalized user preference based on latency. We discuss ways of mitigating various confounders and then present our findings in the context of a large online email service, Microsoft Outlook Web Access (OWA).
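The core comparison can be sketched as a ratio of two latency histograms: the latencies at which users actually acted versus the background latency distribution. The Python example below illustrates this on synthetic data; the function, bucket boundaries, and synthetic behavior model are assumptions of mine, not the AutoSens implementation.

```python
import numpy as np

def latency_preference(action_lat_ms, all_lat_ms, bins):
    """Normalized user preference as a function of latency.

    Compares the histogram of latencies at which users actually acted
    against the background latency histogram; a ratio below 1 in a bucket
    suggests users act less at such latencies. A sketch of the idea the
    abstract describes, not the authors' implementation.
    """
    act, _ = np.histogram(action_lat_ms, bins=bins, density=True)
    base, _ = np.histogram(all_lat_ms, bins=bins, density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pref = np.where(base > 0, act / base, np.nan)
    return pref

bins = [0, 100, 200, 400, 800, 1600]
rng = np.random.default_rng(0)
all_lat = rng.exponential(300, 10_000)                 # background latency
# Synthetic behavior: users act less often the slower the page feels
acted = all_lat[rng.random(10_000) < np.exp(-all_lat / 600)]
print(latency_preference(acted, all_lat, bins))
```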
Proceedings of the Fourth International Workshop on Crowdsourcing for Multimedia, 2015
Crowdsourcing is a popular tool for conducting subjective evaluations in uncontrolled environments and at low cost. In this paper, a crowdsourcing study is conducted to investigate the impact of High Dynamic Range (HDR) imaging on subjective face recognition accuracy. For that purpose, a dataset of HDR images of people depicted in high-contrast lighting conditions was created and their faces were manually cropped to construct a probe set of faces. Crowdsourcing-based face recognition was conducted for five differently tone-mapped versions of HDR faces and were compared to face recognition in a typical Low Dynamic Range alternative. A similar experiment was also conducted using three automatic face recognition algorithms. The comparative analysis results of face recognition by human subjects through crowdsourcing and machine vision face recognition show that HDR imaging affects the recognition results of human and computer vision approaches differently.
2016
Downlink throughput is the most widely used and accepted performance feature within the networking community, especially in the operational field. Current network monitoring and reporting systems, as well as network quality benchmarking campaigns, use the Average Downlink Throughput (ADT) as the main Key Performance Indicator (KPI) reflecting the health of the network. In this paper we address the problem of network performance monitoring and assessment in operational networks from a user-centric, Quality of Experience (QoE) perspective. While we have shown in the past that accurate QoE estimation requires measurements and KPIs collected at multiple levels of the communications stack, including the network, transport, application, and end-user layers, we take a practical approach and provide an educated guess on QoE using only a standard ADT-based KPI as input. We do so to maximize the utilization of throughput measurements currently collected with common network traffic monitoring systems. Armed with QoE models mapping downlink bandwidth to user experience, derived from subjective QoE lab tests, we estimate the QoE undergone by customers of both cellular and fixed-line networks, using large-scale passive traffic measurements. In particular, we study the performance of three highly popular end-customer services: YouTube video streaming, Facebook social networking, and WhatsApp multimedia sharing. Surprisingly, our results suggest that up to 33% of the observed traffic flows might result in sub-optimal, or even poor, end-customer experience in both cellular and fixed-line networks, for the monitored services.
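An educated guess of this kind boils down to a single mapping from ADT to MOS. The Python sketch below uses a logarithmic (Weber-Fechner style) curve of the general shape such lab-derived models take; the coefficients and the per-service demand threshold are illustrative placeholders, not the paper's fitted models.

```python
import math

def qoe_from_adt(adt_mbps: float, a: float = 1.2, b: float = 1.5,
                 demand_mbps: float = 0.1) -> float:
    """Educated-guess MOS from Average Downlink Throughput alone.

    Logarithmic mapping of the general kind derived from subjective lab
    tests; a, b, and the per-service demand threshold are illustrative
    placeholders, not the paper's fitted per-service models.
    """
    mos = a * math.log(adt_mbps / demand_mbps) + b
    return max(1.0, min(5.0, mos))       # clamp to the 1..5 MOS scale

for adt in (0.2, 1.0, 5.0):
    print(adt, "Mbps ->", round(qoe_from_adt(adt), 2))
```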