Papers by Mohamed Hefeeda
Spider: A System for Finding Illegal 3D Video Copies
Qatar Foundation Annual Research Forum Proceedings, 2011

Dynamic configuration of single frequency networks in mobile streaming systems
Proceedings of the 6th ACM Multimedia Systems Conference, Mar 18, 2015
Energy-Aware and Bandwidth-Efficient Hybrid Video Streaming Over Mobile Networks
IEEE Transactions on Multimedia, 2016
A Graphics Processing Unit Controller, Host System, and Methods
Video magnification in presence of large motions
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015
The IEEE Conference on Local Computer Networks 30th Anniversary (LCN'05), 2005
The sensing capabilities of networked sensors are affected by environmental factors in real deployments, and it is imperative to account for this at the design stage in order to anticipate the actual sensing behaviour. We investigate coverage issues in wireless sensor networks based on probabilistic coverage and propose a distributed Probabilistic Coverage Algorithm (PCA) to evaluate the degree of confidence in the detection probability provided by a randomly deployed sensor network. The probabilistic approach is a deviation from the idealistic assumption of a uniform circular sensing disc used in the binary detection model. Simulation results show that the area coverage calculated using PCA is more accurate than that of the idealistic binary detection model.
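As a rough illustration of the probabilistic sensing model the abstract contrasts with the binary disc, the sketch below evaluates area coverage on a grid when detection probability decays exponentially with distance. The decay law p(d) = exp(-alpha*d), the confidence threshold, and all parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import math, random

# Sketch: probabilistic area coverage on a grid. A point counts as covered
# when the combined detection probability over all sensors exceeds a
# confidence threshold; the exponential model is one common choice.

def detection_prob(d, r=10.0, alpha=0.2):
    if d > r:                      # beyond maximum sensing range
        return 0.0
    return math.exp(-alpha * d)    # detection probability decays with distance

def grid_coverage(sensors, side=100, step=5, threshold=0.9):
    """Fraction of grid points with 1 - prod(1 - p_i) >= threshold."""
    covered = total = 0
    for x in range(0, side, step):
        for y in range(0, side, step):
            miss = 1.0
            for sx, sy in sensors:
                miss *= 1.0 - detection_prob(math.hypot(x - sx, y - sy))
            covered += (1.0 - miss) >= threshold
            total += 1
    return covered / total

random.seed(1)
sensors = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
print(grid_coverage(sensors))
```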

Proceedings of the first annual ACM SIGMM conference on Multimedia systems - MMSys '10, 2010
We present the design of a peer-to-peer (P2P) live streaming system that uses scalable video coding as well as network coding. The proposed design enables flexible customization of video streams to support heterogeneous receivers, highly utilizes the upload bandwidth of peers, and quickly adapts to network and peer dynamics. Our design is simple and modular; other P2P streaming systems could therefore adopt various components of our design to improve their performance. We conduct an extensive quantitative analysis to demonstrate the expected performance gain from the proposed design. Our analysis uses actual scalable video traces and realistic P2P streaming environments with high churn rates, heterogeneous peers, and flash crowd scenarios. Our results show that the proposed system can achieve: (i) significant improvement in the visual quality perceived by peers (several dBs are observed), (ii) smoother and more sustained streaming rates, (iii) higher streaming capacity by serving more requests from peers, and (iv) more robustness against high churn rates and flash crowd arrivals of peers. This paper shows that the integration of network coding and scalable video coding in P2P live streaming systems yields better performance than current systems that use single-layer streams and proposed systems that use either network coding alone or scalable video coding alone.
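To give a concrete sense of the network-coding side of such a design, here is a minimal sketch of random linear network coding over GF(2): a sender XORs a random subset of a segment's blocks, and a receiver checks whether each arriving coded block is innovative (increases the rank of its coefficient matrix). Block contents and sizes are toy values; the paper's system codes within scalable-video layers, which this sketch does not model.

```python
import random

def encode(blocks):
    """Return (coeffs, payload): XOR of a random nonzero subset of blocks."""
    n = len(blocks)
    coeffs = [random.randint(0, 1) for _ in range(n)]
    if not any(coeffs):
        coeffs[random.randrange(n)] = 1
    payload = 0
    for c, b in zip(coeffs, blocks):
        if c:
            payload ^= b
    return coeffs, payload

def is_innovative(basis, coeffs):
    """Gaussian elimination over GF(2); appends to basis if coeffs is new."""
    v = coeffs[:]
    for row in basis:
        pivot = next(i for i, x in enumerate(row) if x)
        if v[pivot]:
            v = [a ^ b for a, b in zip(v, row)]
    if any(v):
        basis.append(v)
        return True
    return False

blocks = [0x11, 0x22, 0x33, 0x44]      # toy block contents of one segment
basis, received = [], 0
while len(basis) < len(blocks):        # collect until full rank (decodable)
    coeffs, payload = encode(blocks)
    received += 1
    is_innovative(basis, coeffs)
print(f"decodable after {received} coded blocks")
```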

2009 11th IEEE International Symposium on Multimedia, 2009
We study and analyze segment transmission scheduling algorithms in swarm-based peer-to-peer (P2P) streaming systems. These scheduling algorithms are responsible for coordinating the streaming of video data from multiple senders to a receiver in each streaming session. Although scheduling algorithms directly impact the user-perceived visual quality in streaming sessions, they have not been rigorously analyzed in the literature. In this paper, we first conduct an extensive experimental study to evaluate various scheduling algorithms on many PlanetLab nodes distributed all over the world. We study three important performance metrics: (i) the continuity index, which captures the smoothness of the video playback, (ii) the load balancing index, which indicates how the load is spread across sending peers, and (iii) the buffering delay required to ensure continuous playback. Our experimental analysis reveals the strengths and weaknesses of each scheduling algorithm, and provides insights for developing better ones in order to improve the overall performance of P2P streaming systems. We then propose a new scheduling algorithm called On-time Delivery of VBR streams (ODV). Our experiments show that the proposed scheduling algorithm improves playback quality by increasing the continuity index, requires smaller buffering delays, and achieves a more balanced load distribution across peers.
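The three metrics can be computed along the following lines; these are illustrative definitions (the load balancing index is rendered here as Jain's fairness index), and the paper's precise formulas may differ.

```python
# Sketch of the three evaluation metrics named in the abstract, under
# assumed concrete definitions.

def continuity_index(arrival_times, deadlines):
    """Fraction of segments that arrive by their playback deadline."""
    on_time = sum(a <= d for a, d in zip(arrival_times, deadlines))
    return on_time / len(deadlines)

def load_balancing_index(bytes_per_sender):
    """Jain's fairness index over bytes served per sender; 1.0 = balanced."""
    n = len(bytes_per_sender)
    s, sq = sum(bytes_per_sender), sum(b * b for b in bytes_per_sender)
    return (s * s) / (n * sq) if sq else 0.0

def buffering_delay(arrival_times, deadlines):
    """Smallest startup shift that makes playback continuous."""
    return max(max(a - d for a, d in zip(arrival_times, deadlines)), 0)

arrivals = [0.2, 1.1, 2.5, 3.0]
deadlines = [1.0, 2.0, 3.0, 4.0]
print(continuity_index(arrivals, deadlines))
print(load_balancing_index([300, 280, 310]))
print(buffering_delay(arrivals, deadlines))
```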
Dynamic Sharing of GPUs in Cloud Systems
2013 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum, 2013

2013 12th International Conference on Machine Learning and Applications, 2013
We propose a distributed method to compute the similarity (also known as kernel or Gram) matrices used in various kernel-based machine learning algorithms. Current methods for computing similarity matrices have quadratic time and space complexities, which make them unable to scale to large data sets. To reduce these quadratic complexities, the proposed method first partitions the data into smaller subsets using various families of locality sensitive hashing, including random projection and spectral hashing. Then, the method computes the similarity values among points in the smaller subsets, producing approximated similarity matrices. We analytically show that the time and space complexities of the proposed method are subquadratic. We implemented the proposed method using the Message Passing Interface (MPI) framework and ran it on a cluster. Our results with real large-scale data sets show that the proposed method does not significantly impact the accuracy of the computed similarity matrices while achieving substantial savings in running time and memory requirements.
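A minimal single-machine sketch of this idea: bucket points with random projections (one LSH family the abstract names), then compute kernel values only within buckets, yielding a sparse approximation of the full similarity matrix. The RBF kernel, bit count, and other parameters are illustrative; the paper distributes the per-bucket work with MPI, which is omitted here.

```python
import numpy as np

def lsh_buckets(X, n_bits=8, seed=0):
    """Hash each row of X to the sign pattern of n_bits random projections."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    codes = (X @ planes) > 0                      # boolean sign pattern
    buckets = {}
    for i, code in enumerate(map(tuple, codes)):
        buckets.setdefault(code, []).append(i)
    return buckets

def approx_similarity(X, gamma=0.5, n_bits=8):
    """RBF kernel computed only inside LSH buckets; zeros elsewhere."""
    n = X.shape[0]
    S = np.zeros((n, n))
    for idx in lsh_buckets(X, n_bits).values():
        sub = X[idx]
        d2 = ((sub[:, None, :] - sub[None, :, :]) ** 2).sum(-1)
        S[np.ix_(idx, idx)] = np.exp(-gamma * d2)
    return S

X = np.random.default_rng(1).standard_normal((1000, 16))
S = approx_similarity(X)
print(f"nonzero entries: {np.count_nonzero(S) / S.size:.3%}")
```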

Proceedings of the 4th Workshop on Mobile Video - MoVid '12, 2012
In recent years, there has been tremendous growth in multimedia applications over the wireless Internet. The significant bandwidth requirements of multimedia services have increased the demand for radio spectrum, and the scarcity of radio spectrum has challenged the conventional fixed spectrum assignment policy. As a result, cognitive radio has emerged as a new paradigm that addresses spectrum underutilization by enabling users to opportunistically access unused spectrum bands. In this paper, we propose a framework for video transmission over cognitive radio networks. Our objective is to determine the optimal streaming policy that maximizes the overall perceived video quality while keeping quality fluctuations to a minimum. In our framework, we introduce a channel usage model based on a two-state Markov model and estimate the future busy and idle durations of the spectrum from past observations. On the basis of this scheme, we formulate the streaming optimization problem under the constraint of the available bandwidth budget so that the optimal number of enhancement-layer bits is assigned to each frame. We extend this algorithm to three different optimization levels: frame, GOP, and scene. We evaluate our algorithm through extensive trace-driven simulations and show that it improves the perceived video quality and increases bandwidth utilization.
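A minimal sketch of the kind of two-state (idle/busy) Markov channel-usage estimator the abstract describes: fit transition probabilities from a binary occupancy trace, then predict expected idle and busy durations from the geometric holding times of a two-state chain. The trace format and parameter names are illustrative, not the paper's implementation.

```python
def fit_two_state_markov(trace):
    """Estimate transition probabilities from a binary occupancy trace.

    trace: sequence of 0 (idle) / 1 (busy) channel observations, one per slot.
    Returns (p_ib, p_bi): P(idle -> busy) and P(busy -> idle).
    """
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for prev, cur in zip(trace, trace[1:]):
        counts[(prev, cur)] += 1
    idle_total = counts[(0, 0)] + counts[(0, 1)]
    busy_total = counts[(1, 0)] + counts[(1, 1)]
    p_ib = counts[(0, 1)] / idle_total if idle_total else 0.0
    p_bi = counts[(1, 0)] / busy_total if busy_total else 0.0
    return p_ib, p_bi

def expected_durations(p_ib, p_bi):
    """Expected idle/busy sojourn times in slots (geometric holding times)."""
    exp_idle = 1.0 / p_ib if p_ib > 0 else float("inf")
    exp_busy = 1.0 / p_bi if p_bi > 0 else float("inf")
    return exp_idle, exp_busy

trace = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0]
p_ib, p_bi = fit_two_state_markov(trace)
print(expected_durations(p_ib, p_bi))  # predicted idle/busy slot counts
```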

Proceedings of the 3rd Multimedia Systems Conference - MMSys '12, 2012
We present a novel content-based copy detection system for 3D videos. The system creates compact and robust depth and visual signatures from 3D videos. Using both spatial and temporal characteristics of videos, it returns a score indicating whether a given query video matches any video in a reference video database, and in the case of a match, which portion of the reference video matches the query video. Our analysis shows that the system is efficient in both computation and storage. The system can be used, for example, by video content owners, video hosting sites, and third-party companies to find illegally copied 3D videos. We implemented Spider, a complete realization of the proposed system, and conducted rigorous experiments on it. Our experimental results show that the proposed system achieves high accuracy in terms of precision and recall even if copied 3D videos are subjected to several modifications at the same time. For example, the proposed system yields 100% precision and recall when copied videos are parts of original videos, and more than 90% precision and recall when copied videos are subjected to different individual modifications such as cropping, scaling, and blurring.
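To illustrate what a compact depth signature can look like, here is a toy version: each depth frame is reduced to a coarse grid of block means, binarized against the frame median, and compared bitwise. This is only a sketch of the general idea; Spider's actual signatures and matching are more elaborate.

```python
import numpy as np

def depth_signature(depth_frame, grid=8):
    """Return a grid*grid bit signature for one depth frame."""
    h, w = depth_frame.shape
    blocks = depth_frame[: h - h % grid, : w - w % grid]
    blocks = blocks.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (blocks > np.median(blocks)).flatten()

def match_score(sig_a, sig_b):
    """Similarity in [0, 1]: fraction of matching signature bits."""
    return float(np.mean(sig_a == sig_b))

rng = np.random.default_rng(0)
original = rng.random((240, 320))                        # toy depth map
modified = original + rng.normal(0, 0.02, original.shape)  # mild distortion
print(match_score(depth_signature(original), depth_signature(modified)))
```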

Modeling and optimizing eye vergence response to stereoscopic cuts
ACM Transactions on Graphics, 2014
Sudden temporal depth changes, such as cuts introduced by video edits, can significantly degrade the quality of stereoscopic content. Since such changes are rarely encountered in the real world, they are very challenging for the audience: the eye vergence has to constantly adapt to new disparities in spite of conflicting accommodation requirements. Such rapid disparity changes may lead to confusion, reduced understanding of the scene, and lower overall attractiveness of the content. In most cases the problem cannot be solved by simply matching the depth around the transition, as this would require flattening the scene completely. To better understand this limitation of the human visual system, we conducted a series of eye-tracking experiments. The data obtained allowed us to derive and evaluate a model describing the adaptation of vergence to disparity changes on a stereoscopic display. Besides computing user-specific models, we also estimated the parameters of an average-observer model. This enables a range of strategies for minimizing the adaptation time in the audience.
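As a plausible first-order sketch of vergence adapting to a step change in disparity at a cut (the eyes converge toward the new disparity with an exponential time course), consider the following; the functional form and the time constant are illustrative assumptions, whereas the paper fits its model per observer from eye-tracking data.

```python
import math

def vergence_response(d_before, d_after, t, tau=0.5):
    """Vergence angle proxy t seconds after a cut (tau in seconds)."""
    return d_after + (d_before - d_after) * math.exp(-t / tau)

def adaptation_time(d_before, d_after, tau=0.5, tol=0.05):
    """Time until residual vergence error falls below tol of the step size."""
    return tau * math.log(1.0 / tol) if d_before != d_after else 0.0

# A cut from 1.5 deg to -0.5 deg of disparity:
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, round(vergence_response(1.5, -0.5, t), 3))
print("settle time:", round(adaptation_time(1.5, -0.5), 3), "s")
```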

IEEE/ACM Transactions on Networking, 2008
Peer-to-peer (P2P) file sharing systems generate a major portion of the Internet traffic, and this portion is expected to increase in the future. We explore the potential of deploying proxy caches in different Autonomous Systems (ASes) with the goal of reducing the cost incurred by Internet service providers and alleviating the load on the Internet backbone. We conduct an eight-month measurement study to analyze the P2P traffic characteristics that are relevant to caching, such as object popularity, popularity dynamics, and object size. Our study shows that the popularity of P2P objects can be modeled by a Mandelbrot-Zipf distribution, and that several workloads exist in P2P traffic. Guided by our findings, we develop a novel caching algorithm for P2P traffic that is based on object segmentation and proportional partial admission and eviction of objects. Our trace-based simulations show that with a relatively small cache size, a byte hit rate of up to 35% can be achieved by our algorithm, which is close to the byte hit rate achieved by an off-line optimal algorithm with complete knowledge of future requests. Our results also show that our algorithm achieves a byte hit rate that is at least 40% higher than, and at most triple, the byte hit rate of common web caching algorithms. Furthermore, our algorithm is robust in the face of aborted downloads, which are common in P2P systems.
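The Mandelbrot-Zipf popularity model the study fits is p(i) = K / (i + q)^alpha for rank i, where the plateau parameter q flattens the head of the distribution relative to plain Zipf. The sketch below computes the model's per-rank probabilities; the parameter values are illustrative, not the measured ones.

```python
def mzipf(n_objects, alpha=0.8, q=20):
    """Mandelbrot-Zipf access probability for each rank 1..n_objects."""
    weights = [1.0 / (i + q) ** alpha for i in range(1, n_objects + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = mzipf(10_000)
top_100 = sum(probs[:100])
print(f"share of requests for the 100 most popular objects: {top_100:.2%}")
# With q > 0 the head is flatter than plain Zipf, so caching whole popular
# objects helps less -- one intuition behind partial (segment-based) caching.
```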
Two-Moment Analysis of a Computation’s Performance
In this paper, we present a methodology for deriving second-moment performance information of software structures. Performance modeling of software structures comprises the Module Level of the Hierarchical Performance Model for the evaluation of distributed conventional software systems. The components utilized in building Computation Structure Models are presented, along with the mathematical models for time cost evaluation and second-moment analysis of sequential computations.
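The core of two-moment analysis can be sketched as moment propagation through composition rules: for independent sequential stages the means and variances add, and for a probabilistic branch the moments combine through the law of total expectation. The component set and notation below are illustrative, not the Hierarchical Performance Model's own.

```python
def sequential_moments(stages):
    """stages: list of (mean, variance) pairs for independent stages.
    E[T] = sum(m_i);  Var[T] = sum(v_i) under independence."""
    mean = sum(m for m, _ in stages)
    var = sum(v for _, v in stages)
    return mean, var

def branch_moments(branches):
    """branches: list of (prob, mean, variance); one branch is selected.
    E[T] = sum p_i m_i;  E[T^2] = sum p_i (v_i + m_i^2);  Var = E[T^2] - E[T]^2."""
    mean = sum(p * m for p, m, _ in branches)
    second = sum(p * (v + m * m) for p, m, v in branches)
    return mean, second - mean * mean

print(sequential_moments([(2.0, 0.5), (3.0, 1.0), (1.0, 0.2)]))
print(branch_moments([(0.7, 2.0, 0.5), (0.3, 5.0, 2.0)]))
```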
Introduction to special section on 3D mobile multimedia
ACM Transactions on Multimedia Computing, Communications, and Applications, 2012