2012, Proceedings of the 2012 ACM workshop on Capacity sharing - CSWS '12
Today, content replication methods are a common way of reducing network and server load. Existing content replication solutions suffer from several problems, including the need for pre-planning and management, and they are ineffective against sudden traffic spikes. Despite these problems, content replication methods are more popular today than ever, simply because of the growing need for load reduction. In this paper, we propose a shared buffering model that, unlike current proxy-based content replication methods, is native to the network and can be used to alleviate the stress that sudden traffic spikes place on servers and the network. We outline the characteristics of a new transport protocol that uses the shared buffers to offload server work to the network or to reduce pressure on overloaded links.
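A minimal sketch of the offloading idea in this abstract, assuming a hypothetical in-network SharedBuffer that can hold responses on a server's behalf during a spike. The class names and the spike threshold are illustrative assumptions, not the paper's transport protocol.

```python
# Illustrative sketch: an origin server stages hot content in an
# in-network shared buffer once per-tick request volume crosses a
# threshold, so later requests in the spike never reach the origin.
# SharedBuffer/OriginServer and the threshold are assumed names/values.

class SharedBuffer:
    """In-network buffer that absorbs load on behalf of an origin server."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.store = {}

    def put(self, key, data):
        if len(self.store) < self.capacity or key in self.store:
            self.store[key] = data
            return True
        return False  # buffer full; requests fall back to the origin

    def get(self, key):
        return self.store.get(key)


class OriginServer:
    def __init__(self, buffer, spike_threshold=50):
        self.buffer = buffer
        self.spike_threshold = spike_threshold
        self.requests_this_tick = 0
        self.content = {"/clip": b"payload"}

    def new_tick(self):
        self.requests_this_tick = 0

    def handle(self, key):
        self.requests_this_tick += 1
        if self.requests_this_tick > self.spike_threshold:
            # Overloaded: serve from (and populate) the shared buffer.
            cached = self.buffer.get(key)
            if cached is not None:
                return cached
            self.buffer.put(key, self.content[key])
        return self.content[key]


buf = SharedBuffer()
srv = OriginServer(buf)
for _ in range(100):  # a one-tick traffic spike
    srv.handle("/clip")
print("/clip staged in network buffer:", buf.get("/clip") is not None)
```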
International Journal of Intelligent Systems and Applications, 2015
Due to the emergence of more data-centric applications, the replication of data has become more common. In this context, a Prefetching-based Dynamic Data Replication Algorithm (PDDRA) was recently developed. Its main idea is to prefetch some data using a heuristic algorithm before actual replication starts, in order to reduce latency. Further modifications to the algorithm (M-PDDRA) have been suggested to minimize the delay in data replication. In this paper, the M-PDDRA algorithm is tested under shared and output buffering schemes. Simulation results are presented to estimate the packet loss rate and average delay for both the shared and output-buffered schemes. The results clearly show that shared buffering with load balancing performs as well as the output-buffered scheme while using far fewer buffering resources.
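The shared-versus-output buffering comparison can be illustrated with a toy fixed-cycle switch simulation: output buffering gives each output port a fixed private queue, while shared buffering lets all ports draw from one pool. The traffic model, sizes, and probabilities below are illustrative assumptions and do not reproduce the paper's simulation setup.

```python
# Toy switch simulation contrasting output buffering (fixed queue per
# output port) with shared buffering (one pool shared by all ports),
# measuring the packet loss rate under uniform random traffic.

import random

PORTS, CYCLES, TOTAL_BUF = 4, 10_000, 16
random.seed(1)

def simulate(shared):
    queues = [0] * PORTS      # packets waiting per output port
    pool_used = 0             # occupancy of the shared pool
    lost = arrived = 0
    per_port_cap = TOTAL_BUF // PORTS
    for _ in range(CYCLES):
        # Each input port receives a packet with probability 0.6,
        # destined uniformly at random to an output port.
        for _ in range(PORTS):
            if random.random() < 0.6:
                arrived += 1
                dst = random.randrange(PORTS)
                if shared:
                    if pool_used < TOTAL_BUF:
                        queues[dst] += 1
                        pool_used += 1
                    else:
                        lost += 1
                else:
                    if queues[dst] < per_port_cap:
                        queues[dst] += 1
                    else:
                        lost += 1
        # Each output port transmits one packet per cycle.
        for p in range(PORTS):
            if queues[p]:
                queues[p] -= 1
                if shared:
                    pool_used -= 1
    return lost / arrived

print("output-buffered loss rate:", simulate(shared=False))
print("shared-buffered loss rate:", simulate(shared=True))
```

With the same total buffer budget, the shared pool typically shows a lower loss rate because idle capacity at one port can absorb a burst at another.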
2006
The peer-to-peer (P2P) architecture provides support for the next generation of information-sharing applications. A difficult challenge faced by these systems in the presence of non-uniform data distribution and dynamic network conditions is load sharing. This paper addresses the problem of load sharing in P2P networks across heterogeneous super-peers. We propose two load sharing techniques that use data replication to improve access performance. In the first technique, called Periodic Push-based Replication (PPR), super-peers periodically send replicas of the most frequently accessed files to remote super-peers, effectively reducing the hop count needed to fetch these files. The second technique, called On-Demand Replication (ODR), performs replication based on access frequency. By replicating on demand, ODR adapts to changes in access behavior. Extensive testing has been conducted to study the performance of the proposed techniques, and the results demonstrate significant performance improvements through replication.
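The difference between the two triggers can be sketched as follows: PPR pushes the currently hottest files on a timer, while ODR replicates a file the moment its access count crosses a threshold. The class, method names, and thresholds are assumptions for illustration, not the paper's parameters.

```python
# Illustrative contrast between Periodic Push-based Replication (PPR)
# and On-Demand Replication (ODR) between two super-peers.

from collections import Counter

class SuperPeer:
    def __init__(self, name):
        self.name = name
        self.files = {}                 # local files and replicas
        self.access_counts = Counter()  # per-file request tally

    # PPR: called periodically; push the top-k hottest files to a remote peer.
    def periodic_push(self, remote, k=2):
        for fname, _ in self.access_counts.most_common(k):
            remote.files.setdefault(fname, self.files.get(fname))

    # ODR: called on every request; replicate once demand crosses a threshold.
    def on_demand(self, fname, remote, threshold=5):
        self.access_counts[fname] += 1
        if self.access_counts[fname] >= threshold and fname not in remote.files:
            remote.files[fname] = self.files.get(fname)


a, b = SuperPeer("A"), SuperPeer("B")
a.files = {"song.mp3": b"...", "doc.pdf": b"..."}
for _ in range(6):
    a.on_demand("song.mp3", b)   # ODR replicates after the 5th request
a.periodic_push(b)               # PPR pushes the current hottest files
print(sorted(b.files))           # -> ['song.mp3']
```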
2005
Current peer-to-peer (P2P) file-sharing systems are mostly optimized for file availability. This paper investigates P2P architecture for video streaming in general, and the performance impact of data redundancy schemes in particular. This work shows that maximizing file availability is not the best strategy for video streaming, because another constraint, peers' streaming bandwidth, comes into play. To address this limitation, a request-rate minimization policy is developed and evaluated using simulation.
Ninth International Conference on Parallel and Distributed Systems, 2002. Proceedings.
Content replication and distribution is an effective technology for reducing the response time of web accesses and has proven quite popular among large Internet content providers. However, existing content distribution systems assume a store-and-forward delivery model and are mostly based on static content. This paper describes the design, implementation, and initial evaluation of a network resource management system for real-time Internet content distribution called Sago, which provides facilities to provision and allocate network resources so that multiple bandwidth-guaranteed and fault-tolerant multicast connections can be multiplexed on a single physical network. Sago includes a novel network resource mapping algorithm that takes into account both physical network topology and dynamic traffic demands, a network-wide fault tolerance mechanism that supports both node-level and link-level fault tolerance, and a hierarchical network link scheduler that provides performance protection among multicast connections sharing the same physical network link. Moreover, Sago does not require any IP multicast support from the underlying network routers because it performs application-level multicasting. The technologies underlying Sago are important building blocks for real-time content distribution networks, end-to-end quality-of-service guarantees over global corporate intranets, and application-specific adaptation of wide-area network services.
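Application-level multicast, the mechanism Sago relies on instead of router IP multicast, can be sketched as nodes in an overlay tree that receive a packet and re-send it to their children over ordinary unicast connections. The tree shape and node class below are illustrative assumptions; Sago's resource mapping and link scheduling are not modeled here.

```python
# Minimal sketch of application-level multicast over an overlay tree:
# each node delivers locally, then replicates to its children.

class OverlayNode:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.received = []

    def add_child(self, node):
        self.children.append(node)

    def deliver(self, packet):
        # Receive locally, then forward one unicast copy per child.
        self.received.append(packet)
        for child in self.children:
            child.deliver(packet)


root = OverlayNode("origin")
mid1, mid2 = OverlayNode("relay-1"), OverlayNode("relay-2")
leaves = [OverlayNode(f"edge-{i}") for i in range(4)]
root.add_child(mid1); root.add_child(mid2)
mid1.add_child(leaves[0]); mid1.add_child(leaves[1])
mid2.add_child(leaves[2]); mid2.add_child(leaves[3])

root.deliver("frame-0")
print([n.received for n in leaves])   # every edge node got the frame
```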
IEEE Communications Magazine, 2000
Computer Communications, 2008
In this paper, we propose an adaptive object replication algorithm for distributed network systems and analyze its performance from both theoretical and experimental standpoints. We first present a mathematical cost model that considers all the costs associated with servicing a request, i.e., the I/O cost, the control-message transfer cost, and the data-message transfer cost. Using this cost model, we develop an adaptive object replication algorithm, referred to as the Adaptive Distributed Request Window (ADRW) algorithm. Our objective is to dynamically adjust the allocation schemes of objects based on the decision of the ADRW algorithm, i.e., whether the system is read-intensive or write-intensive, so as to minimize the total servicing cost of the arriving requests. Competitive analysis is carried out to study the performance of the ADRW algorithm theoretically. We then implement the proposed algorithm in a PC-based network system. The experimental results convincingly demonstrate that the ADRW algorithm is adaptive and superior to several related algorithms in the literature in terms of average request servicing cost.
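A sketch of the two ingredients named in this abstract: a per-request cost that sums I/O, control-message, and data-message components, and a window over recent requests that classifies the workload as read- or write-intensive to decide whether to add or drop replicas. The constants and the decision rule are illustrative assumptions, not the ADRW algorithm itself.

```python
# Illustrative cost model and request-window classifier in the spirit
# of ADRW; all constants and the adjustment rule are assumed values.

from collections import deque

IO_COST, CTRL_COST, DATA_COST = 1.0, 0.2, 2.0

def request_cost(control_msgs, data_msgs):
    # Total servicing cost = I/O + control messages + data messages.
    return IO_COST + control_msgs * CTRL_COST + data_msgs * DATA_COST

class RequestWindow:
    def __init__(self, size=16):
        self.window = deque(maxlen=size)

    def observe(self, op):                  # op is "read" or "write"
        self.window.append(op)

    def read_intensive(self):
        reads = sum(1 for op in self.window if op == "read")
        return reads > len(self.window) / 2

def adjust_replicas(window, replicas, max_replicas=4):
    # Read-intensive: more replicas cut remote-read cost.
    # Write-intensive: fewer replicas cut update-propagation cost.
    if window.read_intensive():
        return min(replicas + 1, max_replicas)
    return max(replicas - 1, 1)


print(request_cost(control_msgs=2, data_msgs=1))  # -> 3.4
w = RequestWindow()
for op in ["read"] * 10 + ["write"] * 3:
    w.observe(op)
print(adjust_replicas(w, replicas=2))             # -> 3 (read-heavy window)
```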
2008
To scale to millions of Internet users with good performance, content delivery networks (CDNs) must balance requests between content servers while assigning clients to nearby servers. In this paper, we describe a new CDN design that associates synthetic load-aware coordinates with clients and content servers and uses them to direct content requests to cached content. This approach helps achieve good performance when request workloads and resource availability in the CDN are dynamic.
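One way to picture load-aware coordinates: each server's position in a synthetic space is inflated by a load penalty, and a request goes to the server whose adjusted distance from the client is smallest. The 2-D coordinates and the penalty weight below are illustrative assumptions, not the paper's construction.

```python
# Illustrative load-aware server selection: proximity in a synthetic
# coordinate space plus a load penalty, so a heavily loaded nearby
# server can lose to a lightly loaded farther one.

import math

def load_aware_distance(client_xy, server_xy, server_load, alpha=5.0):
    dx = client_xy[0] - server_xy[0]
    dy = client_xy[1] - server_xy[1]
    return math.hypot(dx, dy) + alpha * server_load

def pick_server(client_xy, servers):
    return min(servers,
               key=lambda s: load_aware_distance(client_xy, s["xy"], s["load"]))


servers = [
    {"name": "near-busy", "xy": (1.0, 1.0), "load": 0.9},
    {"name": "far-idle",  "xy": (4.0, 3.0), "load": 0.1},
]
print(pick_server((0.0, 0.0), servers)["name"])   # -> "far-idle"
```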
IEEE/ACM Transactions on Networking …, 2002
Popular content is frequently replicated in multiple servers or caches in the Internet to offload origin servers and improve end-user experience. However, choosing the best server is a nontrivial task, and a bad choice may provide poor end-user experience. In contrast to ...
2009
The quality of service for latency-dependent content, such as video streaming, largely depends on the distance and available bandwidth between the consumer and the content. Poor provision of these qualities results in reduced user experience and increased overhead. To alleviate this, many systems employ caching and replication, utilising dedicated resources to move the content closer to the consumer.