1998, IEEE Network
The success of the World Wide Web has led to a steep increase in the user population and the amount of traffic on the Internet. Popular Web pages create "hot spots" of network load because of their great demand for bandwidth, and response times grow because the Web servers become overloaded. We propose distributing very popular, frequently changing Web documents using continuous multicast push (CMP). For such documents, CMP offers a very efficient use of network resources, a reduced load on the server, lower response times, and scalability to a growing number of receivers; we present a quantitative evaluation of continuous multicast push over a wide range of parameters. We view CMP as one of three complementary delivery options integrated in a Web server: unicast pull, AMP, and CMP. Unicast pull: the document is sent in response to a user's pull request; this method is used for documents that are rarely requested. Asynchronous multicast push (AMP): requests for the same document are accumulated over a small time interval and answered together via multicast; this method is used for popular documents that receive multiple requests within a limited period of time. Continuous multicast push (CMP): a document is continuously multicast on the same multicast address; this method is used for very popular documents that change so frequently that they are not worth caching. A Web server using CMP continuously multicasts the latest version of a popular Web document on a multicast address (every document is assigned its own multicast address). Receivers tune into the multicast group for the time required to reliably receive the document and then leave the group.
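The three-way choice among delivery modes described above can be sketched as a simple dispatcher. The numeric thresholds below are illustrative assumptions for the sketch; the abstract does not give concrete values.

```python
def choose_delivery(req_per_sec, changes_per_hour,
                    amp_threshold=1.0, cmp_threshold=50.0, hot_change=6.0):
    """Pick among the three delivery modes described above.
    All thresholds are illustrative assumptions, not values from the paper."""
    if req_per_sec >= cmp_threshold and changes_per_hour >= hot_change:
        return "CMP"      # very popular and changes too often to cache
    if req_per_sec >= amp_threshold:
        return "AMP"      # batch near-simultaneous pulls into one multicast
    return "unicast"      # rarely requested: plain pull

print(choose_delivery(0.01, 0))   # unicast
print(choose_delivery(5, 1))      # AMP
print(choose_delivery(200, 12))   # CMP
```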
In order to achieve reliable document delivery, we propose the use of a forward error correction (FEC) code in combination with cyclic transmissions (see Section 3.4).
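The cyclic-transmission idea can be sketched as a toy data carousel: the server loops over the document's packets plus parity on a multicast address, and a receiver tunes in at an arbitrary point and leaves once it can decode. The single XOR parity packet here is an illustrative stand-in for a real FEC code (it repairs at most one loss), not the scheme from the paper.

```python
import random

def make_carousel(packets):
    """One carousel cycle: the k source packets plus one XOR parity packet."""
    parity = 0
    for p in packets:
        parity ^= p
    return [(i, p) for i, p in enumerate(packets)] + [("xor", parity)]

def receive(packets, loss_rate=0.2, seed=1):
    """Tune in mid-cycle and count slots listened until the document decodes."""
    rng = random.Random(seed)
    k = len(packets)
    cycle = make_carousel(packets)
    have, parity, slots = {}, None, 0
    start = rng.randrange(len(cycle))        # join at an arbitrary point
    while True:
        ident, payload = cycle[(start + slots) % len(cycle)]
        slots += 1
        if rng.random() >= loss_rate:        # this packet survived
            if ident == "xor":
                parity = payload
            else:
                have[ident] = payload
        if len(have) == k:
            return slots
        if len(have) == k - 1 and parity is not None:
            # recover the one missing packet from the XOR parity
            missing = next(i for i in range(k) if i not in have)
            rec = parity
            for p in have.values():
                rec ^= p
            have[missing] = rec
            return slots

doc = [0x10, 0x22, 0x3f, 0x40]               # a 4-packet "document"
print(receive(doc, loss_rate=0.2) >= len(doc))   # a receiver needs >= k slots
```

With zero loss the receiver always finishes after exactly k slots, whichever point of the cycle it joins at, which is the property that lets receivers arrive asynchronously.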
Computer Networks and ISDN Systems, 1998
We consider two schemes for the distribution of popular Web documents. In the first scheme the sender repeatedly transmits the Web document into a multicast address, and receivers asynchronously join the corresponding multicast tree to receive a copy. In the second scheme, the document is distributed to the receivers through a hierarchy of Web caches. We develop analytical models for both schemes, and use the models to compare the two schemes in terms of latency and bandwidth usage. We find that except for documents that change very frequently, hierarchical caching gives lower latency and uses less bandwidth than multicast. For rapidly changing documents, multicast distribution reduces latency, saves network bandwidth, and reduces the load on the origin server. Furthermore, if a document is updated randomly rather than periodically, the relative performance of CMP improves. Therefore, the best overall performance is achieved when the Internet implements both solutions, hierarchical caching and multicast.
2002
To serve asynchronous requests using multicast, two categories of techniques, stream merging and periodic broadcasting, have been proposed. For sequential streaming access, where requests are uninterrupted from the beginning to the end of an object, these techniques are highly scalable: the required server bandwidth for stream merging grows logarithmically with the request arrival rate, and the required server bandwidth for periodic broadcasting varies logarithmically with the inverse of the start-up delay.
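The two logarithmic scaling claims above can be illustrated numerically. The closed forms below are back-of-the-envelope approximations in units of the media playback rate; the exact constants depend on the specific merging or broadcasting scheme, so treat this as a sketch of the growth behavior only.

```python
import math

def merge_bandwidth(arrival_rate, duration):
    """Approximate server bandwidth (playback-rate units) for stream
    merging: grows with the log of the request load per object duration."""
    return math.log(1 + arrival_rate * duration)

def broadcast_bandwidth(duration, startup_delay):
    """Approximate server bandwidth for periodic broadcasting: grows with
    the log of the object duration over the tolerated start-up delay."""
    return math.log(1 + duration / startup_delay)

# A tenfold increase in request rate costs only ~ln(10) extra channels:
for lam in (0.1, 1.0, 10.0):                 # requests per second
    print(f"rate {lam:4}: {merge_bandwidth(lam, 7200):.2f} channels")
```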
Proceedings ISCC 2002 Seventh International Symposium on Computers and Communications, 2002
The competition for clients' attention requires sites to update their content frequently. As a result, a large percentage of web pages are semi-dynamic, i.e., they change quite often but stay static between changes. The cost of maintaining consistency for such pages discourages caching solutions. We suggest an integrated architecture for the scalable delivery of frequently changing hot pages. Our scheme enables sites to dynamically select whether to cyclically multicast a hot page or to unicast it, and to switch between the multicast and unicast mechanisms transparently. The scheme defines a new protocol, called httpm. In addition, it uses currently deployed protocols and dynamically directs browsers seeking a URL to multicast channels, while using existing DNS mechanisms. Thus, we enable sites to deliver content to a growing number of users at lower cost, even during denial-of-service attacks, while reducing the load on core links. We report simulation results that demonstrate the advantages of the integrated architecture and its significant impact on server and network load, as well as on client delay.
Proceedings. 19th IEEE International Conference on Distributed Computing Systems (Cat. No.99CB37003)
The publish/subscribe (or pub/sub) paradigm is a simple and easy-to-use model for interconnecting applications in a distributed environment. Many existing pub/sub systems are based on pre-defined subjects, and hence are able to exploit multicast technologies to provide scalability and availability. An emerging alternative to subject-based systems, known as content-based systems, allows information consumers to request events based on the content of published messages. This model is considerably more flexible than subject-based pub/sub; however, it was previously not known how to efficiently multicast published messages to interested content-based subscribers within a network of broker (or router) machines. This shortcoming limits the applicability of content-based pub/sub in large or geographically distributed settings. In this paper, we develop and evaluate a novel and efficient technique for multicasting within a network of brokers in a content-based subscription system, thereby showing that content-based pub/sub can be deployed in large or geographically distributed settings.
International journal of engineering research and technology, 2018
In today's world, HTTP-driven Internet traffic forms the backbone of information and content transfer around the globe. Content delivery networks (CDNs) offer these systems the service of distributing this content wherever it is needed. However, these systems are often plagued by content delivery failures due to various CDN constraints, such as edge server failure or congestion. We therefore explore a browser-based, peer-assisted design to resolve content delivery failures. WebRTC is a novel way to deliver web content, but it adds the complexity of handling dynamic resources efficiently. A fallback mechanism can be formulated to handle server and network failures by enforcing protocols that support efficient and optimized content delivery. HTTP-based services are currently used to deliver rich media, which has increased the bandwidth load on CDNs and is a common cause of content delivery failures. The browser-based, peer-assisted scheme aims to reduce this load by leveraging the bandwidth of connected users to serve content to newer users. This method needs optimization to ensure that the most appropriate peer is selected, and it suffers from problems that conventional P2P methods do not, such as peers disconnecting as soon as the page is loaded.
ACM SIGCOMM Computer …, 2005
This paper introduces large scale content distribution protocols, which are capable of scaling to massive numbers of users and providing low delay end-to-end delivery. Delivery of files and static objects is described, with real-time content streaming being outside the scope of this paper. The focus is on solutions provided by the IETF Reliable Multicast Transport Working Group. More precisely, the paper explains FLUTE, ALC and the associated building blocks. Then it discusses how these components are used in the Multimedia Broadcast Multicast Service (MBMS) for 3G systems and in the IP Datacast (IPDC) service for Digital Video Broadcast for Handheld devices (DVB-H).
2000
Despite advances in networking technology, the limitation of the server bandwidth prevents multimedia applications from taking full advantage of next-generation networks. This constraint sets a hard limit on the number of users the server is able to support simultaneously. To address this bottleneck, we propose a Caching Multicast Protocol (CMP) to leverage the in-network bandwidth. Our solution caches video streams in the routers to facilitate services in the immediate future. In other words, the network storage is managed as a huge "video server" to allow the application to scale far beyond the physical limitation of its video server. The tremendous increase in the service bandwidth also enables the system to provide true on-demand services. To assess the effectiveness of this technique, we develop a detailed simulator to compare its performance with that of our earlier scheme called Chaining. The simulation results indicate that CMP is substantially better, with many desirable properties: (1) it is optimized to reduce traffic congestion; (2) it uses much less caching space; (3) client workstations are not involved in the caching protocol; and (4) it can work at the network layer to leverage modern routers.
Computer Communications, 1999
The exploding Internet has brought many novel network applications. These include teleconferencing, interactive games, voice/video phones, real-time multimedia playback, distributed computing, web casting, and so on. One specific characteristic of these applications is that all involve interactions among multiple members in a single session. Unlike traditional one-to-one message transmission (unicasting), if the underlying networks provide no suitable protocol support, these applications may be costly or infeasible to implement.
There is an increasing demand for using today's shared computer networks, such as the Internet, for group-based applications. Multicast is a developing communication technology designed to provide efficient multi-point message delivery for large-scale groups. Research directed at the data link and network layers has been very successful, and multicast service is now available in most best-effort networks. However, best-effort multicast networks do not offer quality-of-service guarantees such as bounded transmission delays and error rates. Therefore, group-based applications rely on multicast transport protocols for ordering, reliability, group management, and other end-to-end services. This dissertation presents MESH: a novel, distributed transport protocol designed for large-scale multicast. We show that MESH's error recovery and receiver feedback service (1) achieves high application performance, i.e., low delivery latency and high throughput; (2) efficiently utilizes network and end-system resources; (3) provides a flexible error control model suitable for reliable, unreliable, and other error control paradigms; (4) provides timely state information required by congestion, flow, group management, reliability, and other control protocols; and (5) scales to large receiver sets and wide-area heterogeneous networks. Using the MESH framework, we design and implement a reliable protocol (MESH-R) and a deadline-driven reliable protocol (MESH-M) in a high-fidelity simulation of SURAnet and vBNSnet. We show that the performance and overhead of MESH-R and MESH-M compare favorably to extant transport techniques, namely centralized, tree-based, unstructured, and FEC-based schemes, for bulk-data distribution and continuous media applications.
IEEE Transactions on Mobile Computing, 2004
There has been a surge of interest in the delivery of personalized information to users (e.g. personalized stocks or travel information), particularly as mobile users with limited terminal device capabilities increasingly desire updated, targeted information in real time. When the number of information recipients is large and there is sufficient commonality in their interests, as is often the case, IP multicast is an efficient way of delivering the information. However, IP multicast services do not consider the structure and semantics of the information in the multicast process. We propose the use of Content-Based Multicast (CBM) where extra content filtering is performed at the interior nodes of the IP multicast tree; this will reduce network bandwidth usage and delivery delay, as well as the computation required at the sources and sinks.
Hosting a Web site at a single server creates performance and reliability issues when request load increases, availability is at stake, and, in general, when quality-of-service demands rise. A common approach to these problems is making use of a content delivery network (CDN) that supports distribution and replication of (parts of) a Web site. The nodes of such networks are dispersed across the Internet, allowing clients to be redirected to a nearest copy of a requested document, or to balance access loads among several servers. Also, if documents are replicated, availability of a site increases. The design space for constructing a CDN is large and involves decisions concerning replica placement, client redirection policies, but also decentralization. We discuss the principles of various types of distributed Web hosting platforms and show where tradeoffs need to be made when it comes to supporting robustness, flexibility, and performance.
Proceedings of the 20th IEEE Instrumentation Technology Conference (Cat No 03CH37412) EURMIC-03, 2003
The dramatic growth of the Internet and of Web traffic calls for scalable solutions to accessing Web documents. To this purpose, various caching schemes have been proposed and caching has been widely deployed. Since most Web documents change very rarely, the issue of consistency, i.e., how to ensure access to the most recent version of a Web document, has received little attention. However, as the number of frequently changing documents and the number of users accessing these documents increase, it becomes mandatory to propose scalable techniques that ensure consistency. We look at one class of techniques that achieve consistency by performing automated delivery of Web documents. Among all schemes imaginable, automated delivery guarantees the lowest access latency for the clients. We compare pull- and push-based schemes for automated delivery and evaluate their performance analytically and via trace-driven simulation. We show that for both pull- and push-based schemes, the use of a caching infrastructure is important to achieve scalability. For most documents on the Web, a pull distribution with a caching infrastructure can efficiently implement automated delivery. However, when servers update their documents randomly and cannot ensure a minimum time-to-live interval during which documents remain unchanged, pull generates many requests to the origin server. For this case, we consider push-based schemes that use a caching infrastructure, and we present a simple algorithm to determine which documents should be pushed given limited available bandwidth.
A major problem on the Internet is the scalable dissemination of information. This problem is particularly acute exactly at the time when the scalability of data delivery is most important. One proposed solution to this scalability problem is to use multicast communication. However, allowing multicast communication introduces many nontrivial data management problems, such as caching, consistency, and scheduling. We have built a middleware that unifies and extends state-of-the-art data management methods and algorithms into one software distribution. Its flexible and extensible architecture is built from individual components that can be selected or replaced depending on the underlying multicast transport mechanism or on the application needs. Particular care has gone into the design of the algorithms to optimize the user-perceived level of service. We demonstrate our middleware within the context of the RODS application.
Organization, 2007
Large-scale, real-time multimedia distribution over the Internet has been the subject of research for a substantial amount of time. A large number of mechanisms, policies, methods and schemes have been proposed for media coding, scheduling and distribution. Internet Protocol (IP) multicast was expected to be the primary transport mechanism for this, though it was never deployed to the expected extent. Recent developments in overlay networks have reactualized the research on multicast, with the consequence that many of ...
2017 IFIP Networking Conference (IFIP Networking) and Workshops
IEEE Journal on Selected Areas in Communications, 2001
Concast is a network layer service that provides many-to-one channels: multiple sources send messages toward one destination, and the network delivers a single "merged" copy to that destination. As we have defined it, the service is generic but the relationship between the sent and received messages can be customized for particular applications. In this paper we describe the concast service and show how it can be implemented in a backwardcompatible manner in the Internet. We describe its use to solve a problem that has eluded scalable end-system-only solutions: collecting feedback in multicast applications. Our preliminary analysis of concast's effectiveness shows that it provides significant benefits, even with partial deployment. We argue that concast has the characteristics needed for a programmable service to be widely accepted and deployed in the Internet.
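The "merged copy" idea above hinges on an application-defined merge function applied at each router. A minimal sketch, with a hypothetical feedback-report format (receiver count plus worst observed loss) that is an assumption of this example, not the paper's:

```python
from functools import reduce

def merge_reports(a, b):
    """Application-defined concast merge: fold two feedback reports into
    one. This report shape (count + worst loss) is purely illustrative."""
    return {"receivers": a["receivers"] + b["receivers"],
            "max_loss": max(a["max_loss"], b["max_loss"])}

# Three receivers' feedback enters a router; a single merged copy leaves.
reports = [{"receivers": 1, "max_loss": 0.02},
           {"receivers": 1, "max_loss": 0.10},
           {"receivers": 1, "max_loss": 0.00}]
merged = reduce(merge_reports, reports)
print(merged)   # {'receivers': 3, 'max_loss': 0.1}
```

Because the merge is associative, routers can apply it pairwise in any order along the many-to-one tree, which is what keeps feedback traffic toward the source constant rather than linear in the receiver count.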
Multimedia Tools and Applications, 2014
This paper presents a novel architecture for scalable multimedia content delivery over wireless networks. The architecture takes into account both the user preferences and the context in order to provide personalized contents to each user. In this way, third-party applications filter the most appropriate contents for each client in each situation. One of the key characteristics of the proposal is its scalability, which is provided, apart from the use of filtering techniques, through transmission over multicast networks. In this sense, content delivery is carried out by means of the FLUTE (File Delivery over Unidirectional Transport) protocol, which provides reliability in unidirectional environments through mechanisms such as the AL-FEC (Application Layer Forward Error Correction) codes used in this paper. Another key characteristic is the context-awareness and personalization of content delivery, which is provided by means of context information, user profiles, and adaptation. The proposed system is validated through several empirical studies. Specifically, the paper presents two types of evaluation, collecting objective and subjective measures. The first evaluates the efficiency of the transmission protocol, analyzing how the use of appropriate transmission parameters reduces the download time (and thus increases the Quality of Experience), which can be further reduced by using caching techniques. On the other hand, the subjective measures present a study of the user experience after testing the application and analyze the accuracy of the filtering strategy. Results show that using AL-FEC mechanisms produces download times up to four times lower than when no protection is used. The results also show that there is a code rate that minimizes the download time depending on the losses and that, in general, code rates of 0.7 and 0.9 provide good download times for a wide range of losses. Finally, the subjective measures indicate high user satisfaction (more than 80%) and a relevant degree of accuracy of the content adaptation.
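The interplay between code rate and losses described above can be illustrated with a toy carousel model: the sender cycles n = k / code_rate encoded symbols, and an ideal MDS-like code is assumed, so any k distinct symbols decode the object. This is a simplification for illustration (it ignores real AL-FEC decoding overhead and FLUTE framing) and does not reproduce the paper's measurements.

```python
import random

def download_time(k, code_rate, loss, seed=7):
    """Slots until a receiver holds any k of the n encoded symbols,
    cycling the same n symbols (ideal MDS code assumed)."""
    rng = random.Random(seed)
    n = round(k / code_rate)
    have, slots = set(), 0
    while len(have) < k:
        sym = slots % n                  # carousel over the n symbols
        slots += 1
        if rng.random() >= loss:         # symbol survived the channel
            have.add(sym)
    return slots

k = 100
for r in (1.0, 0.9, 0.7, 0.5):
    print(f"code rate {r}: {download_time(k, r, loss=0.2)} slots")
```

Running this shows the qualitative effect from the abstract: with no protection (code rate 1.0) the receiver must wait for every last symbol across several cycles, while moderate code rates finish far sooner under the same loss.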
2005
This paper proposes a semi-reliable multicast protocol that aims to increase the quality of video streams transmitted in large-scale systems without overloading the video source and the communications network. This protocol, which is based on the IP multicast protocol and the MPEG standard, evaluates the necessity of retransmitting lost packets taking into account the capacity of the corresponding MPEG frames to improve the quality of the video stream.
Proceedings.Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies
While the advantages of multicast delivery over multiple unicast deliveries are undeniable, the deployment of the IP multicast protocol has been limited to "islands" of network domains under single administrative control. Deployment of inter-domain multicast delivery has been slow due to both technical and administrative reasons. In this paper we propose a Host Multicast Tree Protocol (HMTP) that (1) automates the interconnection of IP-multicast enabled islands and (2) provides multicast delivery to end hosts where IP multicast is not available. With HMTP, end hosts and proxy gateways of IP multicast-enabled islands can dynamically create shared multicast trees across different islands. Members of an HMTP multicast group self-organize into an efficient, scalable and robust multicast tree. The tree structure is adjusted periodically to accommodate changes in group membership and network topology. Simulation results show that the multicast tree has low cost, and data delivered over it experiences moderately low latency.
2002
We consider the problem of delivering popular streaming media to a large number of asynchronous clients. We propose and evaluate a cache-and-relay end-system multicast approach, whereby a client joining a multicast session caches the stream, and if needed, relays that stream to neighboring clients which may join the multicast session at some later time. This cache-and-relay approach is fully distributed, scalable, and efficient in terms of network link cost.
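The cache-and-relay assignment described above can be sketched as a greedy parent search: each arriving client attaches to the most recent earlier client whose cached stream prefix still covers it, and falls back to the origin server otherwise. This greedy rule and the fixed cache window are illustrative assumptions, not the paper's actual algorithm.

```python
def assign_sources(arrivals, cache_window):
    """For each client arrival time, pick a source: the origin server, or
    the most recent earlier client that arrived within `cache_window`
    seconds (so its cached prefix can still feed the newcomer)."""
    sources = []
    for i, t in enumerate(arrivals):
        parent = None
        for j in range(i - 1, -1, -1):   # scan earlier clients, newest first
            if t - arrivals[j] <= cache_window:
                parent = j
                break
        sources.append(parent if parent is not None else "server")
    return sources

# Clients at t=0,3,5,30 with a 10 s cache: the late client must hit the server.
print(assign_sources([0, 3, 5, 30], cache_window=10))
```

Note how the server serves only the first client of each burst; everyone else is fed from a neighbor's cache, which is where the network link-cost savings come from.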