Papers by Guillaume Pierre
Lecture Notes in Computer Science, 2015
Replicating Web documents at a worldwide scale can help reduce user-perceived latency and wide-area network traffic. This paper presents the design and implementation of Globule, a platform that allows Web server administrators to organize a decentralized replication service by trading Web hosting resources with each other. Globule automates all aspects of such replication: document replication, selection of the most appropriate replication strategies on a per-document basis, consistency management and transparent redirection of clients to replicas. To facilitate the transition from a non-replicated server to a replicated one, we designed Globule as a module for the Apache Web server. Therefore, converting Web documents should require no more than compiling a new module into Apache and editing a configuration file.
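As a rough illustration of the kind of transparent client redirection Globule automates, the sketch below picks a replica for a client and answers with an HTTP 302. The replica hosts, coordinates, and proximity metric are all made-up assumptions, not Globule's actual mechanism (which operates inside Apache).

```python
# Illustrative sketch of redirecting a client to the "closest" replica.
# Replica addresses and coordinates are hypothetical.

REPLICAS = {
    "eu.example.org": {"lat": 48.8, "lon": 2.3},
    "us.example.org": {"lat": 40.7, "lon": -74.0},
}

def proximity(client_coord, replica_coord):
    """Crude proximity proxy: squared coordinate distance."""
    return ((client_coord[0] - replica_coord["lat"]) ** 2 +
            (client_coord[1] - replica_coord["lon"]) ** 2)

def pick_replica(client_coord):
    """Return the replica host estimated to be closest to the client."""
    return min(REPLICAS, key=lambda h: proximity(client_coord, REPLICAS[h]))

def redirect(path, client_coord):
    """Build an HTTP 302 response pointing at the chosen replica."""
    host = pick_replica(client_coord)
    return f"HTTP/1.1 302 Found\r\nLocation: http://{host}{path}\r\n\r\n"

print(redirect("/index.html", (52.3, 4.9)))  # Amsterdam client -> eu replica
```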
Many peer-to-peer systems would benefit from establishing a trust relationship between nodes prior to any cooperation. We propose to derive such trust from a network of recommendations. We show that the error rate of our trust model can be kept low, although not totally negligible.
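A minimal sketch of one way trust could be derived from a recommendation network: treat recommendations as weighted edges and take the best-path product as the derived trust in a stranger. The graph and values are invented; this is a generic illustration, not the exact model analyzed in the paper.

```python
# Derive trust from a recommendation network: trust along a path is the
# product of pairwise recommendation values; trust in a stranger is the
# best such path (Dijkstra with multiplication instead of addition).

import heapq

# recommendations[a][b] = how much a trusts b, in [0, 1] (made-up values)
recommendations = {
    "alice": {"bob": 0.9, "carol": 0.6},
    "bob": {"dave": 0.8},
    "carol": {"dave": 0.5},
    "dave": {},
}

def derive_trust(source, target):
    """Best-path trust from source to target, 0.0 if unreachable."""
    best = {source: 1.0}
    heap = [(-1.0, source)]
    while heap:
        neg_t, node = heapq.heappop(heap)
        if node == target:
            return -neg_t
        for nbr, rec in recommendations[node].items():
            t = -neg_t * rec
            if t > best.get(nbr, 0.0):
                best[nbr] = t
                heapq.heappush(heap, (-t, nbr))
    return 0.0  # no recommendation path: no derived trust

print(derive_trust("alice", "dave"))  # 0.9 * 0.8 = 0.72
```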
Web Information Systems and Technologies, 2000
This paper presents the design of a decentralized system for hosting large-scale wiki web sites like Wikipedia, using a collaborative approach. Our design focuses on distributing the pages that compose the wiki across a network of nodes provided by individuals and organizations willing to collaborate in hosting the wiki. We present algorithms for placing the pages so that the capacity of the nodes is not exceeded and the load is balanced, and algorithms for routing client requests to the appropriate nodes. We also address fault tolerance and security issues.
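As a toy illustration of capacity-aware page placement, the greedy sketch below assigns each page to the node with the most headroom that can still fit it. Node capacities and page loads are invented, and the paper's actual placement and routing algorithms are more elaborate.

```python
# Greedy page placement: biggest pages first, each on the least-loaded
# node (relative to capacity) that still has room for it.

nodes = {"node-a": 100, "node-b": 60, "node-c": 40}   # capacity units
load = {n: 0 for n in nodes}                          # current load
placement = {}

pages = [("Main_Page", 30), ("Physics", 20), ("History", 25), ("Cats", 15)]

for page, page_load in sorted(pages, key=lambda p: -p[1]):
    # Candidates are the nodes whose capacity would not be exceeded.
    candidates = [n for n in nodes if load[n] + page_load <= nodes[n]]
    if not candidates:
        raise RuntimeError(f"no node has capacity for {page}")
    target = min(candidates, key=lambda n: load[n] / nodes[n])
    placement[page] = target
    load[target] += page_load

print(placement)
```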
Designing a distributed cache infrastructure to improve Web performance for the users of a large-scale organization is a difficult task. It is hard to determine how many caches are required, how to size each one, where to place them, and how they should cooperate.
Oléron allows sharing of small sets of documents among working groups, as well as efficient World Wide Web browsing in large-scale mobile environments. The target environments provide diverse quality of service: bandwidth, latency, disconnections, storage space, etc. Oléron is designed as a set of extended cooperative proxy caches which provide various mechanisms that deal with this diversity. Configuration is dynamic and depends on the current environment, in order to provide …
Caching and replication techniques can improve Web latency while reducing network traffic and balancing load among servers. However, no single strategy is optimal for replicating all documents: depending on its access pattern, each document should use the policy that suits it best. This paper presents an architecture for adaptive replicated documents. Each adaptive document monitors its access pattern and uses it to determine which strategy it should follow. When a change is detected in its access pattern, it re-evaluates its strategy to adapt to the new conditions. Adaptation comes at an acceptable cost considering the benefits of per-document replication strategies.
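A minimal sketch of the adaptive-document idea: each document counts reads and writes and periodically re-evaluates its replication strategy. The strategy names, thresholds, and re-evaluation period are illustrative assumptions, not the paper's values.

```python
# Per-document adaptive replication: monitor the access pattern and
# switch strategy when the observed read/write mix changes.

class AdaptiveDocument:
    def __init__(self, name):
        self.name = name
        self.reads = 0
        self.writes = 0
        self.strategy = "no-replication"

    def record(self, op):
        """Monitor the access pattern (reads vs. writes)."""
        if op == "read":
            self.reads += 1
        else:
            self.writes += 1
        if (self.reads + self.writes) % 100 == 0:  # re-evaluate periodically
            self.reevaluate()

    def reevaluate(self):
        """Pick the strategy that suits the observed pattern best."""
        ratio = self.reads / max(1, self.writes)
        old = self.strategy
        if ratio > 50:
            self.strategy = "full-replication"      # read-mostly: push copies
        elif ratio > 5:
            self.strategy = "cache-with-invalidation"
        else:
            self.strategy = "no-replication"        # write-heavy: keep master
        if self.strategy != old:
            print(f"{self.name}: {old} -> {self.strategy}")

doc = AdaptiveDocument("/index.html")
for _ in range(99):
    doc.record("read")
doc.record("write")   # 100th access triggers a re-evaluation
```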
Lecture Notes in Computer Science, 2009
BitTorrent users and consumer ISPs are often pictured as having opposite interests, with end-users aggressively trying to improve their download times, while ISPs throttle this traffic to reduce their costs. However, inefficiencies in both download time and ...

Improving resource provisioning of heterogeneous cloud infrastructures is an important research challenge. The wide diversity of cloud-based applications and customers with different QoS requirements has recently exposed the weaknesses of current provisioning systems. Today's cloud infrastructures provide provisioning systems that dynamically adapt the computational power of applications by adding or releasing resources. Unfortunately, these scaling systems are fairly limited: (i) they restrict themselves to a single type of resource; (ii) they are unable to fulfill QoS requirements in the face of spiky workloads; and (iii) they offer the same QoS level to all their customers, independent of customer preferences such as different levels of service availability and performance. In this paper, we present an autoscaling system that overcomes these limitations by exploiting heterogeneous types of resources and by defining multiple levels of QoS requirements. The proposed system selects a resource scaling plan according to both workload and customer requirements. Our experiments, conducted on both public and private infrastructures, show significant reductions in QoS-level violations when faced with highly variable workloads.
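The sketch below illustrates the flavor of such scaling-plan selection: given an expected workload and a customer's QoS class, pick a cheap mix of heterogeneous resource types with class-dependent headroom. The resource types, prices, and QoS classes are made-up assumptions, not the system's actual catalog.

```python
# Choose a scaling plan over heterogeneous resources according to the
# workload and the customer's QoS class.

RESOURCES = {                       # capacity in requests/sec, hourly cost
    "small-vm": {"capacity": 100, "cost": 0.05},
    "large-vm": {"capacity": 450, "cost": 0.20},
}

QOS_HEADROOM = {"gold": 1.5, "silver": 1.2, "bronze": 1.0}  # over-provisioning

def scaling_plan(expected_rps, qos_class):
    """Greedy mix of resources covering the workload plus QoS headroom."""
    target = expected_rps * QOS_HEADROOM[qos_class]
    plan = {}
    # Fill with the resource offering the best capacity per unit cost first.
    for name in sorted(RESOURCES,
                       key=lambda r: RESOURCES[r]["cost"] / RESOURCES[r]["capacity"]):
        cap = RESOURCES[name]["capacity"]
        count = int(target // cap)
        plan[name] = count
        target -= count * cap
    if target > 0:                  # cover the remainder with one small unit
        plan["small-vm"] += 1
    return plan

print(scaling_plan(expected_rps=800, qos_class="gold"))
# -> {'large-vm': 2, 'small-vm': 3}
```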

Hosting a Web site at a single server creates performance and reliability issues when request load increases, availability is at stake, and, in general, when quality-of-service demands rise. A common approach to these problems is making use of a content delivery network (CDN) that supports distribution and replication of (parts of) a Web site. The nodes of such networks are dispersed across the Internet, allowing clients to be redirected to a nearest copy of a requested document, or to balance access loads among several servers. Also, if documents are replicated, availability of a site increases. The design space for constructing a CDN is large and involves decisions concerning replica placement, client redirection policies, but also decentralization. We discuss the principles of various types of distributed Web hosting platforms and show where tradeoffs need to be made when it comes to supporting robustness, flexibility, and performance.
Self-adaptive systems typically rely on a closed control loop which detects when the current behavior deviates too much from the optimal one, determines new optimal values for system parameters, and applies changes to the system configuration. In decentralized systems, ...
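A minimal sketch of such a closed control loop, assuming a latency metric with an invented target and tolerance: monitor the system, detect excessive deviation, compute a new replica count, and apply it.

```python
# Closed control loop: monitor -> detect deviation -> plan -> apply.
# The metric, target, and tolerance are illustrative assumptions.

import random

TARGET_LATENCY_MS = 200
TOLERANCE = 0.2          # accept +/- 20% deviation before reacting

def monitor():
    """Stand-in for real measurements of system behavior."""
    return random.uniform(100, 400)

def plan(observed, replicas):
    """Compute a new configuration from the observed deviation."""
    if observed > TARGET_LATENCY_MS * (1 + TOLERANCE):
        return replicas + 1      # too slow: add capacity
    if observed < TARGET_LATENCY_MS * (1 - TOLERANCE) and replicas > 1:
        return replicas - 1      # overprovisioned: release capacity
    return replicas

replicas = 1
for step in range(5):
    latency = monitor()
    new_replicas = plan(latency, replicas)
    if new_replicas != replicas:  # apply the change to the configuration
        print(f"step {step}: {latency:.0f} ms, replicas {replicas} -> {new_replicas}")
        replicas = new_replicas
```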