2010, Networks
…
In this paper, we describe how to construct physical computer network topologies which can support the establishment of overlays that reduce or increase the distances between nodes. Reducing pairwise distances (i.e., compression) implies that the overlay enjoys significantly lower inter-node latencies compared to the ambient physical network; such an overlay can be used to implement a "high-performance mode" for disaster situations in which network responsiveness is of critical importance. On the other hand, increasing pairwise distances (i.e., expansion) implies that the overlay exhibits significantly higher inter-node latencies compared to the ambient physical network; such an overlay can be used to implement a brief "dilated state" in networks that have been infected by a malicious worm, where slowing down the infection spread allows greater time for antidote generation. We show that it is possible to design physical networks which support overlays whose logical link bandwidth is equal to the physical link bandwidth while providing arbitrarily high compression or expansion. We also show that it is possible to "grow" such networks over time in a scalable way, that is to say, it is possible to retain the compression/expansion properties while augmenting the network with new nodes, by making relatively small adjustments to the physical and overlay network structure.
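As a rough illustration of what compression and expansion mean here, the sketch below compares the mean pairwise hop distance of two graphs with networkx. It is a minimal sketch only: hop counts stand in for latency, the function names (e.g. `mean_pairwise_distance`) are illustrative, and the paper's actual constructions and its bandwidth model are not reproduced.

```python
# Minimal sketch: quantify how an overlay's pairwise distances compare to the
# ambient physical network's. Hop counts stand in for latency.
import itertools
import networkx as nx

def mean_pairwise_distance(graph, nodes):
    """Average shortest-path length (in hops) over all pairs of `nodes`."""
    pairs = list(itertools.combinations(nodes, 2))
    total = sum(nx.shortest_path_length(graph, u, v) for u, v in pairs)
    return total / len(pairs)

# Usage: given a physical graph and an overlay graph defined over the same
# node set, a ratio > 1 indicates compression, < 1 indicates expansion.
# ratio = (mean_pairwise_distance(physical, nodes) /
#          mean_pairwise_distance(overlay, nodes))
```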
2009
Topological worms, such as those that propagate by following links in an overlay network, have the potential to spread faster than traditional random-scanning worms: they have knowledge of a subset of the overlay nodes and target those nodes when propagating themselves, and they can also evade traditional detection mechanisms.
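The contrast the abstract draws can be made concrete with a toy simulation; everything below (graph size, degree, probe rate, round counts) is an arbitrary assumption for illustration, not taken from the paper.

```python
# Toy comparison: a worm that follows overlay neighbour lists versus a worm
# that probes random addresses in a large address space.
import random
import networkx as nx

def topological_spread(overlay, seed, rounds):
    """Each round, every infected node infects all of its overlay neighbours."""
    infected = {seed}
    for _ in range(rounds):
        infected |= {v for u in infected for v in overlay.neighbors(u)}
    return len(infected)

def random_scan_spread(n_hosts, address_space, seed_count, probes, rounds):
    """Each round, every infected host sends `probes` random probes; a probe
    hits a vulnerable host with probability n_hosts / address_space.
    (Approximation: re-hits on already-infected hosts are ignored.)"""
    infected = seed_count
    for _ in range(rounds):
        hits = sum(1 for _ in range(infected * probes)
                   if random.random() < n_hosts / address_space)
        infected = min(n_hosts, infected + hits)
    return infected

overlay = nx.random_regular_graph(8, 10_000)
print("topological:", topological_spread(overlay, 0, rounds=5))
print("scanning:   ", random_scan_spread(10_000, 2**32, 1, probes=100, rounds=5))
```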
Proceedings of the 2003 ACM workshop on Rapid Malcode - WORM'03, 2003
In this paper, we study the defensibility of large scale-free networks against malicious rapidly self-propagating code such as worms and viruses. We develop a framework to investigate the profiles of such code as it infects a large network. Based on these profiles and large-scale network percolation studies, we investigate features of networks that render them more or less defensible against worms. However, we wish to preserve mission-relevant features of the network, such as basic connectivity and resilience to normal nonmalicious outages. We aim to develop methods to help design networks that preserve critical functionality and enable more effective defenses.
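A hedged sketch of the kind of percolation experiment the abstract alludes to: build a scale-free graph, remove a fraction of nodes (for example, immunised hubs), and check how much connectivity survives. The graph model, removal policy, and parameters below are illustrative, not the paper's framework.

```python
# Percolation-style experiment on a scale-free graph: remove the highest-degree
# nodes and measure the surviving giant component as a connectivity proxy.
import networkx as nx

def giant_component_fraction(graph):
    if graph.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(graph), key=len)
    return len(largest) / graph.number_of_nodes()

def immunise_hubs(graph, fraction):
    """Remove the top `fraction` of nodes by degree; return the fraction of
    nodes left in the giant component."""
    g = graph.copy()
    k = int(fraction * g.number_of_nodes())
    hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:k]
    g.remove_nodes_from(n for n, _ in hubs)
    return giant_component_fraction(g)

g = nx.barabasi_albert_graph(5_000, 3)
for f in (0.0, 0.01, 0.05, 0.10):
    print(f"remove {f:.0%} highest-degree nodes -> "
          f"giant component {immunise_hubs(g, f):.2f}")
```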
Proc. IEEE ICN'04, 2004
Lately, peer-to-peer overlay networks and their ability to reflect the underlying network topology have been a focus of research. The main objective has been to reduce routing path lengths, which are stretched by the overlay routing process. Most existing solutions require some form of fixed infrastructure, such as so-called landmarks, or excessive message exchange to guarantee good overlay locality properties. Some solutions also deliberately give up an even distribution of overlay IDs when constructing an overlay network with locality information. This paper presents a topology-aware overlay network based on Pastry which does not rely on any fixed set of infrastructure nodes. Additionally, the approach presented here tries to construct the overlay with little communication overhead while still distributing overlay IDs as evenly as possible. Two bootstrap strategies were developed and analyzed, both explicitly designed to work in dynamic networks.
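The abstract does not describe the two bootstrap strategies themselves, so the sketch below only illustrates the generic idea of probing a few candidate peers and joining via the lowest-latency one instead of relying on fixed landmarks. The `probe_rtt` helper and its TCP-connect trick are assumptions made for illustration, not part of the paper.

```python
# Generic proximity-aware bootstrap sketch: probe a small sample of candidate
# peers and join through the one with the lowest measured delay.
import random
import socket
import time

def probe_rtt(host, port=80, timeout=1.0):
    """Rough RTT estimate via a TCP connect; a real overlay would use its own
    ping messages."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def pick_bootstrap(candidates, sample_size=5):
    sample = random.sample(candidates, min(sample_size, len(candidates)))
    return min(sample, key=probe_rtt)
```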
2007
Can one build, and efficiently use, networks of arbitrary size and topology using a "standard" node whose resources, in terms of memory and reliability, do not need to scale up with the complexity and size of the network? This thesis addresses two important aspects of this question. The first is whether one can achieve efficient connectivity despite the presence of a constant probability of faults per node/link. Efficient connectivity means (informally) having every pair of regions connected by a constant fraction of the independent, entirely non-faulty paths that would be present if the entire network were fault-free, even at distances where each path has only a vanishingly small probability of being fault-free. The answer is yes, as long as some very mild topological conditions on the high-level structure of the network are met: informally, the network must not be too "thin" and must not contain too many large "holes". The results go against some established "empirical wisdom" in the networking community. The second issue addressed by this thesis is whether one can route efficiently on a network of arbitrary size and topology using only a constant number c of bits/node (even if c is less than the logarithm of the network's size). Routing efficiently means (informally) that message delivery should only stretch the delivery path by a constant factor. The answer again is yes, as long as the volume of the network grows only polynomially with its radius (otherwise, we run into established lower bounds). This effectively captures every network one may build in a universe (like our own) with finite dimensionality using links of a fixed maximum length and nodes with a fixed minimum volume. The results extend the current results for compact routing, allowing one to route efficiently on a much larger class of networks than had previously been known, with many fewer bits.
Parallel and Distributed Systems, IEEE …, 2009
In unstructured peer-to-peer (P2P) networks, the overlay topology (or connectivity graph) among peers is a crucial component in addition to the peer/data organization and search. Topological characteristics have a profound impact on the efficiency of search on such unstructured P2P networks, as well as on other networks. A key limitation of scale-free (power-law) topologies is the high load (i.e., high degree) on a very small number of hub nodes. In a typical unstructured P2P network, peers are not willing to maintain high degrees/loads, as they may not want to store a large number of entries for construction of the overlay topology. Therefore, to achieve fairness and practicality among all peers, hard cutoffs on the number of entries are imposed by the individual peers, which limits the scale-freeness of the overall topology; hence, limited scale-free networks. Thus, one would expect the efficiency of flooding search to decrease as the hard cutoff decreases. We investigate the construction of scale-free topologies with hard cutoffs (i.e., without any major hubs) and the effect of these hard cutoffs on search efficiency. Interestingly, we observe that the efficiency of normalized flooding and random walk search algorithms increases as the hard cutoff decreases.
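To make the setup concrete, here is a rough sketch (not the paper's construction) of growing a preferential-attachment graph whose maximum degree is capped by a hard cutoff, together with a plain random-walk search whose hop count one could compare under different cutoffs.

```python
# Preferential attachment with a hard degree cutoff, plus a blind random-walk
# search for measuring lookup cost. Both are illustrative stand-ins.
import random
import networkx as nx

def capped_scale_free(n, m, cutoff, seed=None):
    """BA-style growth, but refuse attachments to nodes already at `cutoff`.
    Very tight cutoffs can leave some late nodes isolated."""
    rng = random.Random(seed)
    g = nx.complete_graph(m + 1)
    for new in range(m + 1, n):
        eligible = [v for v in g if g.degree(v) < cutoff]
        weights = [g.degree(v) + 1 for v in eligible]
        targets = set()
        while len(targets) < m and len(targets) < len(eligible):
            targets.add(rng.choices(eligible, weights=weights, k=1)[0])
        g.add_node(new)
        g.add_edges_from((new, t) for t in targets)
    return g

def random_walk_hops(g, source, target, max_hops=10_000, rng=random):
    """Hop count for a blind random walk to reach `target` (capped)."""
    node, hops = source, 0
    while node != target and hops < max_hops:
        neighbours = list(g.neighbors(node))
        if not neighbours:          # possible under very tight cutoffs
            break
        node = rng.choice(neighbours)
        hops += 1
    return hops
```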
ACM SIGCOMM Computer Communication Review, 2002
This thesis describes the design, implementation, and evaluation of a Resilient Overlay Network (RON), an architecture that allows end-to-end communication across the wide-area Internet to detect and recover from path outages and periods of degraded performance within several seconds. A RON is an application-layer overlay on top of the existing Internet routing substrate. The overlay nodes monitor the liveness and quality of the Internet paths among themselves, and they use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. We demonstrate the potential benefits of RON by deploying and measuring a working RON with nodes at thirteen sites scattered widely over the Internet. Over a 71-hour sampling period in March 2001, there were 32 significant outages, each lasting over thirty minutes, between the 156 communicating pairs of RON nodes. RON's routing mechanism was able to detect, recover from, and route around all of them, showing that there is, in fact, physical path redundancy in the underlying Internet in many cases. RONs are also able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 1% of the transfers doubled their TCP throughput and 5% of the transfers saw their loss rate reduced by 5% in absolute terms. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end-systems.
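The core RON decision, route directly or via one intermediate overlay node, can be sketched in a few lines. The latency-matrix representation and the names below are illustrative, not RON's actual implementation, which also tracks loss and throughput metrics.

```python
# Sketch of one-hop overlay route selection from a table of measured latencies.
def best_one_hop_route(latency, src, dst):
    """latency: dict mapping (a, b) -> measured latency, inf if the path is down.
    Returns (route, cost), where route is (src, dst) or (src, hop, dst)."""
    best_route = (src, dst)
    best_cost = latency.get((src, dst), float("inf"))
    for hop in {a for a, _ in latency} - {src, dst}:
        cost = (latency.get((src, hop), float("inf")) +
                latency.get((hop, dst), float("inf")))
        if cost < best_cost:
            best_route, best_cost = (src, hop, dst), cost
    return best_route, best_cost

# Example: the direct path is down, but a detour through 'b' works.
lat = {("a", "c"): float("inf"), ("a", "b"): 20.0, ("b", "c"): 25.0}
print(best_one_hop_route(lat, "a", "c"))   # (('a', 'b', 'c'), 45.0)
```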
Security and Communication Networks, 2012
In this paper, we build on a recent worm propagation stochastic model in which random effects during worm spreading are modeled by means of a stochastic differential equation. On the basis of this model, we introduce the notion of the critical size of a network, which is the least size of a network that needs to be monitored in order to correctly project the behavior of a worm in substantially larger networks. We provide a method for the theoretical estimation of the critical size of a network with respect to a worm with specific characteristics. Our motivation is the requirement in real systems to balance the needs for accuracy (i.e., monitoring a network of sufficient size in order to reduce false alarms) and performance (i.e., monitoring a small-scale network to reduce complexity). In addition, we run simulation experiments in order to experimentally validate our arguments. Finally, based on the notion of critical-sized networks, we propose a logical framework for a distributed early-warning system against unknown and fast-spreading worms. In the proposed framework, the propagation parameters of an early detected worm are estimated in real time by studying a critical-sized network. In this way, security is enhanced, as estimations generated by a critical-sized network may help large-scale networks respond faster to new worm threats.
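The abstract builds on an SDE model of worm spreading; as a hedged illustration only, the sketch below integrates a logistic-growth SDE with the Euler-Maruyama scheme. The drift and diffusion terms and all parameter values are assumptions for illustration, not the paper's model.

```python
# Euler-Maruyama simulation of a logistic worm-growth SDE with multiplicative noise:
#   dI = beta*I*(1 - I/N) dt + sigma*I*(1 - I/N) dW
import math
import random

def simulate_worm(n_hosts, beta, sigma, i0, t_end, dt, rng=random):
    """Return the infected-count trajectory over time."""
    steps = int(t_end / dt)
    infected = float(i0)
    trajectory = [infected]
    for _ in range(steps):
        frac_susceptible = 1.0 - infected / n_hosts
        drift = beta * infected * frac_susceptible * dt
        noise = sigma * infected * frac_susceptible * rng.gauss(0.0, math.sqrt(dt))
        infected = min(n_hosts, max(0.0, infected + drift + noise))
        trajectory.append(infected)
    return trajectory

# Trajectories for a small monitored network and a much larger one, same parameters.
small = simulate_worm(n_hosts=10_000, beta=0.8, sigma=0.2, i0=5, t_end=20, dt=0.01)
large = simulate_worm(n_hosts=1_000_000, beta=0.8, sigma=0.2, i0=5, t_end=20, dt=0.01)
print(small[-1], large[-1])
```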
2012
Gossip, or epidemic, protocols have emerged as a highly scalable and resilient approach to implementing several application-level services such as reliable multicast, data aggregation, and publish-subscribe, among others. All these protocols organize nodes in an unstructured random overlay network. In many cases, it is interesting to bias the random overlay in order to optimize some efficiency criterion, for instance, to reduce the stretch of overlay routing.
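As background for the gossip protocols mentioned, a minimal push-gossip round over an unstructured overlay might look like the sketch below; the fanout value and the uniform neighbour choice are illustrative defaults, not a specific protocol from the paper.

```python
# Minimal push-gossip dissemination over a random overlay graph.
import random
import networkx as nx

def push_gossip(overlay, source, fanout=3, rounds=10, rng=random):
    """Each round, every informed node pushes the message to `fanout` randomly
    chosen overlay neighbours. Returns the informed-node count per round."""
    informed = {source}
    history = [len(informed)]
    for _ in range(rounds):
        new = set()
        for node in informed:
            neighbours = list(overlay.neighbors(node))
            for peer in rng.sample(neighbours, min(fanout, len(neighbours))):
                new.add(peer)
        informed |= new
        history.append(len(informed))
    return history

overlay = nx.gnm_random_graph(1_000, 5_000)   # stand-in for an unstructured overlay
print(push_gossip(overlay, source=0))
```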
2009
The Internet has scaled massively over the past 15 years to reach billions of users. These users increasingly require extensive applications and capabilities from the Internet, such as Quality of Service (QoS) optimized paths between end hosts. When default Internet paths do not meet these requirements adequately, there is a need to facilitate the discovery of QoS-optimized paths. Fortunately, even when the route offered by the Internet does not work (to the required level of performance), alternate routes that do work often exist. When the direct Internet path between two hosts is sub-optimal (according to a specific user-defined criterion), for instance, the direct paths from both hosts to a third host may not suffer from the same problem, owing to path disjointness. Overlay networks facilitate the discovery of such composite alternate paths through third-party hosts. To discover such alternate paths, overlay hosts regularly monitor Internet path quality and choose better alternate paths via other hosts. Such measurements are costly and pose scalability problems for large overlay networks. This thesis asserts and shows that these overheads could be lowered substantially if network-layer path information between overlay hosts could be obtained, since such information facilitates the selection of disjoint paths. This thesis further demonstrates that obtaining such network-layer path information is very challenging. As opposed to path monitoring, which only requires the cooperation of overlay hosts, disjoint path selection depends on the accuracy of information about the underlay, which is outside the overlay's domain of control and so may contain inaccuracies. This thesis investigates how such information could be gleaned at different granularities for optimal tradeoffs between spatial and/or temporal methods for the selection of alternate paths. The main contributions of this thesis are: (i) investigation of scalable techniques to facilitate alternate path computation using network-layer path information; (ii) a review of the realistic performance gains achievable using such alternate paths; and (iii) investigation of techniques for revealing the presence of incorrect network-layer path information, and proposal of new techniques for its removal.
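A small sketch of the link-disjointness check that motivates the thesis: given router-level paths represented as node lists, measure how much a composite path through a third host overlaps with the direct path. The paths and names below are hand-made placeholders, not data from the thesis.

```python
# Compute link-level disjointness between a direct path and a composite
# alternate path spliced through a third (relay) host.
def shared_links(path_a, path_b):
    links_a = {frozenset(edge) for edge in zip(path_a, path_a[1:])}
    links_b = {frozenset(edge) for edge in zip(path_b, path_b[1:])}
    return links_a & links_b

def disjointness(direct, leg1, leg2):
    """1.0: the composite path shares no link with the direct path;
    0.0: every direct-path link is reused."""
    composite = leg1[:-1] + leg2            # splice the two legs at the relay
    overlap = shared_links(direct, composite)
    direct_links = len(direct) - 1
    return 1.0 - len(overlap) / direct_links if direct_links else 1.0

direct = ["src", "r1", "r2", "dst"]
leg1   = ["src", "r3", "relay"]
leg2   = ["relay", "r4", "dst"]
print(disjointness(direct, leg1, leg2))     # 1.0: fully link-disjoint
```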
Proc. of the Current Trends in Informatics: The 11th Panhellenic Conf. on Informatics (PCI 2007). Athens: New Technologies Publications, 2007
Due to ethical and practical limitations, simulations are the de facto approach to measuring and predicting the propagation of malicious code on the Internet. A crucial part of every simulation is the network graph used to perform the experiments. Though recent evidence has brought to light the nature of many technological and socio-technical networks, such as web link graphs, the physical connectivity of the Internet, and e-mail correspondence networks, we argue that the interpretation of these findings has to be strongly ...