Papers by Luís Henrique Maciel Kosmalski Costa
Computer Networks, Feb 1, 2012
This work proposes the Wireless-mesh-network Proactive Routing (WPR) protocol for wireless mesh networks, which are typically employed to provide backhaul access. WPR computes routes based on link states and, unlike current routing protocols, it uses two algorithms to improve communications in wireless mesh networks, taking advantage of traffic concentration on links close to the network gateways. WPR introduces a controlled-flooding algorithm to reduce routing control overhead by considering the network topology similar to a tree. The main goal is to improve overall efficiency by saving network resources and avoiding network bottlenecks. In addition, WPR avoids redundant messages by selecting a subset of one-hop neighbors, the AMPR
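The neighbor-subset idea is in the same spirit as OLSR's multipoint relays (MPRs): pick the fewest one-hop neighbors that still cover every two-hop neighbor, and let only those relay floods. Below is a minimal greedy sketch of that classic heuristic; the function name and data layout are illustrative, not taken from the WPR paper.

```python
def select_relays(one_hop, two_hop_of):
    """Greedy MPR-style relay selection (illustrative sketch).

    one_hop    -- set of one-hop neighbor ids
    two_hop_of -- dict: one-hop neighbor id -> set of two-hop
                  neighbors reachable through it
    Returns the subset of one-hop neighbors chosen as relays.
    """
    uncovered = set().union(*two_hop_of.values()) if two_hop_of else set()
    relays = set()
    while uncovered:
        candidates = one_hop - relays
        if not candidates:
            break
        # Pick the neighbor covering the most still-uncovered
        # two-hop neighbors (classic greedy set cover).
        best = max(candidates,
                   key=lambda n: len(two_hop_of.get(n, set()) & uncovered))
        gained = two_hop_of.get(best, set()) & uncovered
        if not gained:
            break  # remaining two-hop nodes unreachable via one-hop set
        relays.add(best)
        uncovered -= gained
    return relays

# Example: B and C together cover all two-hop neighbors.
print(select_relays({"A", "B", "C"},
                    {"A": {"X"}, "B": {"X", "Y"}, "C": {"Z"}}))
```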

IEEE Communications Surveys and Tutorials, 2014
We review the main challenges and survey promising techniques for network interconnection in the Internet of the future. To this end, we first discuss the shortcomings of the Internet's current model. Many of them are consequences of unforeseen demands on the original Internet design, such as mobility, multihoming, multipath, and network scalability. These challenges have attracted significant research efforts in recent years because of both their relevance and complexity. In this survey, for the sake of completeness, we cover several new protocols for network interconnection, spanning both incremental deployments (evolutionary approach) and radical proposals to redesign the Internet from scratch (clean-slate approach). We focus on specific proposals for future internetworking such as Loc/ID split, flat routing, network mobility, multipath and content-based routing, path programmability, and Internet scalability. Although there is no consensus on the future internetworking approach, requirements such as security, scalability, and incremental deployment are often considered.

Computer Networks, Dec 1, 2015
A key strategy to build disaster-resilient clouds is to employ backups of virtual machines in a geo-distributed infrastructure. Today, the continuous and acknowledged replication of virtual machines to different servers is a service provided by several hypervisors. This strategy guarantees that the virtual machines lose no disk or memory content if a disaster occurs, at the cost of strict bandwidth and latency requirements. Considering this kind of service, in this work we propose an optimization problem to place servers in a wide area network. The goal is to guarantee that backup machines do not fail at the same time as their primary counterparts. In addition, by using virtualization, we also aim to reduce the number of backup servers required. The optimal results, achieved on real topologies, reduce the number of backup servers by at least 40%. Moreover, this work highlights several characteristics of the backup service according to the employed network, such as the fulfillment of latency requirements.
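The abstract does not spell out the formulation, but the placement problem has the shape of a classic facility-location program. Purely as a hedged illustration (the variables and constraints below are our assumptions, not the authors' exact model), one could write:

```latex
% Hypothetical ILP sketch: x_{ij}=1 if the backup of primary i is
% placed on candidate server j; y_j=1 if server j is deployed.
\begin{aligned}
\min_{x,y} \quad & \textstyle\sum_{j \in S} y_j \\
\text{s.t.} \quad & \textstyle\sum_{j \in S} x_{ij} = 1
    && \forall i \in V \quad \text{(every primary gets one backup)} \\
& x_{ij} \le y_j && \forall i \in V,\ j \in S \\
& x_{ij} = 0 && \forall (i,j):\ \mathrm{zone}(i) = \mathrm{zone}(j)
    \ \text{or}\ \ell_{ij} > L_{\max} \\
& \textstyle\sum_{i \in V} x_{ij} \le C\, y_j && \forall j \in S
    \quad \text{(server capacity)}
\end{aligned}
```

The third constraint captures the paper's stated goal in this shape: a backup may not share a disaster zone with its primary, nor exceed the latency bound that synchronous replication imposes.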

Springer eBooks, Nov 29, 2006
A wireless ad hoc network is a collection of wireless nodes that can dynamically self-organize into an arbitrary and temporary topology to form a network without necessarily using any pre-existing infrastructure. These characteristics make ad hoc networks well suited for military activities, emergency operations, and disaster recovery. Nevertheless, as electronic devices get smaller, cheaper, and more powerful, the mobile market is rapidly growing and, as a consequence, seamlessly internetworking people and devices becomes mandatory. New wireless technologies enable easy deployment of commercial applications for ad hoc networks. The design of an ad hoc network has to take into account several interesting and difficult problems caused by noisy, limited-range, and insecure wireless transmissions, in addition to mobility and energy constraints. This paper presents an overview of issues related to medium access control (MAC), routing, and transport in wireless ad hoc networks, along with techniques proposed to improve protocol performance. Research activities and problems requiring further work are also presented. Finally, the paper presents a project concerning an ad hoc network to easily deploy Internet services in low-income housing areas, fostering digital inclusion.
The Twentieth International Offshore and Polar Engineering Conference, Jun 20, 2010
The traditional approach to ocean data acquisition, based on the deployment of battery-operated stations with sensors that record data over some programmed time for later recovery, has several drawbacks that may be overcome with the use of Underwater Sensor Networks (UWSN). In this work we investigate the feasibility of UWSN for deep-ocean data acquisition. The limitations of the acoustic channel are discussed and taken into account to analyze the feasibility of this class of network for one important application, deep-ocean current monitoring. We also propose a method for UWSN synchronization based on tide variations.
This paper proposes a deepwater monitoring system built with sensors distributed over the subsea pipelines responsible for transporting oil production. Data transmission is undertaken by underwater acoustic modems installed on the sensors and on the vessels used for logistic support of oil exploration. However, the vessels may not be within sensor range at all times, requiring the use of a DTN (Delay/Disruption Tolerant Network). This work investigates the Prophet and Epidemic routing protocols, analyzing system behavior with the ONE (Opportunistic Network Environment) simulator in a scenario compatible with the Campos Basin, a Brazilian oil exploration area.
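PRoPHET's core idea, delivery predictability, is simple enough to sketch. The update rules below follow the commonly cited formulation by Lindgren et al. (encounter update, aging, and transitivity); the constants and class layout are conventional defaults and illustrative choices, not code or parameters from the paper.

```python
# Hedged sketch of PRoPHET delivery-predictability bookkeeping.
# P_INIT, GAMMA, BETA are the usual defaults, not the paper's values.
P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25

class ProphetNode:
    def __init__(self):
        self.p = {}  # peer id -> delivery predictability in [0, 1]

    def age(self, elapsed_units):
        """Decay all predictabilities: P <- P * gamma^k."""
        factor = GAMMA ** elapsed_units
        for peer in self.p:
            self.p[peer] *= factor

    def on_encounter(self, other_id, other_node):
        """Update on meeting other_id: direct rule, then transitivity."""
        old = self.p.get(other_id, 0.0)
        self.p[other_id] = old + (1.0 - old) * P_INIT
        # Transitivity: destinations other_id meets often become more
        # reachable through it (a real node would also skip its own id).
        for dest, p_bc in other_node.p.items():
            if dest == other_id:
                continue
            old_ac = self.p.get(dest, 0.0)
            self.p[dest] = old_ac + (1.0 - old_ac) * self.p[other_id] * p_bc * BETA

# Forwarding rule: hand a bundle to the encountered peer only if its
# predictability for the destination exceeds ours. Epidemic routing,
# by contrast, replicates to every peer it meets.
```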
Vehicular Communications, 2016
Most cities in the Amazon have no data communication infrastructure, and rivers are often the only means of access connecting small cities to urban centers. In this paper, we investigate the deployment of vehicular ad hoc networks (VANETs) formed by boats along the rivers of the Amazon.

Multicast Listener Discovery Version 2 (MLDv2) for IPv6. Status of this Memo: This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited.

2014 IEEE Global Communications Conference, 2014
A hot topic in data center design is to envision geo-distributed architectures spanning a few sites across wide area networks, allowing more proximity to the end users and higher survivability, defined as the capacity of a system to operate after failures. As a shortcoming, this approach is subject to increased latency between servers, caused by their geographic distance. In this paper, we address the trade-off between latency and survivability in geo-distributed data centers through the formulation of an optimization problem. Simulations considering realistic scenarios show that the latency increase is significant only in the case of very strong survivability requirements, whereas it is negligible for moderate survivability requirements. For instance, the worst-case latency is less than 4 ms when guaranteeing that 80% of the servers are available after a failure, in a network where the latency could be up to 33 ms.
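Again as a hedged sketch (the notation is ours, not the paper's), the trade-off can be captured by minimizing worst-case inter-server latency subject to a survivability floor:

```latex
% Hypothetical sketch: z_{is}=1 if server i is placed at site s;
% after losing any single site f, a fraction beta of the N servers
% must survive.
\begin{aligned}
\min_{z} \quad & \max_{i,k}\ \ell_{\mathrm{site}(i),\,\mathrm{site}(k)} \\
\text{s.t.} \quad & \textstyle\sum_{s} z_{is} = 1 \quad \forall i, \\
& \textstyle\sum_{i}\sum_{s \ne f} z_{is} \ \ge\ \beta N \quad \forall f
\end{aligned}
```

Raising $\beta$ forces servers to spread across more distant sites, which is exactly the latency-survivability tension the abstract quantifies.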
2011 IEEE Symposium on Computers and Communications (ISCC), 2011
As of today, many routing protocols for wireless mesh networks have been proposed. Nevertheless, few of them take the high loss rate of control packets into account. This work analyzes the problem of consistent routing information among wireless network nodes. To accomplish this, we propose a metric to evaluate the level of inconsistency among routing tables. Our experimental analysis demonstrates that the high loss rates seen in indoor environments negatively influence route computation. In addition, we demonstrate that the high network dynamics lead to severe instability in next-hop selection. Results show that the effect of loss is significant and that simply manipulating routing protocol configuration parameters may not be enough to cope with the problem.
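The abstract names an inconsistency metric without defining it. Purely as an illustrative stand-in (our own construction, not the paper's metric), one can count, over all nodes and destinations, how often a node's chosen next hop disagrees with that next hop's own view of the route:

```python
# Illustrative (not the paper's) inconsistency measure: for every
# node u and destination d, follow u's next hop v and flag the pair
# as inconsistent if v has no route to d or routes back through u.
def inconsistency(tables):
    """tables: dict node -> dict destination -> next hop."""
    checked = inconsistent = 0
    for u, table in tables.items():
        for d, v in table.items():
            if d == u:
                continue
            checked += 1
            hop_back = tables.get(v, {}).get(d)
            if v != d and (hop_back is None or hop_back == u):
                inconsistent += 1
    return inconsistent / checked if checked else 0.0

# Example: A and B each point at the other for destination C -> loop,
# so both routes are flagged and the metric is 1.0.
print(inconsistency({"A": {"C": "B"}, "B": {"C": "A"}, "C": {}}))
```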
Communications: Wireless in Developing Countries and Networks of the Future, 2010
Migration is an important feature for network virtualization because it allows the reallocation of virtual resources over the physical resources. In this paper, we investigate the characteristics of different migration models, according to their virtualization platforms. We show the main advantages and limitations of the migration mechanisms provided by the Xen and OpenFlow platforms. We also propose a migration model for Xen, using data and control plane separation, which outperforms the standard Xen migration. We developed two prototypes, using Xen and OpenFlow, and performed evaluation experiments to measure the impact of network migration on traffic forwarding.

2011 International Conference on the Network of the Future, 2011
Network testbeds strongly rely on virtualization, which allows the simultaneous execution of multiple protocol stacks but also increases the management and control tasks. This paper presents a system to control and manage virtual networks based on the Xen platform. The goal of the proposed system is to assist network administrators in decision making in this challenging virtualized environment. The management and control tasks consist of defining virtual networks; turning virtual routers on, turning them off, and migrating them; and monitoring the virtual networks within a few mouse clicks, thanks to a user-friendly graphical interface. The administrator can also make high-level decisions, such as redefining the virtual network topology by using the plane-separation and loss-free live migration functionality, or saving energy by shutting down physical routers. Our performance tests show that the system has a low response time; for instance, less than 3 minutes to create 4-node virtual networks.

Lecture Notes in Computer Science, 2000
Quality of Service (QoS) based routing provides QoS guarantees to multimedia applications and an efficient utilization of network resources. Nevertheless, QoS routing is likely to be a costly process that does not scale as the number of nodes increases. Thus, the routing algorithm must be simple, and class-of-service routing is an alternative to per-flow routing for providing QoS guarantees. This paper proposes and analyzes the performance of a distance-vector QoS routing algorithm that takes into account three metrics: propagation delay, available bandwidth, and loss probability. Classes of service and metric combination are used to make the algorithm scalable and only as complex as a two-metric one. The processing requirements of the proposed algorithm and those of an optimal approach are compared, and the results show an improvement of up to 50%.
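The abstract leaves the combination implicit. A standard trick (our assumption about the general idea, not necessarily the paper's exact rule) is that end-to-end loss becomes additive under a log transform, so delay and loss can be fused into one additive cost while bandwidth remains a separate bottleneck (min) metric:

```latex
% Along a path P: delays add, loss probabilities compose
% multiplicatively, and bandwidth is the bottleneck link.
\begin{aligned}
D(P) &= \sum_{(i,j) \in P} d_{ij}, \qquad
1 - L(P) = \prod_{(i,j) \in P} (1 - p_{ij}), \qquad
B(P) = \min_{(i,j) \in P} b_{ij} \\
c_{ij} &= d_{ij} + \alpha \cdot \bigl(-\log(1 - p_{ij})\bigr)
\quad \text{(one additive metric fusing delay and loss)}
\end{aligned}
```

With delay and loss merged into $c_{ij}$, the algorithm only has to handle one additive metric plus one concave metric, which is why it can be as complex as a two-metric scheme.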

Lecture Notes in Computer Science, 2001
Despite its obvious suitability for distributed multimedia applications, multicasting has not yet found widespread application. Having analyzed the shortcomings of today's approaches, we devise in the GCAP project a new end-to-end transport architecture for multimedia multicasting that supports partial order and partial reliability. In this paper, we argue that, at the network layer, single-source multicasting (PIM-SSM) should be chosen. Consequently, our Monomedia Multicast protocol provides, along with reliability and QoS monitoring functionality, an ALM-based (application-level multicast) solution referred to as TBCP (Tree Building Control Protocol), to be used as a back channel for SSM, e.g., for retransmission requests. On top of the Monomedia protocol, our Multimedia Multicast protocol handles multimedia sessions composed of multiple monomedia connections: the FPTP (Fully Programmable Transport Protocol) allows applications to specify, through its API, the (global) synchronization and (individual) reliability requirements within a multimedia session. Our group management approach focuses on group integrity.
2008 1st IFIP Wireless Days, 2008
This paper presents the design of a plug-in for the Optimized Link State Routing (OLSR) protocol with the Expected Transmission Time (ETT) metric, along with experiments in an indoor testbed. The ETT metric is implemented as a plug-in, preserving portability and facilitating its deployment on operational networks. Our design identifies important implementation issues. Additionally, we run experiments in an indoor testbed to verify the performance of our ETT plug-in. Our results show that the ETT metric has the lowest packet loss rate and the lowest round-trip time among the analyzed metrics, because it reproduces link quality conditions and also takes physical transmission rates into account.
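For reference, the ETT metric follows the standard definition by Draves et al.: the expected transmission count scaled by the time one transmission takes at the link's rate, where $S$ is the probe packet size, $B$ the link bandwidth, and $d_f$, $d_r$ the forward and reverse delivery ratios:

```latex
\mathrm{ETX} = \frac{1}{d_f \cdot d_r}, \qquad
\mathrm{ETT} = \mathrm{ETX} \times \frac{S}{B}
```

This is why the metric captures both link quality (through the delivery ratios) and the physical transmission rate (through $B$), as the abstract notes.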

2012 Global Information Infrastructure and Networking Symposium (GIIS), 2012
Data centers rely on virtualization to provide different services over a shared infrastructure. The placement of the different services and tasks on the physical machines is crucial for the performance of the whole system. A misplaced service can overload some network links, lead to congestion, or even cause connection disruptions. On the other hand, virtual machine migration allows reallocating services and changing the traffic matrix, leading to more efficient use of bandwidth. In this paper, we propose a Virtual Machine Placement (VMP) algorithm to (re)allocate virtual machines on the data center servers, based on the current traffic matrix, CPU, and memory usage. Analyzing the formation of community patterns in the traffic using graph theory, we are able to find virtual machines that are correlated because they exchange large amounts of data. Those virtual machines are aggregated and allocated to servers as close as possible to each other, reducing traffic congestion. Our simulation results show that VMP improves the traffic distribution. In some specific cases, we were able to reduce core traffic by 80%, concentrating it at the edge of the network.
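The community step can be illustrated with off-the-shelf graph tooling. The sketch below is our illustration only (networkx and modularity-based grouping are assumptions; the paper does not prescribe this library or algorithm): build a weighted graph from the traffic matrix, then extract communities of chatty VMs.

```python
# Hedged illustration of the community step: group VMs that exchange
# a lot of traffic, so each group can be packed onto nearby servers.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def vm_communities(traffic):
    """traffic: dict (vm_a, vm_b) -> bytes exchanged."""
    g = nx.Graph()
    for (a, b), volume in traffic.items():
        if volume > 0:
            g.add_edge(a, b, weight=volume)
    return [set(c) for c in greedy_modularity_communities(g, weight="weight")]

# Two chatty pairs joined by a thin link: expect two communities,
# each of which would then be placed on adjacent servers.
demo = {("vm1", "vm2"): 900, ("vm3", "vm4"): 800, ("vm2", "vm3"): 10}
print(vm_communities(demo))
```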

2012 IEEE Global Communications Conference (GLOBECOM), 2012
The network infrastructure plays an important role for data center applications. Therefore, data center network architectures are designed with three main goals: bandwidth, latency, and reliability. This work focuses on the last goal and provides a comparative analysis of the topologies of prevalent data center architectures. Those architectures use either a network based only on switches or a hybrid scheme of servers and switches to perform packet forwarding. We analyze failures of the main networking elements (link, server, and switch) to evaluate the trade-offs of the different data center topologies. Considering only the network topology, our analysis provides a baseline study for the choice or design of a data center network with regard to reliability. Our results show that, as the number of failures increases, the considered hybrid topologies can substantially increase path length, whereas servers on the switch-only topology tend to disconnect from the main network more quickly.
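A baseline version of this kind of analysis is easy to reproduce: remove a growing fraction of randomly chosen elements and track how many servers remain attached to the largest connected component. The sketch below is our generic illustration with networkx, not the authors' tooling, and the toy topology is an arbitrary stand-in.

```python
# Generic illustration (not the paper's code): knock out a fraction of
# links at random and measure server reachability in what remains.
import random
import networkx as nx

def link_failure_trial(g, servers, fail_fraction, seed=0):
    rng = random.Random(seed)
    h = g.copy()
    doomed = rng.sample(list(h.edges),
                        int(fail_fraction * h.number_of_edges()))
    h.remove_edges_from(doomed)
    giant = max(nx.connected_components(h), key=len)
    return sum(1 for s in servers if s in giant) / len(servers)

# Toy example: a random 4-regular graph standing in for a topology,
# with 16 of its nodes treated as "servers".
g = nx.random_regular_graph(4, 32, seed=1)
servers = list(range(16))
print(link_failure_trial(g, servers, fail_fraction=0.2))
```

Repeating the trial over many seeds and failure fractions, and adding average path length in the surviving component, yields exactly the reliability-versus-failures curves this kind of study compares across topologies.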
Wireless Networks, 2004
Ad hoc routing protocols that use broadcast for route discovery may be inefficient if the path between a source-destination pair is frequently broken. We propose and evaluate a simple mechanism that allows fast route repair in on-demand ad hoc routing protocols. We apply our proposal to the Ad hoc On-demand Distance Vector (AODV) routing protocol. The proposed system is based on the Controlled Flooding (CF) framework, where alternative routes are established around the main path between source-destination pairs. With alternative routing, data packets are forwarded through a secondary path without requiring the source to re-flood the whole network, as may be the case in AODV. We are interested in one-level alternative routing. We show that our proposal reduces the connection disruption probability as well as the frequency of broadcasts.
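One-level alternative routing can be pictured as precomputing, for each node on the primary path, a neighbor that rejoins the path further downstream, so a broken next hop can be bypassed without re-flooding. The sketch below is purely our illustration of that picture, not the CF framework's actual mechanism.

```python
# Illustrative sketch: for each node on the primary path, find a
# neighbor that reconnects to the path past the node's next hop.
def one_level_alternatives(adj, path):
    """adj: dict node -> set of neighbors; path: primary route list."""
    position = {node: i for i, node in enumerate(path)}
    alts = {}
    for i, node in enumerate(path[:-1]):
        for nbr in adj.get(node, set()):
            j = position.get(nbr, -1)
            if j > i + 1:  # neighbor rejoins the path past the next hop
                alts[node] = nbr
                break
    return alts

path = ["S", "A", "B", "D"]
adj = {"S": {"A", "B"}, "A": {"S", "B"}, "B": {"A", "D"}, "D": {"B"}}
print(one_level_alternatives(adj, path))  # S can bypass A via B
```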