Papers by Francisco Brasileiro

J. Integr. Des. Process. Sci., 2014
The foundation of cloud computing is to leverage virtualised physical computing resources by offering them on demand as elastic infrastructure services. Processing, storage and networking resources can thus be perceived and traded comparably to existing utility services. A restriction of conventional infrastructure services is the missing flexibility in scenarios where coarse-grained provisioning from a single provider is no longer sufficient. Cloud federation and multiplexing are recent techniques to overcome the provider-binding and lock-in limitation. In this article, we give a brief overview of this research direction and then present a new variety, nested clouds, which helps overcome the coarse-grained virtualisation, billing and trading limitations. In particular, we contribute a design for the Nested Cloud virtual machine and the corresponding spot market Highly-Virtualising Cloud Resource Broker, and report on our experience with these systems. We argue that nested...
Future Generation Computer Systems, 2020

Proceedings of the 2013 conference on Computer supported cooperative work, 2013
Q&A sites currently enable large numbers of contributors to collectively build valuable knowledge... more Q&A sites currently enable large numbers of contributors to collectively build valuable knowledge bases. Naturally, these sites are the product of contributors acting in different ways-creating questions, answers or comments and voting in these-, contributing in diverse amounts, and creating content of varying quality. This paper advances present knowledge about Q&A sites using a multifaceted view of contributors that accounts for diversity of behavior, motivation and expertise to characterize their profiles in five sites. This characterization resulted in the definition of ten behavioral profiles that group users according to the quality and quantity of their contributions. Using these profiles, we find that the five sites have remarkably similar distributions of contributor profiles. We also conduct a longitudinal study of contributor profiles in one of the sites, identifying common profile transitions, and finding that although users change profiles with some frequency, the site composition is mostly stable over time.
12th IFIP/IEEE International Symposium on Integrated Network Management (IM 2011) and Workshops, 2011
ABSTRACT The cloud computing market has emerged as an alternative for the provisioning of resources on a pay-as-you-go basis. This flexibility potentially allows clients of cloud computing solutions to reduce the total cost of ownership of their Information Technology ...

ABSTRACT This paper approaches three basic aspects of the life of a computing professional, focusing particularly on the area of programming: i) its education; ii) its practice in the marketplace; and iii) the mechanisms used for updating technical knowledge. The results of two surveys carried out on the World Wide Web are presented. The first survey, named "Teaching computer programming in the Brazilian Universities", was answered by lecturers in the area. The second survey, named "Programming practices of programmers", was directed to professionals who graduated from courses in computer science and related areas. The results of the two surveys are discussed and correlated, and some strategies are presented that aim to contribute to the better education of the professional, to the adoption of efficient behaviour by the graduate in the face of the dynamics of the area, and to the incorporation of "the learning process" as an important element of their professional life.

2012 ACM/IEEE 13th International Conference on Grid Computing, 2012
ABSTRACT A computational grid is a large-scale federated infrastructure where users execute several types of applications with different submission rates. In the evaluation of solutions for grids, there has been little effort on using realistic workloads in experiments, and most of the time users' activities and applications are not well represented. In this work, we propose a user-based grid workload model that clusters users according to their behaviour in the system and their applications. The results show that, according to a new metric we propose, model quality increases when clustering is used and models are extracted per group of users with similar behaviour. Moreover, we compare our user-based modelling with a state-of-the-art system-based modelling approach. We show that by using our user-based model the system load can be easily changed by varying the number of users in the grid, creating different evaluation scenarios without affecting individual users' behaviour. On the other hand, varying the number of users in the system-based model does not affect the system load and changes the way individual users behave in the system, which can result in unrealistic users' activities.
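As an illustration of the clustering step, the sketch below groups users with k-means over hypothetical behavioural features (submission rate, mean job runtime, bag size) before a per-group workload model would be fitted; the paper's actual features and clustering algorithm may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user behavioural features: one row per user with
# [jobs submitted per day, mean job runtime (s), mean bag-of-tasks size].
rng = np.random.default_rng(42)
users = np.vstack([
    rng.normal([5, 300, 10], [1, 50, 2], size=(30, 3)),        # light users
    rng.normal([50, 3600, 200], [10, 600, 40], size=(30, 3)),  # heavy users
])

# Cluster users by behaviour; a workload model would then be fitted
# separately to each group instead of to the whole system at once.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)
for g in range(2):
    group = users[labels == g]
    print(f"group {g}: {len(group)} users, "
          f"mean submission rate = {group[:, 0].mean():.1f} jobs/day")
```

Under such a model, changing the simulated system load amounts to varying how many users are drawn from each group, which is what lets the evaluation scale load without distorting individual behaviour.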
Lecture Notes in Computer Science, 2009
2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (ccgrid 2012), 2012
This paper presents UnaCloud, an opportunistic Infrastructure as a Service (IaaS) implementation oriented to academic and research institutions, where the IaaS model is supported through the opportunistic use of idle computing resources available on the institution's campus, providing researchers with significant, low-cost computing capabilities. Also presented are the existing perspectives for current and future work on this project, related...
Computer Communications and Networks, 2011

2014 Brazilian Symposium on Computer Networks and Distributed Systems, 2014
ABSTRACT Human computation systems are distributed systems in which the processors are human beings, called workers. In such systems, task replication has been used as a way to obtain result redundancy and quality. The level of replication is usually defined before the tasks start executing. This approach, however, raises the problem of defining a suitable task replication level. If the level of replication is overestimated, an excessive number of workers is used and, therefore, the cost of executing all tasks increases. On the other hand, if the level of replication is underestimated, the desired level of quality may not be achieved. This work proposes an adaptive replication strategy that defines the level of replication for each task at execution time. The strategy is based on estimates of the degree of difficulty of tasks and the degree of credibility of workers. Results from simulations using data from two real human computation applications show that, compared to non-adaptive task replication, the proposed strategy substantially reduces the number of replicas without compromising the accuracy of the obtained answers.
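The abstract does not spell out the decision rule, so the following is a minimal sketch, under assumed formulas, of adaptive replication: keep assigning replicas of a task until the accumulated answer confidence, driven by worker credibility and task difficulty, crosses a target, instead of fixing the replica count up front.

```python
def needed_confidence(task_difficulty: float, target: float = 0.95) -> float:
    """Hypothetical rule: harder tasks demand a higher confidence bar."""
    return min(0.99, target + 0.04 * task_difficulty)

def replicate_adaptively(task_difficulty: float, workers, cap: int = 10):
    """Assign replicas one at a time until confidence suffices.

    `workers` yields (answer, credibility) pairs, where credibility in
    (0, 1) is an estimated probability that the worker answers correctly.
    The names and the aggregation formula are illustrative, not the
    paper's actual estimators.
    """
    p_all_wrong = 1.0
    answers = []
    for answer, credibility in workers:
        answers.append(answer)
        p_all_wrong *= (1.0 - credibility)   # chance that every replica failed
        confidence = 1.0 - p_all_wrong
        if confidence >= needed_confidence(task_difficulty) or len(answers) >= cap:
            break
    return answers

# An easy task answered by credible workers stops after two replicas:
print(len(replicate_adaptively(0.2, iter([("A", 0.9), ("A", 0.85), ("B", 0.7)]))))
```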

2nd IEEE Latin American Conference on Cloud Computing and Communications, 2013
ABSTRACT From the perspective of a typical cloud user, clouds are a source of unlimited computational resources that can be contracted as needed. As a consequence, users are relieved from the task of planning the capacity of their infrastructure. In practice, the problem of capacity planning has simply shifted from the cloud users to the cloud providers. The main challenge for cloud providers is, therefore, to reduce as much as possible the idleness of their infrastructure while appropriately supporting their variable workload. One approach used by public cloud providers to reduce infrastructure idleness is to offer resources with a degraded quality of service. In private clouds, however, the idea of exploiting idle capacity has been little explored. In this demo, we present a system that opportunistically exploits idle capacity in private cloud providers. The best-effort service provided by opportunistically exploited resources can be efficiently used by users that need to run bag-of-tasks jobs as fast as possible, a workload that is becoming increasingly common.
Proceedings of the 14th Brazilian Symposium on Multimedia and the Web, 2008
Computing platforms with a voluntary approach, such as those deployed for the SETI@home project, have proven that it is possible to harvest massive amounts of unused bandwidth and computing power available from computers connected to the Internet. In this work we present the TVGrid architecture to explore this idea in the context of a digital TV network. In the proposed architecture, ...
2012 ACM/IEEE 13th International Conference on Grid Computing, 2012
ABSTRACT OddCI is a new architecture for distributed computing that is, at the same time, flexible and highly scalable. Previous works have demonstrated the theoretical feasibility of implementing the proposed architecture on a digital television (DTV) network, but without taking into consideration any practical issues or details. This paper describes the implementation of a proof of concept for the architecture, called OddCI-Ginga, using a testbed based on DTV receivers compatible with the Brazilian DTV System. Performance tests using real broadcast transmission and the return channel demonstrate the feasibility of the model and its usefulness as a platform for efficient and scalable distributed computing.

ACM SIGOPS Operating Systems Review, 2008
The high churn and low bandwidth characteristics of peer-to-peer (P2P) backup systems make recovery a time-consuming activity that increases the system's outage. This is especially disturbing from the user's perspective, because during an outage the user is prevented from carrying out useful work. Nevertheless, at any given time, a user typically requires only a small number of her files to continue working. If the backup system is able to quickly recover these files, then the system's outage can be greatly reduced, even if a large portion of the lost data is still being recovered. In this paper, we evaluate the use of a file system working set model to support efficient recovery of a P2P backup system. By exploiting a simple LRU-like working set model, we have designed a recovery mechanism that substantially reduces outage and allows the user to return to work more quickly. The simulations we have performed show that even this simple model is able to reduce the outage by a...
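As an illustration of the idea, the sketch below, with hypothetical file names, orders recovery by recency of access so the user's working set comes back first; the paper's actual mechanism and eviction policy may differ.

```python
from collections import OrderedDict

class WorkingSetRecovery:
    """Toy LRU-like working set used to prioritize file recovery.

    Illustrative only: tracks file accesses, then recovers the most
    recently used files first so the user can resume work sooner.
    """
    def __init__(self):
        self._lru = OrderedDict()  # file -> None, oldest access first

    def record_access(self, path: str) -> None:
        self._lru.pop(path, None)
        self._lru[path] = None     # move to most-recently-used end

    def recovery_order(self, lost_files):
        lost = set(lost_files)
        recent_first = [f for f in reversed(self._lru) if f in lost]
        cold = [f for f in lost if f not in self._lru]
        return recent_first + cold  # working set first, the rest later

ws = WorkingSetRecovery()
for f in ["report.tex", "data.csv", "old_photo.jpg", "report.tex"]:
    ws.record_access(f)
print(ws.recovery_order(["old_photo.jpg", "report.tex", "archive.zip"]))
# ['report.tex', 'old_photo.jpg', 'archive.zip']
```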

Journal of Grid Computing, 2012
ABSTRACT Opportunistic peer-to-peer (P2P) Grids are distributed computing infrastructures that harvest the idle computing cycles of geographically distributed computing resources. In these Grids, the demand for resources is typically bursty. During bursts of resource demand, many Grid resources are required, but on other occasions they may remain idle for long periods of time. If the resources are kept powered on even when they are neither processing their owners' workload nor Grid jobs, their exploitation is not efficient in terms of energy consumption. One way to reduce the energy consumed in these idleness periods is to place the computers that form the Grid in a "sleeping" state which consumes less energy. In Grid computing, this strategy introduces a tradeoff between the benefit of energy saving and the associated costs in terms of increased job response time, also known as makespan, and reduced hard disk lifetime. To mitigate these costs, a timeout policy is usually introduced together with the sleeping state, which tries to avoid useless state transitions. In this work, we use simulations to analyze the potential of using sleeping states to save energy in each site of a P2P Grid. Our results show that sleeping states can save energy with low associated impact on jobs' makespan and hard disks' lifetime. Furthermore, the best sleeping strategy depends on the characteristics of each individual site; thus, each site should be configured to use the sleeping strategy that best fits its characteristics. Finally, differently from other kinds of Grid infrastructures, P2P Grids can place a machine in sleeping mode as soon as it becomes idle, i.e. it is not necessary to use an aggressive timeout policy. This increases the Grid's energy saving without significantly impacting the jobs' makespan and the disks' lifetime.
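To make the tradeoff concrete, here is a minimal sketch, with made-up power figures and a hypothetical timeout parameter, of how a sleep-after-timeout policy affects the energy spent across a machine's idle periods.

```python
def energy_joules(idle_gaps_s, timeout_s, p_idle_w=60.0, p_sleep_w=3.0,
                  transition_cost_j=500.0):
    """Energy spent across idle gaps under a sleep-after-timeout policy.

    idle_gaps_s: durations (s) a machine stays idle between jobs.
    timeout_s:   how long to wait idle before sleeping (0 = sleep at once).
    Power draws and transition cost are illustrative numbers, not
    measurements from the paper.
    """
    total = 0.0
    for gap in idle_gaps_s:
        if gap <= timeout_s:               # a job arrived before we slept
            total += gap * p_idle_w
        else:                              # idle wait, then sleep, then wake
            total += timeout_s * p_idle_w
            total += (gap - timeout_s) * p_sleep_w
            total += transition_cost_j     # sleep/wake cycle (also wears disks)
    return total

gaps = [30, 600, 7200, 120, 3600]          # seconds of idleness between jobs
for timeout in (0, 300, 1800):
    print(f"timeout={timeout:>4}s -> {energy_joules(gaps, timeout)/1000:.1f} kJ")
```

A shorter timeout saves more energy per gap but triggers more transitions, each of which adds wake-up latency to the makespan and a power cycle to the disks; this is the cost side of the tradeoff the paper simulates per site.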

Future Generation Computer Systems, 2013
ABSTRACT Bag-of-tasks (BoT) is an important class of scientific applications. These applications are typically composed of a very large number of tasks that can be executed in parallel in an independent way. Due to its cost associativity property, a public cloud computing environment is, theoretically, the ideal platform to execute BoT applications, since it could allow them to be executed as fast as possible, without implying any extra cost for the rapid turnaround achieved. Unfortunately, current public cloud computing providers impose strict limits on the amount of resources that a single user can simultaneously acquire, substantially increasing the response time of large BoT applications. In this paper we analyze the reasons why traditional providers need to impose such a limit. We show that increases in the imposed limit have a severe impact on the profit achieved by providers. This leads to the conclusion that new approaches to deploy cloud computing services are required to properly serve BoT applications.
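Cost associativity means that, at a fixed price per machine-hour, n machines for t hours cost the same as one machine for n·t hours. The toy calculation below, with an assumed hourly price, shows why a BoT user would always prefer maximum parallelism if providers imposed no acquisition limits.

```python
import math

def bot_cost_and_time(n_tasks, task_hours, n_machines, price_per_hour=0.10):
    """Cost and turnaround for a perfectly parallel bag of tasks.

    Assumes ideal conditions: no scheduling overhead and an
    illustrative flat price of $0.10 per machine-hour; real clouds
    bill in coarser units and differ across instance types.
    """
    hours = math.ceil(n_tasks / n_machines) * task_hours
    cost = n_machines * hours * price_per_hour
    return cost, hours

for machines in (1, 10, 1000):
    cost, hours = bot_cost_and_time(n_tasks=1000, task_hours=1, n_machines=machines)
    print(f"{machines:>5} machines: {hours:>5} h turnaround, ${cost:.2f}")
```

All three configurations cost $100.00, but turnaround drops from 1000 hours to 1 hour, which is exactly the incentive that collides with providers' per-user resource limits.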