Papers by geraldine araujo

Revista Brasileira de Medicina do Esporte, 2007
The aim of the present study was to investigate the physiological responses to acute exercise in obese Wistar rats treated with metformin. The animals received a subcutaneous injection of monosodium glutamate (4 mg/g body weight) to induce obesity. They were divided into four groups according to the treatment received: obese controls (OC); obese metformin (OM); obese exercised controls (OCE); and obese exercised metformin (OME). Serum glucose (mg/dL), triglycerides (g/100 g), total cholesterol (mg/dL), and hematocrit (%) were measured before and after a single session of acute exercise. Serum glucose and total cholesterol values were significantly reduced in the exercised control group (OCE: 68.4 ± 14.7 and 70.8 ± 18.3) compared with the sedentary control group (OC: 83.6 ± 12.8 and 91.3 ± 9.6). Metformin administration alone lowered the glucose concentration from 83.6 ± 12.8 (OC) to 70.8 ± 5.9 (OM). In contrast, the combination of metformin with exerci...
Proceedings Design, Automation and Test in Europe Conference and Exhibition
This paper presents a SystemC-based environment for the architecture specification of programmable systems. Using the new architecture description language ArchC, which can capture the processor description as well as the memory subsystem configuration, the environment supports system-level specification intended for platform-based design. As a case study, the memory architecture exploration of a simple image processing application is presented, and a more thorough evaluation of the environment is performed by running several real-world benchmarks.
Proceedings. 15th Symposium on Computer Architecture and High Performance Computing
This paper presents the cache configuration exploration of a programmable system, aiming to find the best match between the architecture and a given application. Programmable systems composed of a processor and memories can be rapidly simulated using ArchC, an Architecture Description Language (ADL) based on SystemC. Initially designed to model processor architectures, ArchC was extended to support a more detailed description of the memory subsystem, allowing design space exploration of the whole programmable system. As an example, an image processing application running on a SPARC-V8 processor-based architecture is shown, whose memory organization was adjusted to minimize cache misses.
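The exploration loop sketched in this abstract can be illustrated with a toy model (entirely hypothetical: a direct-mapped cache and a synthetic address trace standing in for the ArchC simulation): sweep candidate cache configurations over a recorded trace and keep the one with the fewest misses.

```python
# Hypothetical sketch of cache design-space exploration: simulate each
# candidate configuration on an address trace and pick the one that
# minimizes misses. A direct-mapped cache keeps the model tiny.

def count_misses(trace, num_lines, line_size):
    """Count misses of a direct-mapped cache on an address trace."""
    tags = [None] * num_lines          # one stored tag per cache line
    misses = 0
    for addr in trace:
        block = addr // line_size      # memory block number
        index = block % num_lines      # cache line the block maps to
        tag = block // num_lines
        if tags[index] != tag:         # miss: fill the line
            tags[index] = tag
            misses += 1
    return misses

def explore(trace, configs):
    """Return the (num_lines, line_size) pair with the fewest misses."""
    return min(configs, key=lambda c: count_misses(trace, *c))

if __name__ == "__main__":
    # Strided access pattern typical of image processing kernels.
    trace = [i * 4 for i in range(256)] * 2
    configs = [(16, 16), (32, 16), (16, 32), (64, 8)]
    best = explore(trace, configs)
    print(best, count_misses(trace, *best))
```

Note that two configurations of equal capacity can behave very differently: on this strided trace, 16 lines of 32 bytes beats 32 lines of 16 bytes because the longer line exploits spatial locality.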

Proceedings 12th International Symposium on System Synthesis, 1999
Decreasing program size has become an important goal in the design of embedded systems targeted at mass production. This problem has led to a number of efforts aimed at designing processors with shorter instruction formats (e.g. ARM Thumb and MIPS16) or that can execute compressed code (e.g. IBM CodePack PowerPC). Much of this work, though, has been directed toward RISC architectures. This paper proposes a solution to the problem of executing compressed code on embedded DSPs. The experimental results reveal an average compression ratio of 75% for typical DSP programs running on the TMS320C25 processor; this number includes the size of the decompression engine. Decompression is performed by a state machine that translates codewords into instruction sequences during program execution. The decompression engine is synthesized using the AMS standard cell library and a 0.6 µm 5 V technology. Gate-level simulation of the decompression engine reveals minimum operation frequencies of 150 MHz.
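The codeword scheme described here can be sketched in miniature (a hypothetical illustration — the bit widths and the one-word-per-codeword dictionary are simplifications, not the paper's actual encoding, which maps codewords to instruction sequences):

```python
# Hypothetical sketch of dictionary-based code compression: each
# distinct instruction word gets a short codeword, and a decompression
# table expands codewords back at execution time. Bit widths are
# illustrative, not those of the TMS320C25.

def build_dictionary(program):
    """Map each distinct instruction word to a codeword index."""
    dictionary = {}
    for insn in program:
        dictionary.setdefault(insn, len(dictionary))
    return dictionary

def compression_ratio(program, insn_bits=16, code_bits=8):
    """Compressed size over original size. The dictionary is charged to
    the compressed side, mirroring how the paper's 75% figure includes
    the decompression engine."""
    dictionary = build_dictionary(program)
    original = len(program) * insn_bits
    compressed = len(program) * code_bits + len(dictionary) * insn_bits
    return compressed / original

def decompress(codewords, dictionary):
    """Expand a codeword stream back into instruction words."""
    table = {idx: insn for insn, idx in dictionary.items()}
    return [table[c] for c in codewords]
```

On a toy 20-instruction program with only two distinct words the ratio comes out at 0.6; real programs sit higher because instruction streams are far less repetitive.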

Proceedings 20th IEEE International Parallel & Distributed Processing Symposium, 2006
Fast reconfiguration is a mandatory feature of reconfigurable computing architectures. Research in this area has increasingly focused on new reconfiguration techniques that sustain architecture performance and allow configuration and computation tasks to execute simultaneously in the same stage. In this context, this paper presents a new dynamic reconfiguration technique, based on a configuration cache, that tackles this challenge by configuring and executing operations on functional units during the execution stage. The approach is implemented in a pipelined reconfigurable multiple-issue architecture called 2D-VLIW. Our dynamic reconfiguration technique takes advantage of the 2D-VLIW pipelined execution by starting reconfiguration concurrently with activities such as reading operand registers and executing operations.
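The payoff of a configuration cache can be made concrete with a toy cycle model (hypothetical throughout: the LRU policy, the single-cycle execute, and the miss penalty are illustrative choices, not parameters of the 2D-VLIW design): a cache hit means reconfiguration overlaps with pipeline activities such as operand-register reads, so only misses stall execution.

```python
# Hypothetical sketch: count cycles for a stream of operation
# configurations when reconfiguration on a configuration-cache hit is
# fully hidden by the pipeline, and only a miss pays a fetch penalty.

def execution_cycles(ops, cache_size, miss_penalty):
    """Cycle count for a sequence of operation configurations."""
    cache = []                        # LRU order: most recently used last
    total = 0
    for op in ops:
        total += 1                    # one execute cycle per operation
        if op in cache:
            cache.remove(op)          # hit: reconfiguration is hidden
        else:
            total += miss_penalty     # miss: fetch the configuration
            if len(cache) == cache_size:
                cache.pop(0)          # evict the least recently used
        cache.append(op)
    return total
```

With `["add", "mul"] * 2` and a two-entry cache, only the first occurrence of each configuration pays the penalty; shrink the cache to one entry and every operation misses.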
"I Survived HIV and Now I'm Going to Die of This?": Perspectives from LGBTQ+ Elders of Color on Living through Two Pandemics
Journal of Black Sexuality and Relationships, 2022

Microelectronics Journal, 2003
Efficient address register allocation has been shown to be a central problem in code generation for processors with restricted addressing modes. This paper extends previous work on Global Array Reference Allocation (GARA), the problem of allocating address registers to array references in loops. It describes two heuristics for the problem, presenting experimental data to support them. In addition, it proposes an approach to solving GARA optimally which, albeit computationally exponential, is useful for measuring the efficiency of other methods. Experimental results, using the MediaBench benchmark and profiling information, reveal that the proposed heuristics solve the majority of the benchmark loops near-optimally in polynomial time. A substantial execution-time speedup is reported for the benchmark programs when compiled with the optimized version of GCC rather than the original one. © 2003 Published by Elsevier Science Ltd.
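The cost model underlying this problem can be sketched as follows (a hypothetical illustration of what GARA heuristics minimize, not the heuristics themselves; the unit auto-increment range is an assumption): successive array references served by the same address register are free when their offset difference fits the processor's auto-increment/decrement range, and cost an explicit address-arithmetic instruction otherwise.

```python
# Hypothetical sketch of the address-register update cost model on a
# processor with restricted addressing modes: a reference is free when
# the previous reference leaves the register within auto-inc/dec reach.

def update_cost(offsets, auto_range=(-1, 1)):
    """Explicit update instructions needed for one address register
    serving the given array-reference offsets in loop order."""
    lo, hi = auto_range
    cost = 0
    for a, b in zip(offsets, offsets[1:]):
        if not lo <= b - a <= hi:
            cost += 1      # outside auto-increment range: explicit ADD
    return cost
```

Sequential accesses a[0], a[1], a[2], a[3] cost nothing, while the stride-2 pattern a[0], a[2], a[4] needs an explicit update per step on a machine that can only post-increment by one.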

Surface & Coatings Technology, 2009
OBJECTIVES: To estimate the resource use and direct medical costs associated with inpatient treatment of children diagnosed with pneumococcal disease in a Brazilian public hospital, and to compare the average cost with the mean payments reimbursed by the Brazilian Public Health Care System. METHODS: A retrospective cohort of 133 children under 5 years of age with pneumococcal disease was obtained from a Brazilian public hospital. Resource use data derived from medical record review. For the pneumococcal pneumonia patients, we calculated the mean length of stay, and costs per hospitalization were estimated using a standardized mean payment per day provided by the hospital (health provider perspective). We used the Tabwin software to tabulate data from the Hospitalization Information System on reimbursement for pneumococcal pneumonia-related hospitalizations of children under 5 years of age in 2007. Secondary outcomes related to meningitis, acute otitis media, and sepsis were analyzed. RESULTS: The mean length of stay (LOS) in the pneumonia cohort was 13.5 days. Of the 75 patients with pneumonia included in this study, 11 (14.7%) had been admitted to the ICU (mean LOS in ICU: 5.8 days; overall LOS of those patients: 19.5 days) and 64 (85.3%) had remained only in non-ICU units (mean LOS: 12.4 days). The average hospitalization cost per child under 5 years of age was BRL 6,612 (US$4,723; 2005 purchasing power parity, 1 USD = 1.4 BRL). The average reimbursement by the government to the hospital per pneumococcal pneumonia-related hospitalization was BRL 694 (US$495), approximately 10% of the hospital expenditure to treat the pneumonia patients. CONCLUSIONS: We estimated a mean hospitalization cost for pneumococcal pneumonia in public hospitals 10 times higher than the official reimbursement. These results suggest that prevention strategies such as vaccination may play an important role in reducing the burden of pneumonia on public hospitals and the health care system.
Value in Health, 2009
The countries that reported the best HrQoL in Latin America were Uruguay and Paraguay. Brazil, in contrast, reported the worst HrQoL, as it had the highest proportion of people reporting a poor or very poor health status. When comparing HrQoL between countries using logistic regression, significant differences were found across the six nations, and the results persisted after adjusting for the socio-demographic variables mentioned. CONCLUSIONS: Our study supports the usefulness and importance of measuring HrQoL and shows that real differences in self-perceived health exist between Latin American countries. Future research should consider cultural aspects such as language and ethnicity, or macro indicators such as unemployment rates or gross domestic product per capita, in a multilevel analysis for a further understanding of HrQoL in Latin America.
Journal of VLSI Signal …, 2007
This paper presents a comprehensive analysis of the design of custom instructions on a reconfigurable hardware platform dedicated to accelerating arithmetic operations in the binary field $\mathbb{F}_{2^{163}}$, using a Gaussian normal basis representation. ...

The complexity of modern hardware design has created the need for higher levels of abstraction, where system modeling is used to integrate modules into complex System-on-Chip (SoC) platforms. SystemC and its TLM (Transaction-Level Modeling) extensions have been used for this purpose mainly because of their fast prototyping and simulation features, which allow for early design space exploration. This paper proposes an approach to explore and interact with SystemC models by means of an introspection technique known as computational reflection. We use reflection to implement a white-box introspection mechanism called ReflexBox. We show that ReflexBox is a fast, non-intrusive technique that can dynamically gather and inject stimuli into any SystemC module without the need to use a proprietary SystemC implementation, change the SystemC library, or instrument or even inspect the module source code. Our approach can support many different verification tasks such as platform debugging, performance evaluation, and communication analysis. To show ReflexBox's effectiveness, we used it in three platform case studies to address tasks such as register inspection, performance analysis, and signal replaying for testbench reuse. In all cases we assumed no source code availability and measured the impact on overall platform performance.
Revista Brasileira de …, 2007
The purpose of the present study was to investigate the physiological responses to intense exercise in obese Wistar rats treated with metformin. To induce obesity, all animals were given monosodium glutamate (4 mg/g of body weight) via subcutaneous injection. ...
By a bi-regular cage of girth g we mean a graph with prescribed degrees r and m and the least possible number of vertices, denoted by f({r, m}; g). We provide new upper and lower bounds on f({r, m}; g) for even girth g ≥ 6. Moreover, we prove that f({r, k(r − 1) + 1}; 6) = 2k(r − 1)² + 2r, where k ≥ 2 is any integer and r − 1 is a prime power. This result supports the conjecture f({r, m}; 6) = 2(rm − m + 1) for any r < m formulated by Yuansheng and Liang (The minimum number of vertices with girth 6 and degree set D = {r, m}, Discrete Mathematics 269 (2003), 249–258).
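As a quick consistency check (not stated in the abstract, but a one-line algebraic verification), substituting m = k(r − 1) + 1 into the conjectured formula recovers exactly the proven value:

```latex
\begin{aligned}
2(rm - m + 1) &= 2\bigl(m(r-1) + 1\bigr) \\
              &= 2\bigl((k(r-1)+1)(r-1) + 1\bigr) \\
              &= 2k(r-1)^2 + 2(r-1) + 2 \\
              &= 2k(r-1)^2 + 2r .
\end{aligned}
```

So the theorem is precisely the conjecture restricted to the degree sets m = k(r − 1) + 1 with r − 1 a prime power.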
The first known families of cages arose from the incidence graphs of generalized polygons of order q, with q a prime power. In particular, (q + 1, 6)-cages have been obtained from the projective planes of order q. Moreover, infinite families of small regular graphs of girth 5 have been constructed by performing algebraic operations on F_q. In this paper, we introduce combinatorial operations to construct new infinite families of small regular graphs of girth 7 from the (q + 1, 8)-cages arising from the generalized quadrangles of order q, q a prime power.
PAPIA: Revista Brasileira de Estudos Crioulos e …, 2008
The aim of this article is to describe some aspects of linguistic variation in Sãotomense, a Portuguese-based creole spoken in São Tomé, drawing on Ferraz (1979), Graham & Graham (2004), and Araujo (2006). Ferraz (1979) described the phonological system of Sãotomense; however, his description ignores variation at any level. In this paper, we show evidence that variation at the syllabic level is widespread. The growing influence of European Portuguese through literacy is only one of many issues as far as variation is concerned. We also present a discussion of phoneme borrowing, especially in coda position.


Program performance can be dynamically improved by optimizing its frequent execution traces. Once traces are collected, they can be analyzed and optimized based on the dynamic information derived from the program's previous runs. The ability to record traces is thus central to any dynamic binary translation system. Recording traces, as well as loading them for use in different runs, normally requires code replication in order to represent the trace. This paper presents a novel technique that records execution traces using an automaton called TEA (Trace Execution Automata). Contrary to other approaches, TEA stores traces implicitly, without the need to replicate execution code. TEA can also be used to simulate trace execution in a separate environment, to store profile information about the generated traces, and to instrument optimized versions of the traces. In our experiments, we show that TEA decreases the memory needed to represent the traces (nearly 80% savings).
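The implicit-storage idea can be sketched with a toy automaton (a hypothetical illustration in the spirit of TEA, not its actual data structure): states are basic-block addresses and trace recording only touches edge counters, so storage grows with the number of distinct control-flow edges rather than with trace length, and no block's code is ever replicated.

```python
# Hypothetical sketch: record an execution trace as an automaton over
# basic-block addresses. Recording never copies code; it only counts
# control-flow edges, so a million loop iterations store two edges.

class TraceAutomaton:
    def __init__(self):
        self.transitions = {}   # (from_block, to_block) -> execution count
        self.last = None

    def record(self, block_addr):
        """Record entry into a basic block during execution."""
        if self.last is not None:
            edge = (self.last, block_addr)
            self.transitions[edge] = self.transitions.get(edge, 0) + 1
        self.last = block_addr

    def hot_edges(self, threshold):
        """Edges taken at least `threshold` times: candidates for later
        optimization or instrumentation."""
        return {e for e, n in self.transitions.items() if n >= threshold}
```

Replaying a tight two-block loop through `record()` leaves only two stored edges, however many iterations ran, which is the sense in which the representation is implicit.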

Microcode Compression Using Structured-Constrained Clustering
Modern microprocessors have used microcode as a way to implement legacy (rarely used) instructions, add new ISA features, and enable patches to an existing design. As more features are added to processors (e.g. protection and virtualization), the area and power costs associated with the microcode memory have increased significantly. A recent Intel internal design targeted at low power and a small footprint estimated the cost of the microcode ROM to approach 20% of the total die area (and associated power consumption). Moreover, with the adoption of multicore architectures, the impact of microcode memory size on chip area has become relevant, forcing industry to revisit the microcode size problem. One solution is to store the microcode in compressed form and decompress it at runtime. This paper describes techniques for microcode compression that achieve significant area and power savings, and proposes a streamlined architecture that enables high throughput within the constraints of a high-performance CPU. The paper presents results for microcode compression on several commercial CPU designs, demonstrating compression ratios ranging from 50% to 62%. In addition, it proposes techniques that enable the reuse of (pre-validated) hardware building blocks, which can considerably reduce the cost and design time of the microcode decompression engine in real-world designs.
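A two-level ROM sketch makes the size accounting concrete (hypothetical throughout: the fixed column split below merely stands in for the paper's structure-constrained clustering, whose whole job is to choose that split well):

```python
# Hypothetical sketch of two-level microcode ROM compression: the wide
# microword is split into column clusters, each cluster's distinct bit
# patterns are stored once in a small dictionary ROM, and the main ROM
# keeps only per-cluster indices into those dictionaries.

from math import ceil, log2

def rom_sizes(rows, clusters):
    """rows: equal-length microword bit strings; clusters: (start, end)
    column ranges. Returns (compressed_bits, original_bits)."""
    original = len(rows) * len(rows[0])
    compressed = 0
    for start, end in clusters:
        patterns = {row[start:end] for row in rows}
        index_bits = max(1, ceil(log2(len(patterns))))
        compressed += len(rows) * index_bits           # index ROM
        compressed += len(patterns) * (end - start)    # dictionary ROM
    return compressed, original
```

On a toy ROM whose two halves each take only two distinct patterns, the compressed size is half the original, i.e. a 50% compression ratio; how close a real design gets depends on how well the clustering groups correlated columns.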
Shrinking microprocessor feature size will increase soft-error rates to unacceptable levels in the near future. While reliable systems typically employ hardware techniques to address soft errors, software techniques can provide a less expensive and more flexible alternative. This paper presents a control-flow error classification and proposes new software-based control-flow error detection techniques. The new techniques improve on previous ones in that they detect errors in all the branch-error categories. We also compare the performance of our new techniques with that of the previous ones using our dynamic binary translator.
Shrinking microprocessor feature size and growing transistor density may increase soft-error rates to unacceptable levels in the near future. While reliable systems typically employ hardware techniques to address soft errors, software-based techniques can provide a less expensive and more flexible alternative. This paper presents a control-flow error classification and proposes two new software-based comprehensive control-flow error detection techniques. The new techniques improve on previous ones in that they detect errors in all the branch-error categories. We implemented the techniques in our dynamic binary translator so that they can be applied to existing x86 binaries transparently. Comparing our new techniques with previous ones, we show that our methods cover more errors while incurring similar performance overhead.
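One classic family of software control-flow error detection, which these abstracts build on, is signature checking in the style of CFCSS; a minimal sketch follows (hypothetical: the four-block program, its signatures, and the single-predecessor restriction are illustrative — real schemes handle fan-in blocks with adjusting signatures, and the papers' own techniques differ). Each basic block gets a compile-time signature, a runtime signature register is XOR-updated on every transition, and a mismatch reveals that control arrived over an illegal edge, e.g. a branch target corrupted by a soft error.

```python
# Hypothetical sketch of signature-based control-flow checking: the
# difference DIFF[b] between a block's signature and its legal
# predecessor's is computed statically; at runtime, arriving over any
# other edge leaves the signature register g disagreeing with SIG[b].

SIG = {"A": 0b0001, "B": 0b0010, "C": 0b0100, "D": 0b1000}
PRED = {"B": "A", "C": "B", "D": "C"}      # single legal predecessor each
DIFF = {b: SIG[p] ^ SIG[b] for b, p in PRED.items()}

def run_checked(path):
    """Walk `path` through the blocks; return True iff no control-flow
    error is detected along the way."""
    g = SIG[path[0]]                       # runtime signature register
    for cur in path[1:]:
        if cur not in DIFF:
            return False                   # jumping back to the entry is illegal
        g ^= DIFF[cur]                     # update inserted at block entry
        if g != SIG[cur]:
            return False                   # illegal edge: signatures disagree
    return True
```

The legal path A→B→C→D passes, while a soft error skipping from A straight to C, or from B to D, is caught at the first mismatching signature check.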