Papers by Abhishek Ranjan

As FPGA densities increase, partitioning-based FPGA placement approaches are becoming increasingly important, as they can provide high-quality and computationally scalable solutions. However, modern FPGA architectures incorporate heterogeneous resources, which places additional requirements on the partitioning algorithms: they must not only minimize the cut and balance the partitions, but also ensure that no resource in any partition is oversubscribed. In this paper, we present a number of multilevel multi-resource partitioning algorithms that are guaranteed to produce solutions that balance the utilization of the different resources across the partitions. We evaluate our algorithms on twelve industrial benchmarks ranging in size from 5,236 to 140,118 vertices and show that they achieve minimal degradation in the min-cut while balancing the various resources. Comparing the quality of the solutions produced by some of our algorithms against those produced by hMETIS, we show that our algorithms are capable of balancing the different resources while incurring only a 3.3%-5.7% higher cut.
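The multi-resource balance constraint this abstract describes can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the resource names, data layout, and tolerance are all assumptions. Each vertex consumes a vector of resources (e.g. LUTs, RAMs, multipliers), and a move between partitions is legal only if no resource ends up oversubscribed on either side.

```python
# Hypothetical sketch of a multi-resource balance check (illustrative names).
RESOURCES = ("lut", "ram", "mult")

def partition_loads(vertices, assignment, num_parts=2):
    """Sum per-resource utilization for each partition."""
    loads = [{r: 0 for r in RESOURCES} for _ in range(num_parts)]
    for v, part in zip(vertices, assignment):
        for r in RESOURCES:
            loads[part][r] += v[r]
    return loads

def move_is_balanced(vertices, assignment, v_idx, dest, tol=0.1):
    """True iff moving vertex v_idx to partition `dest` keeps every
    resource within (1 + tol) of its ideal per-partition share."""
    trial = list(assignment)
    trial[v_idx] = dest
    loads = partition_loads(vertices, trial)
    totals = {r: sum(v[r] for v in vertices) for r in RESOURCES}
    cap = {r: (1 + tol) * totals[r] / len(loads) for r in RESOURCES}
    return all(loads[p][r] <= cap[r]
               for p in range(len(loads)) for r in RESOURCES)
```

A refinement pass in the style of Fiduccia-Mattheyses would call such a check before accepting any cut-reducing move, which is how cut quality and per-resource balance can be pursued simultaneously.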

Multi-million gate FPGA physical design challenges
The recent past has seen a tremendous increase in the size of design circuits that can be implemented in a single FPGA. These large design sizes significantly impact cycle time due to design automation software runtimes and an increased number of performance-based iterations. New FPGA physical design approaches need to be utilized to alleviate some of these problems. Hierarchical divide-and-conquer approaches, early estimation tools for design exploration, and physical optimizations are some of the key methodologies that have to be introduced in FPGA physical design tools. This paper investigates the loss/benefit in quality of results due to hierarchical approaches and compares and contrasts some of the design automation problem formulations and solutions needed for FPGAs versus known standard-cell ASIC approaches.
We take a fresh look at existing hierarchical partitioning-based floorplan design methods and their relevance in providing faster alternatives to conventional approaches. We modify an existing partitioning-based floorplanner to handle congestion and timing. We also explore the applicability of the traditional sizing theorem for combining two modules based on their sizes and interconnecting wirelength. The results show that our floorplanning approach can produce floorplans a hundred times faster while achieving better quality (on average 20% better wirelength, as well as better congestion and timing optimization) than a pure simulated-annealing-based floorplanner.

In many applications, such as high-level synthesis (HLS), logic synthesis, and possibly engineering change order (ECO), we would like fast and accurate estimates of different performance measures of the chip, namely area, delay, and power consumption. These measures cannot be estimated with high accuracy unless a fairly detailed layout of the chip, including the floorplan and routing, is available, and producing these is very costly in terms of running time. As we have entered the deep sub-micron era, we have to deal with designs that contain a million gates and up. Not only must we consider the area occupied by the modules, but we also have to consider the wiring congestion. In this paper we propose a cost function that is, in addition to other parameters, a function of the wiring area. We also propose a method to avoid running the floorplanning process after every change in the design, by considering the possible changes in advance and generating a floorplan that is tolerant of these modifications, i.e., changes in the netlist do not dramatically change the performance measures of the chip. Experiments are done in the high-level synthesis domain, but the method can be applied to logic synthesis and ECO as well. We gain speedups of 184% on average over the traditional estimation methods used in HLS.
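A cost function that folds wiring area in with module area, as this abstract describes, might be sketched as follows. This is a minimal sketch under stated assumptions: the wiring-area model (half-perimeter wirelength times a wire pitch), the weighting scheme, and all names are illustrative, not the paper's formulation.

```python
# Illustrative sketch: floorplan cost = module area + estimated wiring area.
def hpwl(net, positions):
    """Half-perimeter wirelength of one net over module center positions."""
    xs = [positions[m][0] for m in net]
    ys = [positions[m][1] for m in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def floorplan_cost(module_areas, nets, positions, wire_pitch=1.0, alpha=0.5):
    """Weighted sum of total module area and estimated wiring area,
    where wiring area is approximated as total HPWL times wire pitch."""
    cell_area = sum(module_areas.values())
    wire_area = wire_pitch * sum(hpwl(n, positions) for n in nets)
    return alpha * cell_area + (1 - alpha) * wire_area
```

Because the wiring term responds to module placement, minimizing such a cost penalizes floorplans that are compact in silicon area but congested in routing.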
Floorplanner 1000 Times Faster: A Good Predictor and Constructor
Multi-Million Gate FPGA Physical Design Challenges
The recent past has seen a tremendous increase in the size of design circuits that can be implemented in a single FPGA. The size and complexity of modern FPGAs have far outpaced the innovations in FPGA physical design. The problems faced by FPGA designers are similar in nature to those that preoccupy ASIC designers, namely interconnect delays and design management. However, this paper will show that a simple retargeting of ASIC physical design methodologies and algorithms to the FPGA domain will not suffice. We will show that several well-researched problems in the ASIC world need new problem formulations and algorithms research to be useful for today's FPGAs. Partitioning, floorplanning, placement, and delay estimation schemes are only some of the topics that need a complete overhaul. We will give problem formulations, motivated by experimental results, for some of these topics as applicable in the FPGA domain.

IEEE Transactions on Very Large Scale Integration Systems, 2001
Floorplanning is a crucial phase in VLSI physical design. The subsequent placement and routing of the cells/modules are coupled very closely with the quality of the floorplan. A widely used technique for floorplanning is simulated annealing. It gives very good floorplanning results but has a major limitation in terms of run time: for circuit sizes exceeding tens of modules, simulated annealing is not practical. Floorplanning forms the core of many synthesis applications. Designers need faster prediction of system metrics to quickly evaluate the effects of design changes, and early prediction of metrics is imperative for estimating timing and routability. In this work we propose a constructive technique for predicting floorplan metrics. We show how to modify existing top-down partitioning-based floorplanning to obtain fast and accurate floorplan prediction. The prediction gets better as the number of modules and the flexibility in their shapes increase. We also explore the applicability of the traditional sizing theorem when combining two modules based on their sizes and interconnecting wirelength. Experimental results show that our prediction algorithm can predict the area/length cost function normally within 5-10% of the results obtained by simulated annealing and is, on average, 1000 times faster.
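The classic shape-function combination behind the sizing theorem the abstract mentions can be sketched briefly. This is a textbook-style illustration, not the paper's code: two flexible modules, each with a list of feasible (width, height) implementations, are composed side by side and dominated shapes are pruned.

```python
# Illustrative sketch of combining two modules' shape functions horizontally.
def combine_horizontal(shapes_a, shapes_b):
    """All side-by-side compositions of two (width, height) shape lists,
    pruned so that no retained shape is dominated by another that is
    both no wider and no taller."""
    merged = [(wa + wb, max(ha, hb))
              for wa, ha in shapes_a
              for wb, hb in shapes_b]
    merged.sort()  # by width, then height
    pruned, best_h = [], float("inf")
    for w, h in merged:
        if h < best_h:  # strictly shorter than every narrower composition
            pruned.append((w, h))
            best_h = h
    return pruned
```

Applied bottom-up over a partitioning tree, this kind of combination yields the floorplan area estimate without any annealing, which is where the large constructive speedup comes from.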
Layout aware retiming
A. Ranjan, A. Srivastava, V. Karnam, M. Sarrafzadeh. Monterey Design Systems, Sunnyvale, CA.

Atypical Ductal Hyperplasia in Stereotactic Breast Biopsies: Enhanced Accuracy of Diagnosis with the Mammotome
Breast Journal, 2001
There is little literature assessing the incidence of subsequent carcinoma in patients diagnosed with atypical ductal hyperplasia (ADH) by mammotome. We reviewed 216 stereotactic mammotome biopsies (SMBs) and compared the results to the 121 automated tru-cut biopsies (ATC) performed at our breast care center from June 1994 to July 1998. The median age in the mammotome series was 57 years, compared to 56 years in the ATC group. An increase in biopsies for microcalcifications (49% versus 41%) was noted in the SMB series. This was accompanied by an increase in the number of cases with a diagnosis of pure ductal carcinoma in situ (DCIS) (10% versus 4%). Compared to the tru-cut, in which 38% (3 of 8) of the cases diagnosed as atypical hyperplasia (AH) showed DCIS and/or invasive carcinoma on open biopsy, none of the cases diagnosed as AH on mammotome revealed carcinoma on open biopsy. ADH is more accurately diagnosed with SMB than by the ATC method and may not be an indication for subsequent open biopsy.