1999, Proceedings of the 36th annual ACM/IEEE Design Automation Conference
Quality benchmarks are crucial in electronic design automation (EDA): they provide the means to evaluate CAD techniques. Current benchmark suites are often limited, featuring small and esoteric modules that do not adequately represent the diversity of designs in industry. This paper introduces the concept of vertical benchmarks, exemplified by the CMU-DSP, which encompasses multiple design representations at different abstraction levels and includes a complete design flow. Vertical benchmarks address the shortcomings of existing benchmarks by allowing a comprehensive evaluation of CAD techniques, considering system-level performance and facilitating better analysis of downstream effects.
WSEAS Transactions on Circuits and Systems archive, 2008
This paper focuses on benchmarking, the main experimental approach to the analysis, characterization, and evaluation of design methods and EDA tools. We discuss the importance and difficulties of benchmarking, as well as recent research efforts related to it. To resolve several serious problems concerning the quality of benchmarking and the use of practical industrial benchmarks, we propose a benchmarking methodology based on statistical experimental design and develop corresponding digital-circuit benchmark generators. These generators enable research, evaluation, and fine-tuning of circuit synthesis methods and EDA tools largely independently of actual industrial benchmarks, and far more effectively than relying on a handful of industrial benchmarks alone. Using the results of extensive experiments involving large sets of diverse benchmarks generated with our FSM benchmark generator, we discuss several crucial problems of benchmarking and demonstrate how to resol...
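To make the idea of a parameterized FSM benchmark generator concrete, here is a minimal, hypothetical sketch (my own illustration, not the authors' generator): it emits a fully specified random machine in the KISS2 format commonly used for FSM benchmarks, with the state, input, and output counts exposed as tunable parameters.

```python
# Illustrative sketch of a parameterized FSM benchmark generator.
# All parameter names and defaults are assumptions, not taken from the paper.
import random

def generate_fsm_kiss2(n_states=8, n_inputs=3, n_outputs=2, seed=0):
    """Emit a fully specified random FSM in KISS2 format."""
    rng = random.Random(seed)
    states = [f"s{i}" for i in range(n_states)]
    lines = [f".i {n_inputs}", f".o {n_outputs}",
             f".p {n_states * (2 ** n_inputs)}",
             f".s {n_states}", f".r {states[0]}"]
    for s in states:
        for code in range(2 ** n_inputs):
            inp = format(code, f"0{n_inputs}b")       # input vector
            nxt = rng.choice(states)                   # random next state
            out = "".join(rng.choice("01") for _ in range(n_outputs))
            lines.append(f"{inp} {s} {nxt} {out}")
    lines.append(".e")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_fsm_kiss2())
```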
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2000
For the development and evaluation of CAD tools for partitioning, floorplanning, placement, and routing of digital circuits, a large number of benchmark circuits with suitable characteristic parameters is required. Given the lack of industrial benchmark circuits available for tool evaluation, one option is to generate synthetic circuits. In this paper, we extend a graph-based benchmark generation method to include functional information. The use of a user-specified component library, together with the restriction that no combinational loops are introduced, broadens the scope to timing-driven and logic-optimizer applications. Experiments show that the resemblance between the characteristic Rent curve and the net-degree distribution of real versus synthetic benchmark circuits is hardly affected by the proposed extensions and that the resulting circuits are more realistic than before. An indirect validation verifies that existing partitioning programs behave comparably on real and synthetic circuits. The problems of accounting for timing-aware characteristics in synthetic benchmarks are addressed in detail, and suggestions for extensions are included.
Proceedings of the 1999 international symposium on Physical design - ISPD '99, 1999
For the development and evaluation of CAD tools for partitioning, floorplanning, placement, and routing of digital circuits, a large number of benchmark circuits with suitable characteristic parameters is required. Given the lack of industrial benchmark circuits for use in tool evaluation, one option is to generate such circuits. In this paper, we extend a graph-based benchmark generation method to include functional information. The use of a user-specified component library, together with the restriction that no combinational loops are introduced, broadens the scope to timing-driven and logic-optimizer applications. Experiments show that the resemblance between the characteristic Rent curve and the net-degree distribution of real versus synthetic benchmark circuits is hardly affected by the proposed extensions and that the resulting circuits are more realistic than before. However, the synthetic benchmark circuits are still very redundant compared to existing sets of real benchmarks. It is shown that a correlation exists between the degree of redundancy and key circuit parameters.
2000
The shared-memory, multi-threaded PARSEC benchmark suite is intended to represent emerging software workloads for future systems. It is specifically intended for use by both industry and academia as a tool for testing new Chip Multiprocessor (CMP) designs. We analyze the suite in detail and identify bottlenecks using hardware performance counters. We take a systems-level approach, with an ...
Many groups in academia and industry are now extending their tools to handle super-sized designs. In addition to scalable algorithms, this requires software infrastructure and development policies to ensure and verify the robustness and scalability of implementations. Specific issues include programming for and executing programs in 32-bit and 64-bit memory models, modular tool infrastructure, diagnostic visualization of super-sized designs, and the use of simulation clusters for thorough evaluation of CAD tools.
Journal of Signal Processing Systems, 2008
We present a performance analysis framework that efficiently generates and analyzes hardware architectures for computationally intensive signal processing applications. Our framework synthesizes designs from a high level of abstraction into well-constructed and recognizable hardware structures that perform well in terms of area, throughput, and power dissipation. Cost functions provided by our framework allow the user to reduce the design space to a set of efficient hardware architectures that meet performance constraints. We utilize our framework to estimate hardware performance using a set of pre-synthesized mathematical cores, which expedites the synthesis process by approximately 14-fold. This reduces the architectural generation and hardware synthesis process from days to several hours for complex designs. Our work aims at performing hardware optimizations at the architectural and arithmetic levels, relieving the user from manually describing the designs at the RTL and iteratively varying the hardware architectures. We illustrate the efficiency and accuracy of our framework by generating finite impulse response (FIR) filter structures used in several signal processing applications such as adaptive equalizers and quadrature mirror filters. The results show that hardware filter structures generated by our framework can achieve, on average, a 3-fold increase in power efficiency when compared to manually constructed designs.

Ramsey Hourani received his B.S. degree in Electrical and Computer Engineering from Iowa State University, Ames, in 1998 and his M.S. degree in Electrical and Computer Engineering from North Carolina State University, Raleigh, in 2001. He is currently working toward the Ph.D. degree in electrical engineering at the same university. He worked for two years with the Cellular Subscriber Sector of Motorola as a hardware designer for CDMA wireless systems. His research interests include developing efficient hardware architectures that map digital signal and image processing applications onto ASICs and FPGAs.

Ravi Jenkal received his B.E. degree in Electronics and Communications from S. J. College of Engineering and his M.S. degree in Electrical and Computer Engineering from North Carolina State University, Raleigh, in 2004. He is currently working toward the Ph.D. degree in Computer Engineering at the same university. His research interests include novel architectural solutions for Multi-Antenna Systems-on-a-Chip (SoC), 3D IC, low-power and high-speed ASIC/SoC design methods, and high-level performance estimation.
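As a rough illustration of the kind of cost-function-driven estimation described above (a sketch under assumed, made-up operator figures, not the framework itself), the following models a direct-form FIR filter behaviorally and estimates area and critical-path delay for a fully parallel implementation from pre-characterized multiplier and adder cores.

```python
# Illustrative sketch only: a behavioral FIR model plus a crude area/delay
# cost function built from per-operator figures, in the spirit of estimating
# hardware cost from pre-characterized cores. The numbers in CORE_LIB are
# placeholders, not data from the paper.
import math
from dataclasses import dataclass

@dataclass
class OperatorCost:
    area: float      # equivalent gates (placeholder units)
    delay_ns: float  # critical-path contribution (placeholder units)

# Hypothetical pre-characterized cores.
CORE_LIB = {"mult16": OperatorCost(area=2500, delay_ns=3.2),
            "add16":  OperatorCost(area=300,  delay_ns=1.1)}

def fir(samples, coeffs):
    """Behavioral direct-form FIR: y[n] = sum_k coeffs[k] * x[n-k]."""
    taps = [0.0] * len(coeffs)
    out = []
    for x in samples:
        taps = [x] + taps[:-1]
        out.append(sum(c * t for c, t in zip(coeffs, taps)))
    return out

def estimate_cost(n_taps, lib=CORE_LIB):
    """Fully parallel direct form: n multipliers plus a balanced adder tree."""
    area = n_taps * lib["mult16"].area + (n_taps - 1) * lib["add16"].area
    delay = lib["mult16"].delay_ns + math.ceil(math.log2(n_taps)) * lib["add16"].delay_ns
    return area, delay

print(fir([1, 0, 0, 0], [0.5, 0.25, 0.25]))  # impulse response equals the coefficients
print(estimate_cost(8))                       # (area, delay) estimate for an 8-tap filter
```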
Proceedings 1994 IEEE International Conference on Computer Design: VLSI in Computers and Processors
This paper describes the large-scale application of logic synthesis and formal verification to the design of the CPU and cache of the high-end series of the Bull DPS7000 mainframe family. The logic CAD suite used for supporting this application proved its efficiency on the design of high-performance circuits.
2006
Integrated circuit (IC) technology is the enabling technology for a whole host of innovative devices and systems that have changed the way we live. Integrated circuits are much smaller and consume less power than the discrete components used to build electronic systems before the 1960s. Integrated circuits are also easier to design and manufacture and are more reliable than discrete systems. The growing sophistication of applications continually pushes the design and manufacturing of integrated circuits and electronic systems to new levels of complexity. Due to major advances in the development of electronics and miniaturization, vendors are capable of building and designing products with increasingly greater functionality, higher performance, lower cost, lower power consumption, and smaller dimensions [1]. However, the bottleneck for some vendors appears to be the ability of designers to target the necessary increase in the complexity of electronic devices. Furthermore, the...
IBM Journal of Research and Development, 2000
… Conference on VLSI, 2002
For the development and evaluation of new algorithms, architectures, and technologies, a large number of benchmark circuits with suitable characteristic parameters is required. Synthetic circuits are a viable alternative to real circuits for compiling benchmark suites. A major advantage of synthetic benchmark circuits is that they provide full control over the important parameters. In this paper, an existing netlist generation algorithm based on bottom-up clustering of subcircuits according to Rent's rule is extended to generate circuits that are more realistic than before. The stochastic properties of the Rent behavior are taken into account, and improvements have been made to increase the accuracy of the imposed Rent characteristics. This guarantees a realistic structure of the interconnection topology, which can be adjusted in a controlled manner. A scheme for combinational-loop prevention has been augmented with a delay control mechanism, so that the generated circuits are truly suitable for timing-driven applications. An indirect validation approach is used to verify that existing placement algorithms exhibit comparable behavior for both real and synthetic circuits.
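For readers unfamiliar with the Rent characteristics these generators impose, the toy sketch below (my own simplification, not the clustering algorithm of the paper) shows the underlying relation T ≈ t·B^p between block size B and external net count T, and how a Rent exponent p can be fitted from sampled blocks of a netlist.

```python
# Toy sketch of a Rent's-rule fit: for blocks of B cells with T external nets,
# T ≈ t * B^p, and p is recovered by a log-log least-squares fit. The netlist
# model and block sampling below are deliberately simplistic assumptions.
import math, random

def external_nets(block, nets):
    """Count nets crossing the boundary of `block` (a set of cell ids)."""
    return sum(1 for net in nets
               if any(c in block for c in net) and any(c not in block for c in net))

def fit_rent_exponent(samples):
    """Least-squares slope of log T versus log B over (B, T) samples."""
    xs = [math.log(b) for b, _ in samples]
    ys = [math.log(t) for _, t in samples]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Random toy netlist: 256 cells connected by 3-pin nets.
rng = random.Random(1)
cells = list(range(256))
nets = [set(rng.sample(cells, 3)) for _ in range(600)]

# Sample blocks of increasing size and fit the exponent.
samples = [(b, external_nets(set(cells[:b]), nets)) for b in (8, 16, 32, 64, 128)]
print("estimated Rent exponent p ≈", round(fit_rent_exponent(samples), 2))
```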
2017
In this paper, we study exact multi-level logic benchmarks. We refer to an exact logic benchmark, or exact benchmark in short, as the optimal implementation of a given Boolean function, in terms of minimum number of logic levels and/or nodes. Exact benchmarks are of paramount importance to design automation because they allow engineers to test the efficiency of heuristic techniques used in practice. When dealing with two-level logic circuits, tools to generate exact benchmarks are available, e.g., espresso-exact, and scale up to relatively large size. However, when moving to modern multi-level logic circuits, the problem of deriving exact benchmarks is inherently more complex. Indeed, few solutions are known. In this paper, we present a scalable method to generate exact multi-level benchmarks with the optimum, or provably close to the optimum, number of logic levels. Our technique involves concepts from graph theory and joint support decomposition. Experimental results show an asymptotic exponential gap between state-of-the-art synthesis techniques and our exact results. Our findings underline the need for strong new research in logic synthesis.
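As background on why minimum level counts can be certified at all, one standard fan-in argument (not necessarily the bound used by the authors) gives a simple lower bound on logic depth:

```latex
% A node with fan-in at most $k$ can combine at most $k$ signals, so a network
% of depth $d$ can depend on at most $k^{d}$ primary inputs. Hence, for a
% function that depends on all $n$ of its inputs,
\[
  k^{d} \ge n
  \quad\Longrightarrow\quad
  d \ge \bigl\lceil \log_{k} n \bigr\rceil .
\]
% Example: an implementation using 2-input gates ($k=2$) of a function that
% depends on all 64 of its inputs needs at least $\lceil \log_{2} 64 \rceil = 6$ levels.
```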
Integration, the VLSI Journal, 1999
For the development and evaluation of CAD tools for the layout, placement, and routing of digital designs, and for the evaluation of new computer hardware, a large number of benchmark circuits is required. Given the lack of sufficient real benchmark designs for use in evaluation tools, one option is to generate such benchmarks. In that case, it is very important that those synthetic benchmarks have the same characteristics as real designs. This paper describes and evaluates a new benchmark generation procedure that produces benchmarks with characteristics similar to those of real benchmark designs. It will be shown that, in this respect, our new technique outperforms an existing method presented by Darnauer and Dai [1].
Lecture Notes in Computer Science, 1995
This document describes the IFIP WG10.2 hardware-verification benchmark circuits, intended for evaluating different approaches and algorithms for hardware verification. The paper presents the rationale behind the circuits, describes them briefly and indicates how to get access to the verification benchmark set.
Papers on Twenty-five years of electronic design automation - 25 years of DAC, 1988
This paper illustrates the methodology of the CMU Design Automation System by presenting an automated design of the PDP-8/E data paths from a functional description. This automated design (using synthesis techniques) is compared both to DEC's implementation and to the Intersil single-chip implementation. As it is becoming possible to integrate larger numbers of logic components on a single chip, the need for more powerful design aids is becoming apparent. Indeed, these aids must be capable of supporting a designer from the system level of design down to the mask level. In this way the systems-level designer can become more aware of the implications of higher-level design trade-offs on implementation properties such as silicon area, power consumption, testability, and speed, and be able to make more timely use of new technologies. The ultimate goal of the Carnegie-Mellon University Design Automation (CMU-DA) System [12] is to provide a technology-relative, structured-design aid to help the hardware designer explore a larger number of possible design implementations. Inputs to the system are a behavioral description of the system to be designed, an objective function which specifies the user's optimization criteria, and a database specifying the hardware components available to the design system. The CMU-DA system differs from other design automation systems because the input design description is a functional specification. Such a specification provides a model that, while accurately characterizing the input-output behavior desired for the implementation, does not necessarily specify its internal structure. The system software collectively performs the synthesis function by transforming the input functional description into a structural description. The design process involves binding implementation decisions in a top-down manner as a design proceeds through the design system. More structural decisions are made at each level until a complete hardware specification is obtained, with the most influential design trade-offs being performed first in order to cut down the design search space. The purpose of this paper is to illustrate the methodology of the CMU-DA system. The results given here are worst case: many optimizations which are straightforward have not been implemented yet; research is in progress on others. The design of the data part of a DEC PDP-8/E [5] from the ISP level through to a TTL and standard-cell design will be discussed. Only the subset of the full DA system which is presently implemented has been used for this example.
Computer Architecture News, 1986
Proc. of PATMOS'97, 1997
Solid-State Electronics, 2014
This work presents the methodology employed to make the MASTAR model (Model for Assessment of CMOS Technologies And Roadmaps [1]), used within the frame of the International Technology Roadmap for Semiconductors (ITRS), compatible with conventional CAD tools. As an example, we use the updated model together with ELDO for the evaluation of digital and SRAM performance.
2016
We present experimental evidence that logic synthesis procedures, especially those based on resynthesis, do not perform well when the original (designer-given) structure of the input description is lost. As such performance has not been observed otherwise, we must conclude that such operation is outside the intended range, and that synthesis examples whose original structure has been lost are not valid for evaluating synthesis procedures. We also outline other causes that may render an example invalid. We do, however, document that such losses have occurred with circuit examples circulating in the logic synthesis community. We therefore suggest what constitutes prudence in collecting examples.