2022, arXiv (Cornell University)
Since it is all but certain that Moore's law will end some day, questions about the post-Moore era are worth asking, as is the search for new computing paradigms that could provide solutions. Revisiting the history of digital electronics since the 1960s provides significant insight into the conditions under which a new emerging technology succeeds in replacing the currently dominant one. Specifically, the past shows when constraints and "walls" have contributed to evolution through improved techniques, and when they have provoked changes of technology (evolution versus breakthrough). The main criterion for a new technology or a new computing paradigm is a significant performance improvement (at least one order of magnitude); cost, space requirements, power, and scalability are the other important parameters.
Science, 2020
From bottom to top: The doubling of the number of transistors on a chip every two years, a seemingly inevitable trend that has been called Moore's law, has contributed immensely to improvements in computer performance. However, silicon-based transistors cannot get much smaller than they are today, and other approaches should be explored to keep performance growing. Leiserson et al. review recent examples and argue that the most promising place to look is at the top of the computing stack, where improvements in software, algorithms, and hardware architecture can bring the much-needed boost. Science, this issue, p. eaam9744
2008
Permission to make digital or hard copies of portions of this work for personal or classroom use is granted provided that the copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise requires prior specific permission by the publisher mentioned above.
2009
Over the past decade, we have witnessed far-reaching changes in the IT field. Semiconductor sales for consumer and communication devices now surpass those for traditional computation. The IT infrastructure is moving away from the desktop and laptop model to centralized servers, communicating with ubiquitously distributed (and often mobile) access devices. Sensor networks and distributed information-capture devices are fundamentally changing the nature of the Internet from download-centric to upload-rich (see Figure 1). Whereas today a billion mobile phones are sold per year, in the near future perhaps upwards of a trillion sensory nodes per year will be sold and deployed, with the majority of these connected wirelessly. User interfaces and human-machine interactions could become responsible for a large percentage of the computational needs. This has the potential to fundamentally change the ways we interact with and live in this information-rich world. This evolution of the IT platform is bound to have a profound impact on the semiconductor business and its operational models. Although Moore's law will still fuel the development of ever more complex devices at lower cost, the nature of these computational and communication devices will probably be substantially different from what we know today, potentially combining hundreds of processing cores. Moving from the core to the fringes of the network, computational prowess will play a less dominant role, and low-power, small form-factor integration of sensors, communication interfaces, and energy sources will be of the essence. It is safe to presume that the "More than Moore" and "Beyond Moore" paradigms will prevail.
Proceedings of the VLDB Endowment
With the end of Moore's Law, database architects are turning to hardware accelerators to offload computationally intensive tasks from the CPU. In this paper, we show that accelerators can facilitate far more than just computation: they enable algorithms and data structures that lavishly expand computation in order to optimize for disparate cost metrics. We introduce the Pliops Extreme Data Processor (XDP), a novel storage engine implemented from the ground up using customized hardware. At its core, XDP consists of an accelerated hash table to index the data in storage using less memory and fewer storage accesses for queries than the best alternative. XDP also employs an accelerated compressor, a capacitor, and a lock-free RAID sub-system to minimize storage space and recovery time while minimizing performance penalties. As a result, XDP overcomes cost contentions that have so far been inescapable.
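The XDP internals are proprietary, but the core idea the abstract describes, a hash table that maps keys to storage locations so that a point query costs a single storage access, can be sketched in a few lines. The sketch below is purely illustrative; the class and names are assumptions, not the Pliops API.

```python
# Toy sketch of a hash index over block storage: each key maps to a
# (block, offset) pair, so a point lookup needs only one storage read.
# Illustrative only; this is not the Pliops XDP design.

class HashIndex:
    def __init__(self, n_buckets=1024):
        self.buckets = [[] for _ in range(n_buckets)]

    def put(self, key, block, offset):
        """Insert or update the storage location for a key."""
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, block, offset)
                return
        bucket.append((key, block, offset))

    def get(self, key):
        """Return (block, offset) for a key, or None if absent."""
        for k, block, offset in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return block, offset  # one storage access would follow
        return None

idx = HashIndex()
idx.put("user:42", block=7, offset=128)
print(idx.get("user:42"))  # → (7, 128)
```

A real design would additionally keep the index compact enough to fit in accelerator memory, which is where the paper's "less memory and fewer storage accesses" claim comes from.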
Lecture Notes in Computer Science, 2010
Progress in computer technology over the last four decades has been spectacular, driven by Moore's Law which, though initially an observation, has become a self-fulfilling prophecy and a board-room planning tool.
2010 Second International Conference on Communication Software and Networks, 2010
Conventional silicon-based computing technology has reached its upper physical limits of design complexity, processing power, memory, energy consumption, density, and heat dissipation. There is therefore a need to search for new alternative computing media that can overcome these problems of conventional computation. The structure and type of these new alternative computing paradigms is a major challenge.
Parallel Computing, 2001
The series is named in honour of George Boole, the first professor of Mathematics at UCC, whose seminal work on logic in the late 1800s is central to modern digital computing. To mark this great contribution, leaders in the fields of computing and mathematics are invited to talk to the general public on directions in science, on past achievements and on visions for the future.
Proceedings of the International Astronomical Union
According to a Top500.org compilation, large computer systems have been doubling in sustained speed every 1.14 years for the last 17 years. If this rapid growth continues, we will have computers by 2020 that can execute an exaflop (10^18 floating-point operations) per second. Storage is also improving in cost and density at an exponential rate. Several innovations that will accompany this growth are reviewed here, including shrinkage of basic circuit components on silicon, three-dimensional integration, and phase-change memory. Further growth will require new technologies, most notably those surrounding the basic building block of computers, the field-effect transistor. Implications of these changes for the types of problems that can be solved are briefly discussed.
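The arithmetic behind the exaflop projection is a simple exponential extrapolation, sketched below. The 2009 baseline of roughly 1.76 petaflops sustained is an assumption chosen for illustration (on the order of the top system of that era), not a figure from the abstract.

```python
def projected_speed(base_flops, base_year, target_year, doubling_years=1.14):
    """Extrapolate sustained speed assuming doubling every `doubling_years`."""
    return base_flops * 2 ** ((target_year - base_year) / doubling_years)

# Assumed baseline: ~1.76 PFLOPS sustained, circa 2009 (illustrative value).
speed_2020 = projected_speed(1.76e15, 2009, 2020)
print(f"{speed_2020:.2e} FLOPS")  # on the order of 1e18, i.e. an exaflop
```

Eleven years at a 1.14-year doubling period is about 9.6 doublings, roughly an 800-fold increase, which is what carries a mid-petaflop machine past the exaflop mark by 2020.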
IEE Proceedings - Nanobiotechnology, 2004
All modern computers are designed using the 'von Neumann' architecture and built using silicon transistor technology. Both architecture and technology have been remarkably successful. Yet there are a range of problems for which this conventional architecture is not particularly well adapted, and new architectures are being proposed to solve these problems, in particular based on insight from nature. Transistor technology has enjoyed 50 years of continuing progress. However, the laws of physics dictate that within a relatively short time period this progress will come to an end. New technologies, based on molecular and biological sciences as well as quantum physics, are vying to replace silicon, or at least coexist with it and extend its capability. The paper describes these novel architectures and technologies, places them in the context of the kinds of problems they might help to solve, and predicts their possible manner and time of adoption. Finally it describes some key questions and research problems associated with their use.
2016
Abstract: Reliance on information technology and computing power, whether to make information systems more efficient or to make advanced research possible, has become pervasive; yet the underlying technology that made it possible, driven by Moore's law, is coming to an end, at least in its present form. This paper explores alternative computer frameworks and architectures to increase the speed and power of computing, namely the multicore and many-core models.
Innovative Research and Applications in Next-Generation High Performance Computing, 2016
2000
In the last 50 years, the field of scientific computing has seen a rapid change of vendors, architectures, technologies, and the usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot the peak performance …
Progress in computer technology over the last four decades has been spectacular, driven by Moore's Law which, though initially an observation, has become a self-fulfilling prophecy and a board-room planning tool. Although Gordon Moore expressed his vision of progress simply in terms of the number of transistors that could be manufactured economically on an integrated circuit, the means of achieving this progress was based principally on shrinking transistor dimensions, and with that came collateral gains in performance, power-efficiency and, last but not least, cost.
Spectrum, IEEE, 1997
A simple observation, made over 30 years ago, on the growth in the number of devices per silicon die has become the central driving force of one of the most dynamic of the world's industries
IEEE Solid-State Circuits Magazine, 2009
In 1960, ten years before Intel developed the first single-chip CPU (microcomputer central processing unit), the revolution that would ensue was inconceivable: the cost of computing dropped by a factor of a million, modes of personal communication changed forever, and intelligent machines took over processes in manufacturing, transportation, medicine, virtually every aspect of our lives. Certainly Moore's law, that the number of transistors on a chip doubles every year, later amended to every two years, is a dominant factor in this revolution. But at Intel, there were three other enabling conditions: a customer with a problem; an applications engineering …
Computer system architecture has been, and always will be, significantly influenced by the underlying trends and capabilities of hardware and software technologies.
IEEE Circuits and Devices Magazine, 2006