Encyclopedia of Information Science and Technology, Third Edition
This article discusses the significance of decimal multiplication in computer systems, particularly in applications requiring high precision, such as financial and commercial calculations. It examines why decimal arithmetic is harder to implement in hardware than binary arithmetic, and why binary systems are unsuitable for such applications because of representation and rounding errors. The paper surveys algorithms for the hardware design of decimal multiplication, covering both fixed-point and floating-point methods, and contrasts iterative with parallel solutions.
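A quick illustration of the representation problem that motivates dedicated decimal arithmetic (a minimal Python sketch; the decimal module here stands in for decimal hardware):

    # 0.1 has no finite binary expansion, so binary floating point carries
    # a representation error that is unacceptable in accounting.
    print(0.1 + 0.2 == 0.3)             # False
    print(f"{0.1 + 0.2:.17f}")          # 0.30000000000000004

    # A decimal representation avoids the issue for decimal data.
    from decimal import Decimal
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True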
2002
Studying floating-point arithmetic, several authors have shown that the implemented operations (addition, subtraction, multiplication, division, and square root) can compute both a result and an exact correcting term using the same format as the inputs. Following a path initiated in 1965, all of these authors assumed that neither underflow nor overflow occurred in the process. Overflow is not critical, since such an event triggers an exception that produces persistent non-numeric quantities. Underflow, however, may be fatal to the process, as it silently returns incorrect numeric values. Our new necessary and sufficient conditions guarantee that the exact floating-point operations are correct when the result is a number. We also present properties that hold when precise rounding is not available in hardware and only faithful rounding is performed, as in some digital signal processing circuits. We have validated our proofs with the Coq proof assistant. Our development raised many questions, some expected and others very surprising.
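The "result plus exact correcting term" idea is easiest to see for addition. The sketch below shows the classical TwoSum recipe (one member of the 1965 family of algorithms the abstract alludes to; the paper's conditions concern exactly the no-overflow/no-underflow hypotheses assumed here):

    # TwoSum: computes s = fl(a + b) and the exact error e, so a + b == s + e.
    # Valid in round-to-nearest binary floating point, absent overflow/underflow.
    def two_sum(a: float, b: float):
        s = a + b
        bb = s - a
        e = (a - (s - bb)) + (b - bb)
        return s, e

    s, e = two_sum(1e16, 1.0)
    print(s, e)   # 1e+16 1.0 -- the 1.0 lost by rounding is recovered exactly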
2006
A new modeless floating-point arithmetic, called precision arithmetic, is developed to track, limit, and reject the accumulation of calculation errors during floating-point calculations by reinterpreting the polymorphic representation of conventional floating-point arithmetic. The validity of this strategy is demonstrated by tracking the calculation errors and rejecting the meaningless results of a few representative algorithms under various conditions. Using this arithmetic, each algorithm appears to have a constant error ratio and a constant degradation ratio regardless of input data, and the error in the significand appears to propagate very slowly according to a constant exponential distribution specific to the algorithm. In addition, an unfaithful artifact of the discrete Fourier transform is discussed.
2006
A new deterministic floating-point arithmetic, called precision arithmetic, is developed to track precision through arithmetic calculations. It uses a novel rounding scheme to avoid the excessive rounding-error propagation of conventional floating-point arithmetic. Unlike interval arithmetic, its uncertainty tracking is based on statistics and the central limit theorem, giving a much tighter bounding range. Its stable rounding-error distribution is approximated by a truncated normal distribution. Generic standards and systematic methods for validating uncertainty-bearing arithmetics are discussed. Precision arithmetic is found to be better than interval arithmetic in both uncertainty tracking and uncertainty bounding for typical usage. The implementation is publicly available at this http URL.
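A toy contrast between the two bounding styles (illustrative only; the rules below are a generic worst-case model and a generic independent-variance model, not the paper's precision-arithmetic rules):

    # Interval arithmetic: worst-case bounds add up, so width grows linearly.
    def interval_add(x, y):
        (xl, xh), (yl, yh) = x, y
        return (xl + yl, xh + yh)

    # Statistical model: variances add (independence assumed), so the standard
    # deviation grows like sqrt(n) -- the central-limit-theorem intuition.
    def stat_add(x, y):
        (xv, xvar), (yv, yvar) = x, y
        return (xv + yv, xvar + yvar)

    iv, st = (0.0, 0.0), (0.0, 0.0)
    for _ in range(10_000):
        iv = interval_add(iv, (1.0 - 1e-6, 1.0 + 1e-6))
        st = stat_add(st, (1.0, 1e-12))
    print((iv[1] - iv[0]) / 2)   # ~0.01   (worst-case half-width)
    print(st[1] ** 0.5)          # ~0.0001 (statistical spread, 100x tighter)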
IEEE Transactions on Computers, 2009
The IEEE Standard 754-1985 for Binary Floating-Point Arithmetic [1] has been revised [2], and an important addition is the definition of decimal floating-point arithmetic. This is intended mainly to provide a robust, reliable framework for financial applications, which are often subject to legal requirements concerning the rounding and precision of results, because binary floating-point arithmetic may introduce small but unacceptable errors. Using binary floating-point calculations to emulate decimal calculations in order to correct this issue has led to numerous proprietary software packages, each with its own characteristics and capabilities. IEEE 754R decimal arithmetic should unify the way decimal floating-point calculations are carried out on various platforms. This paper presents new algorithms and properties that are used in a software implementation of the IEEE 754R decimal floating-point arithmetic, with emphasis on using binary operations efficiently. The focus is on rounding techniques for decimal values stored in binary format, but algorithms for the more important or interesting operations of addition, multiplication, division, and conversion between binary and decimal floating-point formats are also outlined. Performance results are included for a wide range of operations, showing promise that our approach is viable for applications that require decimal floating-point calculations.
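The core of such rounding is removing trailing decimal digits from an integer significand with correct round-to-nearest-even behavior. A minimal sketch, using Python integers rather than the binary fixed-point tricks the paper develops (which replace the division below with multiplication by precomputed constants):

    # Round an integer significand c to `drop` fewer decimal digits,
    # round-half-to-even, using only integer operations.
    def round_half_even_decimal(c: int, drop: int) -> int:
        p = 10 ** drop
        q, r = divmod(c, p)
        half = p // 2
        if r > half or (r == half and q % 2 == 1):
            q += 1
        return q

    print(round_half_even_decimal(1234567890123456789, 3))  # 1234567890123457
    print(round_half_even_decimal(25, 1))                   # 2 (tie rounds to even)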
Trends Journal of Sciences Research
This paper presents the results of research on the reliability of computer calculations. Relevant examples of incorrect program behavior are demonstrated, both quite simple ones and much less obvious ones, such as S. Rump's example. In addition to the mathematical explanations, the authors focus on purely software-level means of controlling the accuracy of complex calculations, giving examples of effective use of the decimal and fractions modules in Python 3.x.
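Rump's example can be reproduced directly with those modules; a sketch (the printed values are what a typical IEEE 754 machine produces, and are worth re-running):

    from fractions import Fraction

    # S. Rump's example: innocuous-looking, catastrophically wrong in doubles.
    def rump_float(a, b):
        return (333.75*b**6 + a**2*(11*a**2*b**2 - b**6 - 121*b**4 - 2)
                + 5.5*b**8 + a/(2*b))

    def rump_exact(a, b):
        a, b = Fraction(a), Fraction(b)
        return (Fraction(1335, 4)*b**6 + a**2*(11*a**2*b**2 - b**6 - 121*b**4 - 2)
                + Fraction(11, 2)*b**8 + a/(2*b))

    print(rump_float(77617.0, 33096.0))    # a huge wrong value (around -1.18e21 here)
    print(float(rump_exact(77617, 33096))) # -0.8273960599468214, the true result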
ArXiv, 2020
The differences between the sets in which ideal arithmetic takes place and the sets of floating-point numbers are outlined. A set of classical problems in correct numerical evaluation is presented to raise the awareness of newcomers to the field. A prophylactic, self-defensive approach to numerical computation is advocated.
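One classical example from this genre of problem sets (my choice of illustration, not necessarily the paper's): catastrophic cancellation in the textbook quadratic formula, and the standard defensive rewriting:

    import math

    # Roots of x^2 + b*x + c with b*b >> 4*c: the small root cancels badly.
    a, b, c = 1.0, 1e8, 1.0
    naive  = (-b + math.sqrt(b*b - 4*a*c)) / (2*a)   # subtracts nearly equal numbers
    stable = (2*c) / (-b - math.sqrt(b*b - 4*a*c))   # same root, no cancellation
    print(naive)    # -7.450580596923828e-09 : off by ~25%, digits eaten by cancellation
    print(stable)   # ~ -1e-08, essentially correct to full precision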
Mathematical and Computer Modelling, 2009
Since radix-10 arithmetic has been regaining importance over the last few years, high-performance decimal systems and techniques are in high demand. In this paper, a modification of the CORDIC method for decimal arithmetic is proposed in order to speed up the calculation. The algorithm works with BCD operands, and no conversion to binary is needed. A significant reduction in the number of iterations is achieved in comparison with the original decimal CORDIC method. Experiments showing the advantages of the new method are described, and delay results obtained from an FPGA implementation of the method are also reported.
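For orientation, here is the plain binary CORDIC skeleton in rotation mode (a Python sketch only; the paper's contribution is a BCD decimal variant with fewer iterations, which this does not reproduce):

    import math

    def cordic_sin_cos(theta, n=40):
        """Rotate (K, 0) by theta using only shift-and-add style updates."""
        angles = [math.atan(2.0**-i) for i in range(n)]
        K = 1.0
        for i in range(n):                      # precomputed scaling constant
            K /= math.sqrt(1.0 + 2.0**(-2*i))
        x, y, z = K, 0.0, theta
        for i in range(n):
            d = 1.0 if z >= 0.0 else -1.0       # steer the residual angle to 0
            x, y = x - d*y*2.0**-i, y + d*x*2.0**-i
            z -= d*angles[i]
        return y, x                             # (sin(theta), cos(theta))

    print(cordic_sin_cos(0.5))   # close to (0.479425..., 0.877582...)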
ArXiv, 2012
The study addresses the problem of precision in floating-point (FP) computations. A method for estimating the errors that affect intermediate and final results is proposed, and a summary of many software simulations is discussed. The basic idea consists of representing FP numbers by means of a data structure that collects both the value and estimated error information. Under certain constraints, the estimate of the absolute error is accurate and has a compact statistical distribution. By monitoring the estimated relative error during a computation (an ad hoc definition of relative error is used), the validity of results can be ensured. The error estimate enables the implementation of robust algorithms and the detection of ill-conditioned problems. A dynamic extension of number precision, under the control of error estimates, is advocated in order to compute results within given error bounds. A reduced time penalty could be achieved by a specialized FP processor. The realization of a har...
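A minimal sketch of such a value-plus-error data structure (the propagation rules below are generic first-order running error analysis, assumed for illustration rather than taken from the paper):

    import math

    U = 2.0**-53   # unit roundoff of IEEE 754 binary64

    class EstFloat:
        """A float paired with a running estimate of its absolute error."""
        def __init__(self, value, err=0.0):
            self.value, self.err = value, err
        def __add__(self, other):
            v = self.value + other.value
            # inherited errors plus a fresh rounding error of at most |v|*U
            return EstFloat(v, self.err + other.err + abs(v)*U)
        def __mul__(self, other):
            v = self.value * other.value
            e = abs(other.value)*self.err + abs(self.value)*other.err + abs(v)*U
            return EstFloat(v, e)
        def rel(self):
            return self.err / abs(self.value) if self.value else math.inf

    x = EstFloat(0.1, 0.1*U)         # 0.1 already carries a representation error
    s = x + x + x
    print(s.value, s.err, s.rel())   # value, absolute bound, relative bound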
In this paper we demonstrate how error-correcting addition and multiplication can be performed using self-checking modules. Our technique is based on the observation that a suitably designed full adder, in the presence of any single stuck-at fault, produces the fault-free complement of the desired output when fed the complement of its functional input. We initially apply conventional parity-based error detection in arithmetic modules; upon detection of a fault, this is followed by input inversion, recomputation, and suitable output inversion. We present adder, register, and multiplier designs that can be used in this context. We also design a large-scale circuit (an elliptical filter) using this technique, outlining the area savings with respect to traditional triple modular redundancy.
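The enabling observation is that both full-adder outputs are self-dual Boolean functions: complementing all inputs complements both outputs, so recomputing with inverted inputs and inverting the outputs reproduces the fault-free result. The exhaustive check below confirms the property for the fault-free function (modeling the stuck-at faults themselves is beyond this sketch):

    from itertools import product

    def full_adder(a, b, cin):
        s = a ^ b ^ cin
        cout = (a & b) | (a & cin) | (b & cin)
        return s, cout

    for a, b, cin in product((0, 1), repeat=3):
        s, cout = full_adder(a, b, cin)
        sn, cn = full_adder(1 - a, 1 - b, 1 - cin)   # complemented inputs
        assert (sn, cn) == (1 - s, 1 - cout)         # complemented outputs
    print("full adder is self-dual on all 8 input combinations")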
IBM Journal of Research and Development, 1999
We provide an overview of the notion of error tolerance and describe the context that motivated its development. We then present a summary of some of our case studies, which demonstrate the significant potential benefits of error tolerance, and a summary of the testing and design techniques that we have developed for error-tolerant systems. Finally, we conclude by identifying the paradigm shifts required for wide exploitation of error tolerance.
18th IEEE Symposium on Computer Arithmetic (ARITH '07), 2007
The draft revision of the IEEE Standard for Floating-Point Arithmetic (IEEE P754) includes a definition of decimal floating-point (FP) arithmetic in addition to the widely used binary FP specification. The decimal standard raises new concerns regarding the verification of hardware- and software-based designs. The verification process normally emphasizes intricate corner cases and uncommon events, and the decimal format introduces several new classes of such events beyond those characteristic of binary FP. Our work addresses the following problem: given a decimal floating-point operation, a constraint on the intermediate result, and a constraint on the representation selected for the result, find random inputs for the operation that yield an intermediate result compatible with these specifications. The paper supplies efficient analytic solutions for addition and for some cases of multiplication and division, and probabilistic algorithms for the remaining cases. These algorithms prove efficient in the actual implementation.
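For addition, the analytic case is simple enough to sketch: draw one operand at random and solve exactly for the other, so the (exact) intermediate result hits the target. This is a much-simplified illustration; the paper's constraint classes, formats, and the multiplication/division cases are not modeled here:

    import random
    from decimal import Decimal

    def random_addition_case(target: Decimal, digits: int = 16):
        """Return random (x, y) with x + y exactly equal to `target`."""
        x = Decimal(random.randint(-10**digits, 10**digits)).scaleb(-8)
        y = target - x          # exact at sufficient decimal precision
        return x, y

    x, y = random_addition_case(Decimal("9.999999E+5"))
    print(x, y, x + y)          # x + y reproduces the target value exactly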
1972 IEEE 2nd Symposium on Computer Arithmetic (ARITH), 1972
IJCA Proceedings on International Conference on VLSI, Communications and Instrumentation (ICVCI)
Floating-point representation supports a much wider range of values than fixed-point representation. The performance of decimal floating-point operations is an important measure in many application domains, such as financial, commercial, and internet-based computations. In this research, an iterative decimal floating-point multiplier design in the IEEE 754-2008 format is proposed. The design uses a decimal fixed-point multiplier based on the RPS algorithm, which generates partial products for column accumulation from the least significant end in an iterative manner. It also incorporates the necessary decimal floating-point exponent processing, rounding, and exception detection. The rounding process is initiated in parallel with the decimal fixed-point multiplication of the significand digits; the intermediate exponent, the product sign, the sticky bit, the round digit, and the guard digit are determined on the fly as the partial products are accumulated. Simulation results for 32-bit data show a delay reduction of 25.12% compared with existing designs in the literature.
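The general pattern of least-significant-first column accumulation is easy to show in software (a hedged sketch of the schoolbook pattern only; the paper's RPS recoding and hardware specifics are not reproduced):

    def decimal_mul(x_digits, y_digits):
        """Multiply numbers given as little-endian lists of decimal digits,
        accumulating one product column at a time from the least significant
        end, so low-order digits (guard/round/sticky) are available early."""
        n, m = len(x_digits), len(y_digits)
        out, carry = [0]*(n + m), 0
        for col in range(n + m):
            acc = carry
            for i in range(n):
                j = col - i
                if 0 <= j < m:
                    acc += x_digits[i] * y_digits[j]
            carry, out[col] = divmod(acc, 10)
        return out

    print(decimal_mul([4, 2, 1], [6, 5]))   # 124 * 56 = 6944 -> [4, 4, 9, 6, 0]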
Technology audit and production reserves, 2023
The paper examines a well-known approach to the construction of cores in multi-core microprocessors based on a calculation model driven by the data-flow graph. The architecture of such cores follows the reduced-instruction-set-level data-flow model proposed by Yale Patt; the object of research is a model of computation based on data-flow control in a multi-core microprocessor. The paper presents a floating-point multiplier that can be dynamically reconfigured to handle five different floating-point operand formats, together with an approach to building an operating device for the addition and subtraction of a sequence of floating-point numbers for which the law of associativity holds without additional programming complications. On the basis of the developed floating-point multiplier circuit, various high-speed multipliers, both fixed point and floating point, can be implemented and may find commercial application, and by adding memory elements to each of the multiplier segments, very fast pipelined multipliers can be built. The multiplier scheme has one limitation: the exponent is not evaluated for denormalized operands; however, the floating-point standard does not require denormalized operands to be handled, and in such cases the multiplier packs infinity into the result. The implementation of an inter-core operating device for a floating-point adder-subtractor can be regarded as a new practical approach to dynamic scheduling of addition-subtraction operations within a multi-core microprocessor. Its main limitation is the large hardware cost of implementation; to assess this complexity, the bit widths of its main blocks were evaluated for the various floating-point number formats defined by the floating-point standard.
Soft Computing
We devise a variable-precision floating-point arithmetic by exploiting the framework provided by the Infinity Computer. This is a computational platform implementing the Infinity Arithmetic system, a positional numeral system which can handle both infinite and infinitesimal quantities, expressed using the positive and negative finite or infinite powers of the radix ① (grossone). The computational features offered by the Infinity Computer allow us to dynamically change the accuracy of representation and of floating-point operations during the flow of a computation. When suitably implemented, this possibility turns out to be particularly advantageous when solving ill-conditioned problems. In fact, compared with a standard multi-precision arithmetic, here the accuracy is improved only when needed, thus not affecting the overall computational effort that much. An illustrative example concerning the solution of a nonlinear equation is also presented.
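The "improve accuracy only when needed" policy can be mimicked with an ordinary multiple-precision library; the sketch below uses Python's decimal module as a stand-in for the Infinity Computer (the platform, its ①-based representation, and the paper's actual example are not emulated):

    from decimal import Decimal, getcontext

    def newton_adaptive(f, df, x0, tol=Decimal("1e-30"), prec=10):
        """Newton iteration that doubles the working precision only when
        the step size stalls above the requested tolerance."""
        x = Decimal(x0)
        while True:
            getcontext().prec = prec
            for _ in range(50):
                step = f(x) / df(x)
                x -= step
                if abs(step) <= tol:
                    return x, prec
            prec *= 2          # accuracy raised only when actually needed

    # Solve x^2 - 2 = 0 to ~30 digits, starting from a cheap 10-digit precision.
    root, used = newton_adaptive(lambda x: x*x - 2, lambda x: 2*x, 1)
    print(root, used)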
This section reviews general information concerned with the quantification of errors, beginning with the concept of significant digits. Significant digits indicate how much confidence one has in a reported number. For example, if someone asked me the population of my county, I might simply respond, "The population of the Adama area is 1 million!" Is it exactly one million? I don't know, but I am quite sure it is not two million. The situation changes if someone offers me $100 for every citizen of the county: then I need an exact count, which this year would be 1,079,587. In the statement that the population is 1 million there is only one significant digit (the 1), whereas in the statement that the population is 1,079,587 there are seven significant digits, meaning I am confident in the accuracy of the number up to the seventh digit. How, then, do we distinguish the number of correct digits in 1,000,000 from that in 1,079,587? Scientific notation makes this explicit: writing 1 × 10^6 conveys one significant digit, while 1.079587 × 10^6 conveys seven.
Springer eBooks, 2018
The previous chapter has shown that operations on floating-point numbers are naturally expressed in terms of integer or fixed-point operations on the significand and the exponent. For instance, to obtain the product of two floating-point numbers, one basically multiplies the significands and adds the exponents. However, obtaining the correctly rounded result may require considerable design effort and the use of nonarithmetic primitives such as leading-zero counters and shifters. This chapter details the implementation of these algorithms in hardware, using digital logic. Describing in full detail all the possible hardware implementations of the needed integer arithmetic primitives is well beyond the scope of this book; the interested reader will find this information in textbooks on the subject [345, 483, 187]. After an introduction to the context of hardware floating-point implementation in Section 8.1, we briefly review these primitives in Section 8.2, discuss their cost in terms of area and delay, and then focus on wiring them together in the rest of the chapter. We assume in this chapter that inputs and outputs are encoded according to the IEEE 754-2008 Standard for Floating-Point Arithmetic.
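In software terms, the basic recipe looks as follows (a toy sketch on unsigned, normalized operands with an 8-bit significand; IEEE 754 encodings, signs, special values, and exception handling are all omitted):

    P = 8   # toy significand width: value is m * 2**e with 2**(P-1) <= m < 2**P

    def fp_mul(m1, e1, m2, e2):
        m = m1 * m2                    # exact double-width significand product
        e = e1 + e2                    # add the exponents
        shift = m.bit_length() - P     # bits to discard when renormalizing
        keep, rem = m >> shift, m & ((1 << shift) - 1)
        half = 1 << (shift - 1)
        if rem > half or (rem == half and keep & 1):   # round to nearest even
            keep += 1
            if keep == 1 << P:         # rounding carried out of the significand
                keep, shift = keep >> 1, shift + 1
        return keep, e + shift

    # 1.5 * 1.25: m=192,e=-7 encodes 1.5; m=160,e=-7 encodes 1.25.
    m, e = fp_mul(192, -7, 160, -7)
    print(m * 2.0**e)                  # 1.875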
International Journal of Modern Education and Computer Science, 2017
This paper presents a new fault-tolerant architecture for floating-point multipliers in which the fault-tolerance capability is achieved at the cost of reduced output precision. In this approach, the hardware cost of the primary design is first reduced by lowering the output precision; appropriate redundancy is then added to provide error detection/correction, in such a way that the overall hardware requirement remains almost the same as that of the primary multiplier. The proposed multiplier can tolerate a variety of permanent and transient faults, given the reduced precision acceptable in many applications. The implementation results reveal that 17-bit and 14-bit mantissas are sufficient to obtain a floating-point multiplier with error detection or error correction, respectively, instead of the 23-bit mantissa of the IEEE 754 standard-based multiplier, with area and power overheads of only a few percent.