The core concept of error arises from comparing a true quantity with its approximation:
Error (E): This is the difference between the true value of a quantity (X) and its approximate
value (X').
Formula: Error = E = X - X'.
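As a quick illustration (the numbers here are hypothetical, not from the text), a short Python sketch of the signed error:

    # Hypothetical values, for illustration only.
    X = 1.0 / 3.0        # true value
    X_prime = 0.333      # approximate value

    E = X - X_prime      # Error: E = X - X'
    print(E)             # ≈ 0.000333..., positive sign shows X' underestimates X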
Building on the general definition of Error, specific measures are defined:
Absolute Error (Ea): This is the magnitude (absolute value) of the difference between the true
value (X) and its approximate value (X'). It is also denoted by δX.
Formula: Ea = |X - X'| = δX.
The absolute error carries the same unit as the exact and approximate values.
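Continuing the same hypothetical numbers, a minimal sketch of the absolute error:

    X = 1.0 / 3.0          # true value
    X_prime = 0.333        # approximate value

    Ea = abs(X - X_prime)  # Absolute error: Ea = |X - X'|
    print(Ea)              # ≈ 3.33e-4, in the same unit as X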
Absolute Accuracy (∆X): This is a number that serves as an upper bound on the magnitude of
the absolute error, |X' - X| ≤ ∆X, and is therefore said to measure absolute accuracy. For a
number rounded to N decimal places, the absolute accuracy ∆X is given by:
Formula: ∆X = 1/2 × 10⁻ᴺ.
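For instance, rounding π to N = 4 decimal places (a hypothetical choice) keeps the rounding error within ∆X = 1/2 × 10⁻⁴, as this sketch checks:

    import math

    X = math.pi
    N = 4
    X_prime = round(X, N)                # 3.1416

    delta_X = 0.5 * 10 ** (-N)           # ∆X = 1/2 × 10⁻ᴺ = 5e-05
    assert abs(X_prime - X) <= delta_X   # |X' - X| ≤ ∆X holds
    print(abs(X_prime - X), delta_X)     # ≈ 7.35e-06 vs 5e-05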
Relative Error (Er): This is the absolute error divided by the magnitude of the true value.
Formula: Er = |X - X'| / |X| = Ea / |X|.
The relative error is independent of units.
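With the same hypothetical numbers, the relative error comes out as a dimensionless ratio:

    X = 1.0 / 3.0
    X_prime = 0.333

    Er = abs(X - X_prime) / abs(X)   # Er = |X - X'| / |X|
    print(Er)                        # ≈ 1.0e-3, no unit attached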
Percentage Error (Ep): This is calculated by multiplying the relative error by 100.
Formula: Ep = 100 × Er = 100 × |X - X'| / |X|.
The percentage error is also independent of units.
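Scaling the relative error by 100 gives the percentage error for the same hypothetical numbers:

    X = 1.0 / 3.0
    X_prime = 0.333

    Ep = 100 * abs(X - X_prime) / abs(X)   # Ep = 100 × Er
    print(Ep)                              # ≈ 0.1 (percent)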