In numerical analysis, the way numbers are represented and manipulated within a computer has a profound effect on the accuracy and reliability of computations. Unlike the continuous nature of real numbers in mathematics, computers work with finite precision, which leads to several types of errors.
Computers represent numbers in binary form, using fixed or floating-point formats. A floating-point number has a format like:
± mantissa × base^exponent.
For example, 3.45 × 10^2 is a floating-point representation in base 10; computers use base 2 (binary).
Important parameters include:
· Precision: The number of significant digits.
· Range: The span of magnitudes that can be represented, determined by the exponent.
Due to finite precision and round-off, various errors arise:
· Round-off Error: Arises from storing only a limited number of digits.
· Truncation Error: Occurs when an infinite process is approximated by a finite one (like using a few terms of a series).
· Absolute Error (AE): |True Value – Approximate Value|
· Relative Error (RE): |AE / True Value|
· Percentage Error: RE × 100%
Understanding these errors is essential for designing reliable numerical algorithms.
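Example (a minimal Python sketch; the true value π and the approximation 22/7 are chosen purely for illustration):
import math

# Illustrative only: approximating pi by 22/7
true_value = math.pi
approx = 22 / 7

absolute_error = abs(true_value - approx)           # AE = |true - approximate|
relative_error = absolute_error / abs(true_value)   # RE = AE / |true value|
percentage_error = relative_error * 100             # RE expressed as a percentage

print(absolute_error, relative_error, percentage_error)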
A general error-propagation formula estimates how errors pass through functions. For a function f(x) evaluated at a perturbed input x + Δx, where Δx is the input error:
f(x + Δx) ≈ f(x) + f’(x)·Δx
This first-order Taylor expansion helps estimate how an input error (Δx) affects the output.
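Example (a minimal sketch; the function f(x) = x² and the values of x and Δx below are illustrative assumptions):
# Estimate how an input error propagates through f(x) = x**2
def f(x): return x**2
def f_prime(x): return 2*x        # derivative of f

x = 2.0        # nominal input
dx = 0.01      # assumed input error
estimated_output_error = abs(f_prime(x) * dx)   # first-order estimate f'(x)*dx
actual_output_error = abs(f(x + dx) - f(x))     # actual change in the output

print(estimated_output_error, actual_output_error)   # ~0.04 vs ~0.0401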
Mathematical functions like sin(x), cos(x), and e^x can be represented by Taylor or Maclaurin series. The approximation involves truncating the series after a few terms:
e^x ≈ 1 + x + x²/2! + x³/3! + … + xⁿ/n!
The remainder term or truncation error is:
Rₙ(x) = f⁽ⁿ⁺¹⁾(ξ)·xⁿ⁺¹ / (n+1)!
where ξ lies between 0 and x. This provides a bound on the error and helps determine how many terms are needed for desired accuracy.
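Example (a minimal sketch; the choices x = 1 and n = 5 are arbitrary, and the bound uses the fact that every derivative of e^x is again e^x):
import math

def exp_taylor(x, n):
    # Approximate e**x by the first n+1 terms of its Maclaurin series
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 1.0, 5
approx = exp_taylor(x, n)
truncation_error = abs(math.exp(x) - approx)
# For e**x, |f^(n+1)(xi)| <= e**x for 0 <= xi <= x, so
# |R_n(x)| <= e**x * x**(n+1) / (n+1)!
error_bound = math.exp(x) * x**(n + 1) / math.factorial(n + 1)

print(approx, truncation_error, error_bound)   # error stays below the bound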
Algebraic equations involve polynomials (e.g., x³ – x – 2 = 0), while transcendental equations contain non-algebraic functions like sin(x), log(x), or e^x (e.g., x = cos(x)). Finding exact roots is often impossible, so numerical methods are used.
The bisection method is a bracketing method, used when the function changes sign over an interval [a, b], i.e., f(a)·f(b) < 0.
Steps:
· Compute midpoint c = (a + b)/2.
· Replace the subinterval [a, b] with [a, c] or [c, b] depending on the sign change.
· Repeat until the interval is small enough.
Pros: Guaranteed convergence, simple.
Cons: Slow, linear convergence.
In the fixed-point iteration method, f(x) = 0 is rearranged as x = g(x) and iterated:
xₙ₊₁ = g(xₙ)
Convergence Criteria:
· If |g’(x)| < 1 near the root, the method converges.
· It is important to choose the correct form of g(x).
Pros: Simple and fast for suitable functions.
Cons: May diverge if function is not well-behaved.
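Example (a minimal sketch of fixed-point iteration; the rearrangement g(x) = cos(x) for the equation x = cos(x), the starting guess, and the tolerance are illustrative assumptions):
import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    # Iterate x_{n+1} = g(x_n) until successive values agree within tol
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Solve x = cos(x); here g(x) = cos(x) and |g'(x)| = |sin(x)| < 1 near the root
root = fixed_point(math.cos, x0=0.5)
print(root)   # approximately 0.739085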
The false position (regula falsi) method combines the bracketing idea of bisection with linear interpolation. Let x₁ and x₂ be such that f(x₁)·f(x₂) < 0.
Root approximation:
x = x₂ – [f(x₂)·(x₂ – x₁)] / [f(x₂) – f(x₁)]
Repeat the process while adjusting the interval.
Pros: Faster than bisection.
Cons: Can stagnate (converges slowly in some cases).
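Example (a minimal sketch of the false position method; the test function x³ – x – 2 and the tolerance are illustrative assumptions):
def false_position(f, x1, x2, tol=1e-8, max_iter=100):
    # Regula falsi: assumes f(x1) and f(x2) have opposite signs
    for _ in range(max_iter):
        # Linear interpolation between (x1, f(x1)) and (x2, f(x2))
        x = x2 - f(x2) * (x2 - x1) / (f(x2) - f(x1))
        if abs(f(x)) < tol:
            return x
        # Keep the subinterval where the sign change occurs
        if f(x1) * f(x) < 0:
            x2 = x
        else:
            x1 = x
    return x

# Example: root of x**3 - x - 2 between 1 and 2
print(false_position(lambda x: x**3 - x - 2, 1.0, 2.0))   # about 1.52138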
The Newton-Raphson method is a powerful open method that uses the function and its derivative:
xₙ₊₁ = xₙ – f(xₙ) / f’(xₙ)
Conditions:
· Function must be differentiable.
· Initial guess must be close to the actual root.
Pros: Very fast (quadratic convergence).
Cons: Fails if f’(x) is zero or near-zero.
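Example (a minimal sketch of Newton-Raphson; the test function, initial guess, and tolerance are illustrative assumptions):
def newton_raphson(f, f_prime, x0, tol=1e-10, max_iter=50):
    # Iterate x_{n+1} = x_n - f(x_n)/f'(x_n); assumes x0 is close to the root
    x = x0
    for _ in range(max_iter):
        fp = f_prime(x)
        if fp == 0:
            raise ZeroDivisionError("derivative is zero; method fails")
        x_new = x - f(x) / fp
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of x**3 - x - 2 starting from x0 = 1.5
print(newton_raphson(lambda x: x**3 - x - 2, lambda x: 3*x**2 - 1, 1.5))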
For polynomials with complex roots, standard real-valued methods may not suffice; the Durand-Kerner and Bairstow methods are commonly used.
In Bairstow’s method, a quadratic factor is iteratively approximated and divided out of the polynomial.
Graeffe’s Method (Root-Squaring Method): Transforms the polynomial so that the magnitudes of its roots become widely separated by squaring:
· Repeated squaring increases the difference in magnitude of roots.
· Used to approximate complex and multiple roots.
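Example (a minimal sketch of the Durand-Kerner iteration for a monic polynomial; the example polynomial, starting values, and tolerance are illustrative assumptions):
def durand_kerner(coeffs, tol=1e-12, max_iter=100):
    # Find all (possibly complex) roots of a monic polynomial.
    # coeffs = [1, a1, ..., an] represents x**n + a1*x**(n-1) + ... + an.
    n = len(coeffs) - 1

    def p(x):
        return sum(c * x**(n - i) for i, c in enumerate(coeffs))

    # Distinct complex starting guesses (a common conventional choice)
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        new_roots = []
        for i, r in enumerate(roots):
            denom = 1
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            new_roots.append(r - p(r) / denom)
        if all(abs(a - b) < tol for a, b in zip(new_roots, roots)):
            return new_roots
        roots = new_roots
    return roots

# Example: roots of x**3 - 3*x**2 + 3*x - 5 = 0 (one real, two complex)
print(durand_kerner([1, -3, 3, -5]))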
The order of convergence measures how quickly an iterative method approaches the actual root.
For a method with root r, and sequence {xₙ}, the error eₙ = |xₙ – r|.
· Linear convergence: eₙ₊₁ ≈ C·eₙ
· Quadratic convergence: eₙ₊₁ ≈ C·(eₙ)²
· Super-linear: faster than linear, but not quadratic.
Newton-Raphson shows quadratic convergence under ideal conditions, while bisection has linear convergence.
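Example (a minimal sketch comparing the two behaviours on f(x) = x² – 2, whose root is √2; all values are illustrative):
import math

# Illustrative comparison on f(x) = x**2 - 2, whose root is sqrt(2)
def f(x): return x**2 - 2
true_root = math.sqrt(2)

# Bisection on [1, 2]: the error bound halves every step (linear)
a, b = 1.0, 2.0
bisection_bounds = []
for _ in range(6):
    c = (a + b) / 2
    bisection_bounds.append((b - a) / 2)   # bound on |c - root|
    if f(a) * f(c) < 0:
        b = c
    else:
        a = c

# Newton-Raphson from x0 = 1.5: the error roughly squares every step (quadratic)
x = 1.5
newton_errors = []
for _ in range(4):
    x = x - f(x) / (2 * x)                 # f'(x) = 2x
    newton_errors.append(abs(x - true_root))

print(bisection_bounds)   # 0.5, 0.25, 0.125, ...
print(newton_errors)      # drops toward machine precision very rapidly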
Solving P(x) = a₀xⁿ + a₁xⁿ⁻¹ + … + aₙ = 0 involves:
· Using methods like synthetic division, Horner’s method (see the sketch after this list), or factorization for lower degrees.
· Iterative numerical methods for higher degrees.
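Example (a minimal sketch of Horner’s method with synthetic division; the polynomial x³ – x – 2 is reused from earlier purely for illustration):
def horner(coeffs, x0):
    # Evaluate P(x0) and synthetically divide P(x) by (x - x0).
    # coeffs = [a0, a1, ..., an] represents a0*x**n + a1*x**(n-1) + ... + an.
    # Returns (P(x0), coefficients of the quotient polynomial).
    values = [coeffs[0]]
    for c in coeffs[1:]:
        values.append(values[-1] * x0 + c)
    remainder = values.pop()   # the last value is P(x0)
    return remainder, values

# Example: P(x) = x**3 - x - 2 evaluated at x = 2
value, quotient = horner([1, 0, -1, -2], 2)
print(value, quotient)   # 4, [1, 2, 3]  (quotient x**2 + 2x + 3)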
Polynomial root-finding is essential in various applications like control systems, signal processing, etc.
Students develop a working knowledge of how to apply numerical algorithms to solve real-world problems like root finding, with a clear grasp of error behavior and precision constraints.
Each method discussed can be implemented using programming languages like Python, C++, or MATLAB, encouraging students to translate mathematical logic into computer-executable code.
Example:
# Bisection Method in Python
def f(x):
    # Example function: f(x) = x**3 - x - 2 has a root between 1 and 2
    return x**3 - x - 2

def bisection(a, b, tol):
    # Requires f(a) and f(b) to have opposite signs
    while (b - a) / 2 > tol:
        c = (a + b) / 2          # midpoint of the current interval
        if f(c) == 0:
            return c             # exact root found
        elif f(a) * f(c) < 0:
            b = c                # root lies in [a, c]
        else:
            a = c                # root lies in [c, b]
    return (a + b) / 2
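For instance, calling the function on the interval [1, 2], where f changes sign:
root = bisection(1, 2, 1e-6)
print(root)   # approximately 1.52138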
Understanding how errors accumulate, and how to predict them using Taylor expansions and relative error formulas, is crucial for numerical reliability. Students learn when and why an algorithm may fail or give inaccurate results.
Numerical root-finding often supports statistical algorithms like regression and optimization, which are core to data science and machine learning. For instance, nonlinear least-squares problems are typically solved by iterative numerical methods.
Each method is tied to practical problems:
· Engineering design optimization.
· Physics simulations (e.g., solving Kepler’s equation; see the sketch after this list).
· Economics (equilibrium computation).
· Computer Graphics (intersection of curves).
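Example (a minimal sketch for the Kepler’s equation item above, solved with Newton-Raphson; the mean anomaly, eccentricity, and tolerance are illustrative assumptions):
import math

def solve_kepler(M, e, tol=1e-12):
    # Solve Kepler's equation M = E - e*sin(E) for E via Newton-Raphson
    E = M if e < 0.8 else math.pi   # common starting guess
    while True:
        # Newton step for f(E) = E - e*sin(E) - M, with f'(E) = 1 - e*cos(E)
        delta = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            return E

# Example: mean anomaly M = 1.0 rad, eccentricity e = 0.2
print(solve_kepler(1.0, 0.2))   # eccentric anomaly E in radians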
By the end of this module, students should confidently:
· Understand the mathematical and computational foundations of numerical methods.
· Implement them on computers for solving complex real-world problems.
· Identify, predict, and mitigate numerical errors.
· Bridge numerical methods with statistical and analytical tasks in computational applications.