Prerequisites: Systems of linear equations (ch160), matrix row operations

You will learn:
Gaussian elimination: row reduction to echelon form
Back substitution
Partial pivoting for numerical stability
LU decomposition connection
Environment: Python 3.x, numpy
# --- Gaussian elimination with partial pivoting ---
import numpy as np

def gaussian_elimination(A, b):
    """
    Solve Ax = b via Gaussian elimination with partial pivoting.

    Args:
        A: 2D array (n, n)
        b: 1D array (n,)
    Returns:
        x: solution vector (n,)
    """
    n = len(b)
    # Augmented matrix [A | b], in floating point
    M = np.column_stack([A.astype(float), b.astype(float)])
    # Forward elimination
    for col in range(n):
        # Partial pivoting: swap the current row with the row (at or below
        # the diagonal) having the largest absolute value in this column
        max_row = col + np.argmax(np.abs(M[col:, col]))
        M[[col, max_row]] = M[[max_row, col]]
        if abs(M[col, col]) < 1e-12:
            raise ValueError(f"Matrix is singular at column {col}")
        # Eliminate entries below the pivot
        for row in range(col + 1, n):
            factor = M[row, col] / M[col, col]
            M[row, col:] -= factor * M[col, col:]
    # Back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - np.dot(M[i, i+1:n], x[i+1:n])) / M[i, i]
    return x
# Test
A = np.array([[2., 1., -1.],
[-3.,-1., 2.],
[-2., 1., 2.]])
b = np.array([8., -11., -3.])
x_gauss = gaussian_elimination(A, b)
x_numpy = np.linalg.solve(A, b)
print(f"Our solution: {x_gauss}")
print(f"NumPy solution: {x_numpy}")
print(f"Match: {np.allclose(x_gauss, x_numpy)}")
print(f"Residual: {np.linalg.norm(A @ x_gauss - b):.2e}")
# Show the elimination steps for a small system (no pivoting, for clarity)
print("\n--- Elimination trace ---")
A2 = np.array([[1., 2., 1.],
               [4., 7., 2.],
               [2., 1., 3.]])
b2 = np.array([1., 3., 4.])
M = np.column_stack([A2, b2])
print(f"Initial:\n{M}")
for col in range(3):
    for row in range(col + 1, 3):
        factor = M[row, col] / M[col, col]
        M[row, col:] -= factor * M[col, col:]
    print(f"After eliminating col {col}:\n{M}")

Our solution: [ 2. 3. -1.]
NumPy solution: [ 2. 3. -1.]
Match: True
Residual: 8.88e-16
--- Elimination trace ---
Initial:
[[1. 2. 1. 1.]
[4. 7. 2. 3.]
[2. 1. 3. 4.]]
After eliminating col 0:
[[ 1. 2. 1. 1.]
[ 0. -1. -2. -1.]
[ 0. -3. 1. 2.]]
After eliminating col 1:
[[ 1. 2. 1. 1.]
[ 0. -1. -2. -1.]
[ 0. 0. 7. 5.]]
After eliminating col 2:
[[ 1. 2. 1. 1.]
[ 0. -1. -2. -1.]
[ 0. 0. 7. 5.]]
4. Mathematical Formulation
Row operations that don't change the solution set:
1. Swap two rows: Rᵢ ↔ Rⱼ
2. Multiply row by nonzero scalar: Rᵢ ← k·Rᵢ
3. Add multiple of one row to another: Rᵢ ← Rᵢ + k·Rⱼ
Goal: transform [A|b] into [U|c] where U is upper triangular.
Then back-substitute from bottom row upward.
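Concretely, once the system is in the form Ux = c, back substitution computes, for i = n, n−1, …, 1:

```latex
x_i = \frac{1}{u_{ii}} \left( c_i - \sum_{j=i+1}^{n} u_{ij} \, x_j \right)
```

This is exactly what the back-substitution loop in the code above evaluates, working from the bottom row upward so that every x_j on the right-hand side is already known.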
Partial pivoting: before eliminating column j, swap row j
with the row below it having the largest |entry| in column j.
This prevents division by small numbers → better numerical stability.
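To see why this matters, here is a quick illustration (a classic textbook example, not part of the original code): with a tiny pivot, elimination without row swaps destroys the first component of the solution, while a single row swap recovers it.

```python
import numpy as np

# Classic ill-behaved system: tiny pivot in the (0, 0) position.
# The exact solution is very close to x = [1, 1].
A = np.array([[1e-18, 1.0],
              [1.0,   1.0]])
b = np.array([1.0, 2.0])

# Elimination WITHOUT pivoting: divide by the tiny pivot 1e-18.
M = np.column_stack([A, b])
factor = M[1, 0] / M[0, 0]       # enormous multiplier (1e18)
M[1, :] -= factor * M[0, :]      # swamps row 2, losing its information
x_bad = np.zeros(2)
x_bad[1] = M[1, 2] / M[1, 1]
x_bad[0] = (M[0, 2] - M[0, 1] * x_bad[1]) / M[0, 0]

# Elimination WITH partial pivoting: swap rows so the pivot is 1.0.
M = np.column_stack([A, b])
M[[0, 1]] = M[[1, 0]]
factor = M[1, 0] / M[0, 0]       # tiny multiplier (1e-18), harmless
M[1, :] -= factor * M[0, :]
x_good = np.zeros(2)
x_good[1] = M[1, 2] / M[1, 1]
x_good[0] = (M[0, 2] - M[0, 1] * x_good[1]) / M[0, 0]

print(f"Without pivoting: {x_bad}")   # x[0] is badly wrong
print(f"With pivoting:    {x_good}")  # close to [1, 1]
```

The failure is not the tiny pivot itself but the enormous multiplier it produces: subtracting 1e18 times row 1 from row 2 wipes out row 2's original entries in floating point.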
Complexity: O(n³) for the elimination, O(n²) for back substitution.

7. Exercises
Easy 1. Trace through Gaussian elimination by hand on [[2,1],[4,3]]x = [5,9]. Show each row operation.
Easy 2. Why is partial pivoting necessary? Construct a 2×2 example where elimination without pivoting fails numerically even though an exact solution exists.
Medium 1. Extend gaussian_elimination to handle rectangular systems (m equations, n unknowns) and return the particular solution + null space basis if infinite solutions exist.
Medium 2. Implement back_substitution(U, c) as a standalone function and verify it on 5 upper triangular systems.
Hard. Implement gauss_seidel(A, b, tol=1e-8) — an iterative solver for Ax=b. Compare convergence to direct Gaussian elimination for 50×50 diagonally dominant systems.
9. Chapter Summary & Connections
Gaussian elimination transforms Ax=b via row operations to Ux=c (upper triangular), then back-substitutes.
Partial pivoting (swap to bring the largest pivot to the diagonal) is essential for numerical stability (ch038 — Floating Point).
Elimination costs O(n³) — the same as naive matrix multiplication, and not by coincidence: each row operation is left-multiplication by an elementary matrix, so elimination IS a sequence of matrix multiplications.
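As a small sanity check (a sketch added here, not from the original text), we can build the elementary matrices for the 3×3 trace above and confirm that their product applied to A gives the same upper triangular matrix:

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [4., 7., 2.],
              [2., 1., 3.]])

# Each step "R_i <- R_i - factor * R_j" is left-multiplication by an
# elementary matrix: the identity with -factor placed at position (i, j).
def elementary(n, i, j, factor):
    E = np.eye(n)
    E[i, j] = -factor
    return E

E1 = elementary(3, 1, 0, 4.0)   # R1 <- R1 - 4*R0
E2 = elementary(3, 2, 0, 2.0)   # R2 <- R2 - 2*R0
E3 = elementary(3, 2, 1, 3.0)   # R2 <- R2 - 3*R1  (factor = -3/-1 = 3)

U = E3 @ E2 @ E1 @ A
print(f"U:\n{U}")   # the upper triangular matrix from the elimination trace
```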
Forward connections:
In ch163 (LU Decomposition), we show that Gaussian elimination is equivalent to factorizing A = LU.
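As a preview of that connection — a minimal sketch assuming no pivoting is needed for the matrix at hand — storing each elimination multiplier in a lower triangular matrix L recovers the factorization A = LU:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle-style LU factorization without pivoting (sketch only).
    Assumes all pivots are nonzero; production code pivots (A = PLU)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for col in range(n):
        for row in range(col + 1, n):
            factor = U[row, col] / U[col, col]
            L[row, col] = factor          # remember the multiplier
            U[row, col:] -= factor * U[col, col:]
    return L, U

A = np.array([[1., 2., 1.],
              [4., 7., 2.],
              [2., 1., 3.]])
L, U = lu_no_pivot(A)
print(f"L:\n{L}\nU:\n{U}")
print(f"A == L @ U: {np.allclose(A, L @ U)}")  # True
```

The forward-elimination loop is unchanged from gaussian_elimination above; the only addition is recording each factor in L, which is why the two algorithms have identical O(n³) cost.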