Deadly Boring Math
The Mostest Bestestest FOSS Math Journal on the Internet[citation needed]


Differential Equations Week 10

By Tyler Clarke in Calculus on 2025-7-21

We're getting pretty close to the end! The final exam, on the 31st, is less than two weeks away. Light at the end of the tunnel...

Euler's Method

This is by far the simplest way to numerically approximate the solution of any first-order differential equation of the form `frac {dy} {dx} = f(x, y)`. It has the added benefit that it scales to essentially arbitrary accuracy, and can be performed trivially by computers. The idea is very simple: pick a value `h` to be your step size (smaller = more accurate), and repeatedly step forward through your target range, each time adding `h` times the slope given by the differential equation. Voilà, the solution! Mathematically: given `y(x)`, `y(x + h) = y(x) + h frac {dy} {dx} (x, y)` within some margin of error. Applying this repeatedly lets you find the value of `y` at any arbitrary `x` given an initial value and a differential equation, without ever solving the differential equation!

To keep track of the process, Euler's method is usually used in a table like so:

`n`   `x_n`         `y_n`                    `frac {dy} {dx}`
`0`   `x_0`         `y_0`                    `f(x_0, y_0)`
`1`   `x_0 + h`     `y_0 + h f(x_0, y_0)`    `f(x_1, y_1)`

With an added row for each step. This is tedious and not particularly interesting, so I'm not going to do an example.
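That said, the same bookkeeping is easy to hand off to a computer. Here's a minimal sketch in Python (the function name and the test equation `frac {dy} {dx} = y` are my own choices for illustration, not from the course):

```python
def euler(f, x0, y0, x_end, h):
    """Approximate y(x_end) for dy/dx = f(x, y) with y(x0) = y0,
    using the forward (explicit) Euler method with step size h."""
    n = round((x_end - x0) / h)  # number of steps to cover the range
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)  # y_{n+1} = y_n + h * f(x_n, y_n)
        x += h
    return y

# dy/dx = y with y(0) = 1 has the exact solution y = e^x,
# so euler(..., x_end=1) should land near e ≈ 2.71828.
approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 0.001)
```

Shrinking `h` moves the result closer to the exact value, exactly as the "smaller = more accurate" rule promises.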

Euler's method has two variants: the explicit, or forward, method we've covered, and the implicit, or backward, method. The implicit method is conceptually and practically more difficult, but is still useful: the explicit method will usually overestimate for a curve opening down, while the implicit method will underestimate. The implicit method is also much more stable, especially for stiff equations or large step sizes `h`.

As an example, take the logistic equation `frac {dy} {dt} = y (1 - frac {y} {Y_0})`. Implicit Euler can find `y_{n + 1}` simply by solving the equation `y_{n + 1} - h y_{n + 1} (1 - frac {y_{n + 1}} {Y_0}) - y_n = 0`. This is extremely inconvenient in general. Note, though, that if we let `y_{n + 1} = q` and expand, we get `q^2 frac {h} {Y_0} + (1 - h)q - y_n = 0`, which is a solvable quadratic.
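Since that quadratic has a closed-form root, the implicit method for the logistic equation can be coded directly. A sketch under my own naming, using the quadratic `q^2 frac {h} {Y_0} + (1 - h)q - y_n = 0` and taking the positive root:

```python
import math

def implicit_euler_logistic(y0, Y0, h, steps):
    """Backward Euler for dy/dt = y(1 - y/Y0).
    Each step solves (h/Y0) q^2 + (1 - h) q - y_n = 0 for q = y_{n+1}."""
    y = y0
    a = h / Y0  # quadratic coefficient on q^2
    for _ in range(steps):
        b = 1.0 - h
        # Quadratic formula; the '+' branch is the positive, physical root.
        y = (-b + math.sqrt(b * b + 4 * a * y)) / (2 * a)
    return y
```

A quick sanity check on the fixed point: setting `q = Y_0` in the quadratic gives `h Y_0 + (1 - h) Y_0 - Y_0 = 0`, so the iteration correctly settles at the carrying capacity `Y_0`, and the stability of the backward method means it gets there even with fairly coarse steps.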

Luckily, 8.2 won't be on the exam.

Improved Euler and Runge-Kutta

Euler goes further! We already know that, given `frac {dy} {dt} = f(y, t)`, `y_{n + 1} = y_n + h f(y_n, t_n)`: this is actually a specific case of `y_{n + 1} = y_n + int_{t_n}^{t_{n + 1}} f(y(t), t) dt`, using the approximation `f(y(t), t) = f(y_n, t_n)`. This integral equation can't be evaluated exactly without knowing `y(t)`, but `f(y_n, t_n)` is just one of several possible approximations, and it consistently overshoots or undershoots any curve that isn't a straight line. A better way is to assume that the average derivative over a step is close to the average of the derivatives at the start and at the end: `frac {f(y_n, t_n) + f(y_{n+1}, t_{n+1})} 2`. Plugging in and integrating yields the improved `y_{n + 1} = y_n + h frac {f(y_n, t_n) + f(y_{n + 1}, t_{n + 1})} 2`.

There's one problem: we have to know `y_{n + 1}` to find `y_{n + 1}`. This isn't actually insurmountable if the functions involved are simple enough, but it's much easier to just assume that `y_{n + 1} = y_n + h f(y_n, t_n)`. Note that this is Euler's formula! Substituting this in gives us `y_{n + 1} = y_n + h frac {f(y_n, t_n) + f(y_n + h f(y_n, t_n), t_{n + 1})} 2`. This Improved Euler formula is called Heun's method.
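The predictor-corrector structure of Heun's method is easier to see in code. A minimal sketch (my own function and variable names; the argument order `f(y, t)` matches the convention above):

```python
def heun(f, t0, y0, t_end, h):
    """Improved Euler (Heun's method) for dy/dt = f(y, t):
    predict y_{n+1} with forward Euler, then correct with the
    average of the slopes at both ends of the step."""
    n = round((t_end - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        k1 = f(y, t)               # slope at the start of the step
        y_pred = y + h * k1        # Euler predictor for y_{n+1}
        k2 = f(y_pred, t + h)      # slope at the predicted endpoint
        y += h * (k1 + k2) / 2     # average the two slopes
        t += h
    return y
```

On `frac {dy} {dt} = y` with `y(0) = 1`, Heun's method with a comparatively coarse `h = 0.1` already lands much closer to `e` at `t = 1` than plain Euler does with the same step, which is the whole point of the correction.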

Euler and its improved variant are considered to be part of the Runge-Kutta class of techniques for approximating the result of a differential equation. I'm not going to get into generalizing the Runge-Kutta methods here; it's interesting, but not strictly relevant.