Deadly Boring Math
By Tyler Clarke in Calculus on 2025-7-15
Hello, everyone! Welcome back to Deadly Boring Math for yet another week. The semester is rapidly drawing to a close; we've just passed week 9, and final exams are only a few weeks away. It's gonna be fun...
Convolution integrals are integrals of the form `int_0^t f(t - tau)g(tau) d tau`. They are a common result of Variation of Parameters, and we have a really useful general fact about them: the Laplace transform of a convolution is the product of the Laplace transforms, `L(int_0^t f(t - tau) g(tau) d tau) = L(f(t)) L(g(t))`. This is obviously very powerful. For instance, when solving `y'' + y = g(t), y(0) = 0, y'(0) = 0`, we end up with `y = int_0^t sin(t - tau) g(tau) d tau`: evaluating this directly would be unpleasant, but we can just take Laplace transforms, getting `Y(s) = frac 1 { 1 + s^2 } G(s)`, where `G(s) = L(g(t))`.
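Here's a minimal SymPy sketch of that example, using the forcing `g(t) = t` (my choice, purely for illustration): the direct convolution integral and the inverse transform of `frac 1 { 1 + s^2 } G(s)` should agree.

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

# Direct route: y(t) = int_0^t sin(t - tau) g(tau) dtau, with g(t) = t
y_direct = sp.integrate(sp.sin(t - tau) * tau, (tau, 0, t))

# Laplace route: Y(s) = 1/(s^2 + 1) * G(s), with G(s) = L{t} = 1/s^2
Y = 1 / (s**2 + 1) * 1 / s**2
y_laplace = sp.inverse_laplace_transform(Y, s, t)

print(sp.simplify(y_direct))   # t - sin(t)
print(sp.simplify(y_laplace))  # t - sin(t), matching the direct computation
```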
These are often used to solve equations in the form `ay'' + by' + cy = g(t)`. In these cases, we take the Laplace transform to get `(as^2 + bs + c)Y(s) - (as + b) y(0) - a y'(0) = G(s)`, and let `H(s) = frac 1 {as^2 + bs + c}`: this allows us to rewrite as `Y(s) = H(s) ((as + b) y(0) + a y'(0)) + H(s) G(s)`. Taking the inverse Laplace transform of this is not necessarily simple: the first term is just a polynomial in `s` times `H(s)`, so we can partial fraction decompose there, but the second term `H(s) G(s)` is going to be much harder. Fortunately, we can rewrite it as a convolution integral! Letting `h(t) = L^{-1}(H(s))`, we get `y(t) = L^{-1}(H(s) ((as + b) y(0) + a y'(0))) + int_0^t h(t - tau) g(tau) d tau`. Finding `h(t - tau)` is easy, and we already have `g(t)`.
Note that the first term is the complementary solution, or free response, and the second term is the forced response.
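To make the free/forced split concrete, here's a sketch with the made-up example `y'' + 3y' + 2y = 1`, `y(0) = 1`, `y'(0) = 0` (chosen because the roots `-1` and `-2` are clean), cross-checked against SymPy's own ODE solver.

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)
a, b, c = 1, 3, 2      # y'' + 3y' + 2y = g(t)
y0, yp0 = 1, 0         # y(0) = 1, y'(0) = 0

H = 1 / (a * s**2 + b * s + c)
h = sp.inverse_laplace_transform(H, s, t)  # h(t) = exp(-t) - exp(-2t)

# Free response: inverse transform of H(s) * ((a s + b) y(0) + a y'(0))
free = sp.inverse_laplace_transform(H * ((a * s + b) * y0 + a * yp0), s, t)

# Forced response: convolution of h with g, here g(tau) = 1
forced = sp.integrate(h.subs(t, t - tau), (tau, 0, t))

total = sp.simplify(free + forced)

# Cross-check against dsolve with the same initial conditions
f = sp.Function('y')
sol = sp.dsolve(f(t).diff(t, 2) + b * f(t).diff(t) + c * f(t) - 1, f(t),
                ics={f(0): y0, f(t).diff(t).subs(t, 0): yp0})
print(sp.simplify(total - sol.rhs))  # 0: the two solutions agree
```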
There are quite a few situations where the forced response is much more significant than the free response. We analyze these situations with the transfer function: the ratio of the Laplace transform of the output to the Laplace transform of the input (taking zero initial conditions), which is conveniently always equal to `H(s)`. `H(s)` also has the property that its inverse Laplace transform `h(t)` is the impulse response: the solution in the situation where `g(t) = delta(t)` and the initial conditions are zero.
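As a sanity check on that claim, here's the impulse response of the same made-up system `(a, b, c) = (1, 3, 2)` computed two ways: symbolically as `L^{-1}(H(s))`, and numerically with SciPy's impulse-response routine for the transfer function `frac 1 {s^2 + 3s + 2}`.

```python
import numpy as np
import sympy as sp
from scipy import signal

t, s = sp.symbols('t s', positive=True)

# Symbolic impulse response: h(t) = L^{-1}{1/(s^2 + 3s + 2)}
h_sym = sp.inverse_laplace_transform(1 / (s**2 + 3 * s + 2), s, t)
print(h_sym)  # exp(-t) - exp(-2*t)

# Numerical impulse response of the same transfer function
sys = signal.TransferFunction([1], [1, 3, 2])
ts, h_num = signal.impulse(sys, T=np.linspace(0.01, 5.0, 50))
h_exact = np.exp(-ts) - np.exp(-2 * ts)
print(np.max(np.abs(h_num - h_exact)))  # tiny: the two agree
```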
We've already covered autonomous equations: a system is just two (or more) related autonomous equations. The critical idea is that they do not depend explicitly on time: the differential equations themselves are timeless; time only enters once we solve them.
An autonomous system in the form `vec x' = f(vec x)` has critical points wherever `vec x' = 0`. These critical points are classified in exactly the same ways we already know: they are stable, unstable, or asymptotically stable. A more rigorous definition than "stays close" or "goes away" is this: a critical point `vec x_0` is stable if, for every `epsilon > 0`, solutions that start close enough to `vec x_0` satisfy `|| vec x(t) - vec x_0 || < epsilon` for all `t`. If not, it's unstable. If, on top of that, `|| vec x(t) - vec x_0 ||` goes to 0 as `t -> oo`, it's asymptotically stable.
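Finding critical points is just root-finding. Here's a quick SymPy sketch with a damped pendulum, `x' = y, y' = -sin(x) - y` (my example, not from the course material):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([y, -sp.sin(x) - y])  # x' = y, y' = -sin(x) - y

# Critical points: solve f(x, y) = 0
print(sp.solve([f[0], f[1]], [x, y]))  # [(0, 0), (pi, 0)], plus 2*pi shifts in x
```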
Some nonlinear autonomous systems behave sorta linear near the critical points. Specifically, this is true for an autonomous system in the form `vec x' = A vec x + g(vec x)` (with the critical point translated to the origin) where `g` is small relative to `vec x` close to the critical point; that is, `lim_{vec x -> vec 0} frac { || g(vec x) || } { || vec x || } = 0`. Such systems are called almost linear, and the condition is usually pretty easy to check.
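For the damped pendulum from the previous sketch, at the origin we can write `f(vec x) = A vec x + g(vec x)` with `A = [[0, 1], [-1, -1]]` and `g(x, y) = (0, x - sin(x))`. Writing `|| vec x || = r` in polar coordinates, the almost-linear condition is that `|| g || // r -> 0` as `r -> 0` regardless of angle; here's a sketch of that check.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)

# ||g|| = |x - sin(x)|, since the first component of g is zero
ratio = (x - sp.sin(x)) / r
print(sp.limit(ratio, r, 0))  # 0 for any theta: the system is almost linear
```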
These almost linear systems can, unsurprisingly, be linearized! It's really as simple as dropping the `g(vec x)` term from `vec x' = A vec x + g(vec x)` to get the linear `vec x' = A vec x`. This works because `g(vec x)` is small compared to `vec x` near the critical point, and so can be treated as negligible. In practice, `A` is the Jacobian of `f` evaluated at the critical point.
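Finishing off the pendulum example: evaluating the Jacobian at each critical point and reading off its eigenvalues classifies the point, in the usual way for linear systems.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([y, -sp.sin(x) - y])
J = f.jacobian([x, y])  # [[0, 1], [-cos(x), -1]]

for point in [(0, 0), (sp.pi, 0)]:
    A = J.subs({x: point[0], y: point[1]})
    print(point, A.eigenvals())
# (0, 0):  -1/2 +/- sqrt(3)/2 * I -> stable spiral (asymptotically stable)
# (pi, 0): (-1 +/- sqrt(5))/2    -> saddle (unstable)
```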