
Differential Equations Exam 2 Review

By Tyler Clarke in Calculus on 2025-7-4

Our second and final midterm exam is in less than a week, and it's gonna be a big one! The light at the end of the tunnel, it do approacheth. After this, we've only one quiz and then the final!

The exam covers everything from 4.1 to 5.5 inclusive - a total of 12 sections, each with a corresponding worksheet. I'm going to go through a few problems from each worksheet, and also some of the sample assessments: per Monday's lecture, the material on the exam will be pretty similar.

WS4.1.1: Classification

This one is a pretty simple classification task. We're given six differential equations, and asked to determine if they're linear and if they're homogeneous. Remember that "linear" just means it's in the form `a(t)y'' + b(t)y' + c(t)y = d(t)`, extending into arbitrarily high derivatives of `y`, and "homogeneous" simply means that `d(t) = 0`.

  1. `y'' - 2y' - 2y = 0`: The form checks out, so it's linear; `d(t) = 0`, so it's homogeneous.
  2. `2y'' + 2y' + 3y = 0`: Again, this is in the right form and `d(t) = 0`, so it is linear and homogeneous.
  3. `y''' - y'' = 6`: This is linear in the form `a(t)y''' + b(t)y'' + c(t)y' + d(t)y = e(t)` (`c(t) = d(t) = 0`), but `e(t) != 0`, so it's linear nonhomogeneous.
  4. `y'' - 2y' + 2y = e^t tan(t)`: Linear, nonhomogeneous.
  5. `y'y'' = 4x`: This one actually isn't linear! Multiplying `y^(n)` terms together makes it nonlinear by default.
  6. `2y'' = 3y^2`: You might think this is linear homogeneous, because it can be rewritten as `2y'' - 3y^2 = 0`, but in fact the `y^2` term makes this nonlinear.

WS4.2.1: Intervals of Existence

There are a couple problems under 4.2.1, and they're all worth doing, but I'm only going to write down the first one here. We're given `t(t - 4)y'' + 3ty' + 4y = 2` where `y(3) = 0, y'(3) = -1`, and asked to find the interval on which the IVP is guaranteed to have a solution.

To find the interval, we need an equation in the form `y'' + a(t)y' + b(t)y = c(t)`: to get this, we divide by `t(t - 4)`, giving `y'' + frac 3 {t - 4} y' + frac 4 {t(t - 4)} y = frac 2 {t(t - 4)}`. So `a(t) = frac 3 {t - 4}`, `b(t) = frac 4 {t(t - 4)}`, and `c(t) = frac 2 {t(t - 4)}`. The interval of existence is the largest interval containing the initial `t` value on which `a(t)`, `b(t)`, and `c(t)` are all continuous. `a(t)` is continuous where `t - 4 != 0`: `(-oo, 4) cup (4, oo)`. `b(t)` and `c(t)` are continuous where `t(t - 4) != 0`: `(-oo, 0) cup (0, 4) cup (4, oo)`. The intersection of these is `(-oo, 0) cup (0, 4) cup (4, oo)`. Because the initial `t` value is `t = 3`, we're in the interval `(0, 4)`.

WS4.2.2: A Quick Verification

This one is really just verification: we're handed a candidate solution and asked to pin down its constants using the initial conditions (not to be confused with the method of undetermined coefficients, which shows up later in 4.5). We're given `y = c_1 e^x + c_2 e^{-x}`, `y'' - y = 0`, `y(0) = 0`, and `y'(0) = 1`, and asked to find `c_1` and `c_2`.

The first thing to do is take the derivatives. This is pretty easy: `y' = c_1 e^x - c_2 e^{-x}`, and `y'' = c_1 e^x + c_2 e^{-x}`. Now we substitute! `y'' - y = 0` works out to `c_1 e^x + c_2 e^{-x} - c_1 e^x - c_2 e^{-x} = 0`, which is obviously true for any `c_1` and `c_2`. `y(0) = 0` works out to `c_1 + c_2 = 0`, so we know `c_1 = -c_2`. And finally, `y'(0) = 1` works out to `c_1 - c_2 = 1`, which tells us that `c_1 = frac 1 2, c_2 = - frac 1 2`. Substituting this back gives us `y = frac 1 2 e^x - frac 1 2 e^{-x}`. Easy!
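This kind of verification is easy to mechanize. Here's a quick sympy sketch (my own addition, assuming sympy is installed) confirming that `y = frac 1 2 e^x - frac 1 2 e^{-x}` satisfies both the ODE and the initial conditions:

```python
import sympy as sp

x = sp.symbols('x')
# The solution found above: y = (1/2)e^x - (1/2)e^{-x}
y = sp.exp(x)/2 - sp.exp(-x)/2

residual = sp.simplify(y.diff(x, 2) - y)  # y'' - y, should be 0
ic1 = y.subs(x, 0)                        # y(0), should be 0
ic2 = y.diff(x).subs(x, 0)                # y'(0), should be 1
print(residual, ic1, ic2)  # prints: 0 0 1
```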

WS4.2.3: Linear Independence (Brief)

This is a really easy one. We're asked if `y_1 = x, y_2 = x^2, y_3 = 4x - 3x^2` is a linearly independent set of solutions. It clearly isn't; `y_3 = 4y_1 - 3y_2`.
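When the dependence isn't obvious, the Wronskian makes this mechanical: for a linearly dependent set it vanishes identically. A sympy sketch (my own addition; assumes sympy is installed):

```python
import sympy as sp

x = sp.symbols('x')
funcs = [x, x**2, 4*x - 3*x**2]

# Wronskian: row n holds the n-th derivative of each function
W = sp.Matrix([[f.diff(x, n) for f in funcs] for n in range(3)]).det()
print(sp.simplify(W))  # prints: 0, confirming linear dependence
```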

WS4.3.3: Some Solving

This is no big trick to anyone who's been paying attention. We're given `4y'' + 4y' + 17y = 0` where `y(0) = -1` and `y'(0) = 2`, and asked to solve the IVP.

Remember that we can represent this as `4r^2 + 4r + 17 = 0`, find the roots of that, and then substitute them into one of a few nice general forms. The roots are `-frac 1 2 +- 2i`, so we're using the complex-root form (derived from Euler's formula): `y = e^{- frac 1 2 t}(c_1 cos(2t) + c_2 sin(2t))`. Simple enough.

Now we just need to solve for `c_1` and `c_2`. Unfortunately, this means taking the derivative: `y' = e^{- frac 1 2 t}(c_1 (-2 sin(2t) - frac 1 2 cos(2t)) + c_2 (2 cos(2t) - frac 1 2 sin(2t)))`. Substituting the initial conditions gives us `-1 = c_1` and `2 = - frac 1 2 c_1 + 2 c_2`. Solve to get `frac 3 4 = c_2`. Thus, our solution is `y = e^{- frac 1 2 t}(- cos(2t) + frac 3 4 sin(2t))`.
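Complex-root problems attract sign errors, so it's worth a mechanical check that this answer satisfies the IVP (a sympy sketch of my own, assuming sympy is installed):

```python
import sympy as sp

t = sp.symbols('t')
# Claimed solution of 4y'' + 4y' + 17y = 0, y(0) = -1, y'(0) = 2
y = sp.exp(-t/2) * (-sp.cos(2*t) + sp.Rational(3, 4)*sp.sin(2*t))

residual = sp.simplify(4*y.diff(t, 2) + 4*y.diff(t) + 17*y)
ic1 = sp.simplify(y.subs(t, 0))
ic2 = sp.simplify(y.diff(t).subs(t, 0))
print(residual, ic1, ic2)  # prints: 0 -1 2
```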

WS4.3.5: A Tall One

We're asked to solve `y'''' + 2y'' + y = 0`. As it turns out, the polynomial trick actually carries over nicely here: we have the auxiliary polynomial `r^4 + 2r^2 + 1 = 0`. Finding the roots for this is actually pretty easy: we factor down to `(r^2 + 1)^2 = 0`, so `r = +- i`. Note that this is actually a repeated roots case! Because the highest power is 4 rather than 2, there are actually 4 roots, counted with multiplicity: `i, i, -i, -i`. Weird, but true.

Applying the normal rule for imaginary along with the normal rule for repeated, we get `c_1 cos(t) + c_2 sin(t) + c_3 t cos(t) + c_4 t sin(t)`.
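The surprising members of that basis are `t cos(t)` and `t sin(t)`; a sympy sketch (my own addition, assuming sympy is installed) confirms that all four basis functions solve the equation:

```python
import sympy as sp

t = sp.symbols('t')
basis = [sp.cos(t), sp.sin(t), t*sp.cos(t), t*sp.sin(t)]

# Each basis function should satisfy y'''' + 2y'' + y = 0 identically
residuals = [sp.simplify(y.diff(t, 4) + 2*y.diff(t, 2) + y) for y in basis]
print(residuals)  # prints: [0, 0, 0, 0]
```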

WS4.3.6: Eulering

Last one from WS4.3, I promise. We're given `t^2 y'' - 2ty' - 4y = 0`. This is obviously a Cauchy-Euler equation! My favorite way to solve these is the simplest one: use the formula from Paul's Notes. We find the roots by solving `r(r - 1) - 2r - 4 = 0`: this simplifies to `r^2 - 3r - 4 = (r + 1)(r - 4) = 0`, so `r = -1, 4`. These being real and distinct, we can plug them into the nice form `y = c_1 t^{r_1} + c_2 t^{r_2}`: `y = c_1 t^{-1} + c_2 t^4`.
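A quick sympy sketch (my own addition; assumes sympy is installed) verifying both Cauchy-Euler basis solutions:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
# Both t^{-1} and t^4 should satisfy t^2 y'' - 2ty' - 4y = 0
residuals = [sp.simplify(t**2 * y.diff(t, 2) - 2*t*y.diff(t) - 4*y)
             for y in (t**-1, t**4)]
print(residuals)  # prints: [0, 0]
```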

WS4.4

TODO: something from worksheet 4.4 (I hate this worksheet)

If you're reading this, I didn't do worksheet 4.4

WS4.5.1: Undetermined Coefficients

All of 4.5 are fairly easy UC problems. I'm just going to do the first one: `y'' - 2y' - 3y = 3e^{2t}` where `y(0) = 0, y'(0) = 1`. Because the right side is in the form `Ae^{2t}`, we'll use that as our guess: `y = Ae^{2t}`, `y' = 2Ae^{2t}`, `y'' = 4Ae^{2t}`. Inserting these into the equation gives us `4Ae^{2t} - 4Ae^{2t} - 3Ae^{2t} = 3e^{2t}`. This pretty obviously simplifies down to `A = -1`. Hence, our particular solution is `y_p = -e^{2t}`. We still need the homogeneous solution - the solution to the homogeneous part, `y'' - 2y' - 3y = 0`. That's pretty easy: it's `y_h = c_1 e^{-t} + c_2 e^{3t}`. The general solution is the sum: `y = c_1 e^{-t} + c_2 e^{3t} - e^{2t}`.

Solving the IVP requires us to find `y'`. This is, fortunately, easy: `y' = -c_1 e^{-t} + 3 c_2 e^{3t} - 2e^{2t}`. Substituting the initial conditions gives us the system `0 = c_1 + c_2 - 1, 1 = -c_1 + 3 c_2 - 2`, i.e. `c_1 + c_2 = 1` and `-c_1 + 3 c_2 = 3`. Adding the two equations gives `4 c_2 = 4`, so `c_2 = 1` and `c_1 = 0`. Hence, we have the solution `y = e^{3t} - e^{2t}`. It's worth substituting back to check: `y(0) = 1 - 1 = 0` and `y'(0) = 3 - 2 = 1`, as required.
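When an answer disagrees with the worksheet key, substituting back into the IVP settles it: `y = e^{3t} - e^{2t}` is the combination that satisfies both initial conditions. A quick sympy sketch (my own addition, assuming sympy is installed):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.exp(3*t) - sp.exp(2*t)  # claimed solution

# y'' - 2y' - 3y should equal the forcing 3e^{2t} exactly
residual = sp.simplify(y.diff(t, 2) - 2*y.diff(t) - 3*y - 3*sp.exp(2*t))
ic1 = y.subs(t, 0)
ic2 = y.diff(t).subs(t, 0)
print(residual, ic1, ic2)  # prints: 0 0 1
```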

WS4.6.2

We're given a spring-mass system with spring constant `3 frac N m` (hooray for metric units) and a mass of `2 kg`. It's suspended in a viscous fluid that resists motion with force `-v` (where `v` is the velocity of the mass). The system is being driven downwards by a force `12 cos(3t) - 8 sin(3t)`. We need to find the steady-state response - the part of the solution that persists, neither decaying nor growing, as `t -> oo`.

Setting this up as a second-order linear differential equation is pretty simple. Operating in terms of `p`, the offset from rest which increases down, we have `p'' = 6 cos(3t) - 4 sin(3t) - frac 1 2 p' - frac 3 2 p` (note that the force is divided by mass: `p''` is acceleration, not force). This rewrites nicely as `p'' + frac 1 2 p' + frac 3 2 p = 6 cos(3t) - 4 sin(3t)`. We need to first find the particular solution: this is usually going to be at least part of the steady-state response. The homogeneous solution might also contribute, but if each of its terms carries a factor `e^{At}` with `Re(A) < 0`, it decays to zero and we can ignore it.

Because the right side is in the form `A cos(3t) + B sin(3t)`, we'll use that for undetermined coefficients. `p_p = A cos(3t) + B sin(3t)`, `p_p' = -3A sin(3t) + 3B cos(3t)`, `p_p'' = -9A cos(3t) - 9B sin(3t)`. Substituting these into the left side and grouping gives us `(-frac {15} 2 A + frac 3 2 B) cos(3t) + (-frac 3 2 A - frac {15} 2 B) sin(3t)`. Yikes. Matching against `6 cos(3t) - 4 sin(3t)` gives the system `-frac {15} 2 A + frac 3 2 B = 6` and `-frac 3 2 A - frac {15} 2 B = -4`. Solving (Gaussian elimination again) yields `A = -frac 2 3, B = frac 2 3`. Hence: `p_p = -frac 2 3 cos(3t) + frac 2 3 sin(3t)`.

We don't have to go past the real part of the homogeneous roots: it's `-frac 1 4`, meaning every homogeneous term carries a factor of `e^{-frac 1 4 t}` and vanishes at infinity. Our steady-state response is just the particular solution `p = -frac 2 3 cos(3t) + frac 2 3 sin(3t)`.
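Forced-oscillation algebra is easy to fumble, so here's a sympy sketch (my own addition; assumes sympy is installed) confirming that `p = -frac 2 3 cos(3t) + frac 2 3 sin(3t)` balances the forcing:

```python
import sympy as sp

t = sp.symbols('t')
p = -sp.Rational(2, 3)*sp.cos(3*t) + sp.Rational(2, 3)*sp.sin(3*t)

# p'' + (1/2)p' + (3/2)p minus the forcing 6cos(3t) - 4sin(3t): should be 0
residual = sp.simplify(p.diff(t, 2) + p.diff(t)/2 + sp.Rational(3, 2)*p
                       - (6*sp.cos(3*t) - 4*sp.sin(3*t)))
print(residual)  # prints: 0
```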

WS4.7.1.a: Variation of Parameters: The Force Awakens

Variation of parameters! Variation of parameters! Variation of parameters! Multiple cheers for variation of parameters! This problem is a pretty classic variation of parameters case: we're given `x' = [[-1, -1], [0, 1]] x + [18, 3t]`, and we need to solve for `x`.

First, let's find the general solution to the homogeneous case `x' = [[-1, -1], [0, 1]] x`. This is done pretty easily with the eigenvector/eigenvalue method: `x_g = c_1 [e^{-t}, 0] + c_2 [-e^{t}, 2e^{t}]`. This means the fundamental matrix is `[[e^{-t}, -e^t], [0, 2e^t]]`. The Wronskian of that fundamental set is important: calculating it is, fortunately, not very hard; `W = 2`.

The particular solution for a situation like this is the fundamental matrix times the integral of the inverse of the fundamental matrix times `f`, where `f` is the nonhomogeneous term (`[18, 3t]` in our case): `x_p = A int A^{-1} f dt`. Computing `A^{-1}` directly is pretty hard, but (linear) algebra saves the day: write `x_p = A u`, where `u = int A^{-1} f dt`. Differentiating the definition of `u` gives `u' = A^{-1} f`, and multiplying both sides on the left by `A` gives `A u' = f`. This is easy to solve with Gaussian elimination! Once we have `u'`, we integrate to get `u`, and then `x_p = A u`.

Let's do it. The augmented matrix we're solving works out to `[[e^{-t}, -e^t | 18], [0, 2e^t | 3t]]`, so our result is `u' = [18e^t + frac 3 2 t e^t, frac 3 2 t e^{-t}]`. Integrating yields `u = [frac 33 2 e^t + frac 3 2 t e^t, - frac 3 2 e^{-t} - frac 3 2 t e^{-t}]` (assuming `C = 0`, which works). To get the final result, we need to multiply this by the fundamental matrix: `x_p = [18 + 3t, -3 - 3t]`.

And we're done! Adding them all together yields `x = x_g + x_p = c_1 [e^{-t}, 0] + c_2 [-e^{t}, 2e^{t}] + [18 + 3t, -3 - 3t]`. Easy! Ish!
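Matrix variation of parameters has a lot of places to drop a sign. A sympy sketch (my own addition, assuming sympy is installed) verifying that the particular solution satisfies `x' = Ax + f`:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[-1, -1], [0, 1]])
f = sp.Matrix([18, 3*t])
x_p = sp.Matrix([18 + 3*t, -3 - 3*t])  # claimed particular solution

# x_p' - (A x_p + f) should be the zero vector
residual = sp.simplify(x_p.diff(t) - (A*x_p + f))
print(residual.T)  # prints: Matrix([[0, 0]])
```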

WS4.7.2.b: Variation of Parameters: Generalizin', always generalizin'

Hooray for even more variation of parameters! This technique is one hell of a drug. This time, we're given a second-order linear differential equation `y'' + 2y' + y = 3e^{-t}`. We could solve this by converting it into a system of two first-order ODEs and using the method above, but there's a quicker and easier substitution: given fundamental solutions `y_1` and `y_2`, and the Wronskian of the fundamental solution set `W`, and the nonhomogeneous term `g`, the particular solution is `y_p = y_2 int frac {y_1 g} W dt - y_1 int frac {y_2 g} W dt`.

We find the fundamental solution set pretty easily: the normal tricks give us `y_g = c_1 e^{-t} + c_2 t e^{-t}` (repeated root `r = -1`), so `y_1 = e^{-t}` and `y_2 = t e^{-t}`, with derivatives `y_1' = -e^{-t}` and `y_2' = e^{-t} - t e^{-t}`. The Wronskian is `W = y_1 y_2' - y_2 y_1' = e^{-2t}`. `g(t) = 3 e^{-t}`. Substituting into the general form gives us `y_p = t e^{-t} int frac {3e^{-2t}} {e^{-2t}} dt - e^{-t} int frac {3 t e^{-2t}} {e^{-2t}} dt`. Simplify to get `y_p = t e^{-t} int 3 dt - e^{-t} int 3 t dt`, and integrate to `y_p = 3 t^2 e^{-t} - frac 3 2 t^2 e^{-t} = frac 3 2 t^2 e^{-t}`.

Adding it all together, we get `y = c_1 e^{-t} + c_2 t e^{-t} + frac 3 2 t^2 e^{-t}`. Pretty straightforward.
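Same idea here (my own sympy sketch, assuming sympy is installed): confirm the Wronskian and that `y_p` reproduces the forcing:

```python
import sympy as sp

t = sp.symbols('t')
y1, y2 = sp.exp(-t), t*sp.exp(-t)

# Wronskian of the fundamental set: should be e^{-2t}
W = sp.simplify(y1*y2.diff(t) - y2*y1.diff(t))

# y_p'' + 2y_p' + y_p should equal the forcing 3e^{-t}
y_p = sp.Rational(3, 2) * t**2 * sp.exp(-t)
residual = sp.simplify(y_p.diff(t, 2) + 2*y_p.diff(t) + y_p - 3*sp.exp(-t))
print(W, residual)  # prints: exp(-2*t) 0
```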

WS5.1.1: A Little Laplace (Briefly)

Ahhh, Laplace, my favorite transform... In this problem, we're given the piecewise function `f(t)` defined with `f(t) = 0, 0 <= t <= 1; f(t) = 1, 1 < t <= 2; f(t) = 0, t > 2`, and asked to find the Laplace transform. This is pretty easy: for a piecewise function, the Laplace transform breaks up into several simple integrals, like so: `L(f(t)) = int_0^1 e^{-st} * 0 dt + int_1^2 e^{-st} * 1 dt + int_2^{oo} e^{-st} * 0 dt`. The first and third terms both evaluate to 0, so the only interesting part is the middle: `L(f(t)) = int_1^2 e^{-st} dt`. This is very easy to evaluate: we end up with `1/s (e^{-s} - e^{-2s})`.
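Since the transform reduces to one finite integral, sympy can evaluate it directly (my own sketch, assuming sympy is installed):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = sp.integrate(sp.exp(-s*t), (t, 1, 2))  # the only nonzero piece

target = (sp.exp(-s) - sp.exp(-2*s)) / s
print(sp.simplify(F - target))  # prints: 0
```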

WS5.2.1: A Little More Laplace (Briefly)

Even more Laplace! This time, `f(t) = e^{-2t} sin(3t)`. This is pretty easy if you remember the shift rule: `L(e^{ct} f(t)) = F(s - c)`. The identity `L(sin(at)) = frac a {s^2 + a^2}` works nicely for the `sin` part: `L(sin(3t)) = frac 3 {s^2 + 9}`. Here `c = -2`, so we shift `s -> s + 2` to get `F(s) = frac 3 {(s + 2)^2 + 9}`.
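sympy's `laplace_transform` knows the shift rule too, so we can cross-check (my own sketch, assuming sympy is installed):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = sp.laplace_transform(sp.exp(-2*t)*sp.sin(3*t), t, s, noconds=True)

target = 3 / ((s + 2)**2 + 9)
print(sp.simplify(F - target))  # prints: 0
```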

WS5.2.4: Transforming Equations with Laplace

A pretty important trick in our arsenal, now that we've got the Laplace transform, is using it to turn differential equations into algebra problems. This question asks us to do exactly that: given a differential equation, we use the Laplace transform to turn it into an algebraic problem and solve for `L(y)` - we don't have to do the tedious final step, yet, which would be inverting it to get `y`.

The equation is `9y'' + 12y' + 4y = 0, y(0) = 2, y'(0) = -1`. Taking the Laplace transform is as easy as applying it individually to each term: `9 L(y'') + 12 L(y') + 4 L(y) = L(0)`. `L(f')` for any `f` is always `s L(f) - f(0)`, so we have `L(y') = s L(y) - y(0)` and `L(y'') = s L(y') - y'(0)`. We can expand the `L(y')` term to get `L(y'') = s (s L(y) - y(0)) - y'(0) = s^2 L(y) - s y(0) - y'(0)`.

Substituting these knowns into the equation gives us `9 s^2 L(y) - 9 s y(0) - 9 y'(0) + 12 s L(y) - 12 y(0) + 4 L(y) = 0`. We want something in the form `L(y) = f(s)`, so we group a bit: `L(y)(9 s^2 + 12 s + 4) = 9 s y(0) + 9 y'(0) + 12 y(0)`. Substituting the IVs gives us `L(y)(9 s^2 + 12 s + 4) = 18 s + 15`, and rearranging yields `L(y) = frac {18s + 15} {9 s^2 + 12 s + 4}`.
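This bookkeeping can be scripted: treat `L(y)` as an ordinary unknown symbol and solve (my own sympy sketch, assuming sympy is installed):

```python
import sympy as sp

s, Y = sp.symbols('s Y')  # Y stands in for L(y)
y0, yp0 = 2, -1           # initial conditions y(0), y'(0)

# 9 L(y'') + 12 L(y') + 4 L(y) = 0 with the derivative rules expanded
eq = sp.Eq(9*(s**2*Y - s*y0 - yp0) + 12*(s*Y - y0) + 4*Y, 0)
Ysol = sp.solve(eq, Y)[0]

target = (18*s + 15) / (9*s**2 + 12*s + 4)
print(sp.simplify(Ysol - target))  # prints: 0
```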

WS5.3.1: Everybody was Laplace Invertin', doo doo doo doo doo...

As it turns out, it's somewhere around the 200th line of mathjax-augmented HTML that my naming conventions start getting really stupid.

We're given a pretty simple problem: we need to find the inverse Laplace transform of `frac 2 {s^2 + 3s - 4}`. Why do I call this simple? Because finding inverse Laplace transforms organically is really hard - sufficiently hard, in fact, that the best way to do it is just to pattern match against known transforms, which is really pretty easy. In this case, we need to first factorize the denominator, then apply partial fraction decomposition; we'll end up with a linear combination of a few well-known forms that can be inverted easily.

Factorizing the denominator is simple: `s^2 + 3s - 4 = (s + 4)(s - 1)`. This is an easy case of PFD: our substitution will be `frac A {s + 4} + frac B {s - 1}`. Doing a bit of algebra gives us `frac 2 {(s + 4)(s - 1)} = frac {As - A + Bs + 4B} {(s + 4)(s - 1)}`, which groups and simplifies to `(A + B)s + (4B - A) = 2`. Hence, `A + B = 0` and `4B - A = 2`, so `A = - frac 2 5, B = frac 2 5`.

Now we're doing the inverse Laplace of a much nicer function: `L(f) = - frac 2 5 frac 1 {s + 4} + frac 2 5 frac 1 {s - 1}`. This is nice because `L(e^{at}) = frac 1 {s - a}`: if we apply that in reverse, we can pretty easily find `f = - frac 2 5 e^{-4t} + frac 2 5 e^{t}`.
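sympy can both do the partial fraction decomposition (`apart`) and confirm the inversion by transforming the answer forward (my own sketch, assuming sympy is installed):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = 2 / (s**2 + 3*s - 4)

# PFD: should match A = -2/5, B = 2/5
pfd = sp.apart(F, s)

# Forward transform of the claimed inverse recovers F
f = -sp.Rational(2, 5)*sp.exp(-4*t) + sp.Rational(2, 5)*sp.exp(t)
G = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(G - F))  # prints: 0
```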

WS5.4.1: Scooby Dooby Laplace!

Because we're (finally!) solving a problem. Get it?

We're given `y'' - 4y' - 12y = 0, y(0) = 8, y'(0) = 0`. Using the formulae from WS5.2.4, we take the Laplace transform of both sides: `s^2 L(y) - s y(0) - y'(0) - 4s L(y) + 4 y(0) - 12L(y) = 0`. Group for `L(y)` and substitute the IVs to get `L(y)(s^2 - 4s - 12) = 8s - 32`, and flip to `L(y) = frac {8s - 32} {s^2 - 4s - 12}`.

Now we need to solve. Step one is, as always, factorize and simplify: `L(y) = 8 frac {s - 4} {(s + 2)(s - 6)}`. Next, we need to PFD: given the PFD substitution `8 frac {s - 4} {(s + 2)(s - 6)} = frac A {s + 2} + frac B {s - 6}`, we get `8s - 32 = As - 6A + Bs + 2B`. Grouping once more: `8s - 32 = (A + B)s + (2B - 6A)`. Thus, `A + B = 8` and `2B - 6A = -32`. These solve handily to `A = 6` and `B = 2` (yes, I did use Gaussian elimination for that). This turns our problem into `L(y) = frac 6 {s + 2} + frac 2 {s - 6}`.

We already know that `L(e^{at}) = frac 1 {s - a}`. Both of these terms are in the perfect form! A quick legerdemain, and we have `y = 6e^{-2t} + 2e^{6t}`. Yay!
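One more mechanical check (my own sympy sketch, assuming sympy is installed) that the inverted result solves the original IVP:

```python
import sympy as sp

t = sp.symbols('t')
y = 6*sp.exp(-2*t) + 2*sp.exp(6*t)  # claimed solution

# y'' - 4y' - 12y should vanish; y(0) = 8 and y'(0) = 0
residual = sp.simplify(y.diff(t, 2) - 4*y.diff(t) - 12*y)
print(residual, y.subs(t, 0), y.diff(t).subs(t, 0))  # prints: 0 8 0
```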

WS5.5.6: Periodic!

WS5.5 mostly rehashes stuff we've already done, but the problems are harder. I highly recommend going through them. This problem in particular is the only one I considered surprising enough to merit inclusion. We're given a periodic function `f(t)`, defined as `f(t) = 1, 0 <= t < 1; f(t) = -1, 1 <= t < 2` with period 2. This simply means that it repeats: `f(t + 2) = f(t)` for any given `t`.

The best way to deal with the Laplace transform of this is with the window function `f_T`, which is defined to be equal to `f` over a single period `0 <= t <= 2`, and `0` otherwise. The Laplace transform of our periodic function `f(t)` can then be found with the formula `L(f(t)) = frac L(f_T(t)) {1 - e^{-2s}}`. The Laplace transform of the window function will work out to simply `int_0^2 e^{-st} f(t) dt`. That works out to `int_0^1 e^{-st} dt - int_1^2 e^{-st} dt`. Evaluating this gives us `L(f(t)) = frac 1 s frac {-2e^{-s} + 1 + e^{-2s}} {1 - e^{-2s}}`. Unpleasant, but not too bad!
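The window-function transform is just a finite integral, so sympy can verify the numerator (my own sketch, assuming sympy is installed):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
# L(f_T): integral of e^{-st} f(t) over one period of the square wave
F_T = (sp.integrate(sp.exp(-s*t), (t, 0, 1))
       - sp.integrate(sp.exp(-s*t), (t, 1, 2)))

target = (1 - 2*sp.exp(-s) + sp.exp(-2*s)) / s
print(sp.simplify(F_T - target))  # prints: 0
```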

Final notes

This ran long. It ran really long. I didn't even include any of the promised supplemental-assessment problems and it still ran really long. The midterm is on July 9th during the normal lecture time. Don't be late! I'll be wearing the long-missing balloon hat (marching around in 90°F weather wearing that menace got unreasonable, fast), and if this helped you, you should wear one too! It would be pretty funny.

Sayonara, and good luck!