Differential Equations Exam 1 Review

By Tyler Clarke in Calculus on 2025-6-6

This post is part of a series. You can read the next post here.

Hello once more, Internet! Our little Diffy Q adventuring party has passed the first two random encounters, and Rivendell is close in sight - but a ferocious band of orcs lies in the way; our first midterm rapidly approaches. It's going to be on June 9th in the normal lecture room and normal time; don't be late!

There's quite a bit of material (everything we've learned so far!), and I can't possibly hope to cover everything in exhaustive detail, but this review should go through all of the major points and at least give you some idea what you need to study further. Note that there won't be any material from this week; the highest section on the exam is #3.5.

Note: the sample assessments are not sufficient to study from! They contain some important omissions (2.7 especially) for which we'll have to turn to the textbook and worksheets.

This is going to be structured a bit differently from the quiz reviews: major sections will be contiguous spans of densely related content, and there will be at least two or three questions covered in each major section. I'll boldface the question sources for some semblance of readability.

2.1 -> 2.3: Simple Solutions and Modeling

Note: while some material from chapter 1 is on the exam, it's frankly too basic to merit inclusion here.

This first chunk of chapter 2 talks primarily about the basic ways to solve ODEs: the variable-separable method and the linear integrating factor method. In brief, the variable-separable method allows you to quickly solve any equation that can be rewritten in the form `f(y) dy = q(t) dt`, simply by integrating both sides. Linear integrating factors allow you to quickly solve any equation that can be rewritten in the form `y' + p(t) y = q(t)` by taking `mu = e^{int p(t) dt}` and rewriting as `frac d {dt} (mu y) = mu q(t)`. Integrating both sides with respect to `t` will get you an implicit solution with little trouble. This chunk also introduces modeling with differential equations: for instance, analyzing fluid mixing.
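
If you like checking this kind of thing with a computer, here's a minimal sympy sketch showing both forms. Sympy isn't part of the course, just a convenient checker, and the two toy equations below are my own examples, not from the worksheets.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Variable-separable example: y' = t*y, i.e. (1/y) dy = t dt
print(sp.dsolve(sp.Eq(y(t).diff(t), t * y(t)), y(t)))
# expect y(t) = C1*exp(t**2/2)

# Linear integrating-factor example: y' + 2y = 3, so mu = e^{2t}
print(sp.dsolve(sp.Eq(y(t).diff(t) + 2 * y(t), 3), y(t)))
# expect y(t) = C1*exp(-2*t) + 3/2
```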

Let's do some problems. WS2.1.3 isn't very hard, but nicely illustrates an important concept in this course: exponent separation. We're given `frac {dy} {dt} = e^{y + t}, y(0) = 0`: if you remember exponent laws, you know this can be separated to get `frac {dy} {dt} = e^y e^t`, which can then be rearranged to get `e^{-y} dy = e^t dt`. Integrating both sides gives us a relatively trivial solution `-e^{-y} = e^t + C`. We can actually find `C` because we have an initial value `y(0) = 0`: substituting in `0` for both `y` and `t` gives us the equation `-e^0 = e^0 + C`, or `-1 = 1 + C`, or `C = -2`. Hence, our final answer is `-e^{-y} = e^t - 2`.
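
Here's a quick sanity check of that answer, using the explicit form `y = -ln(2 - e^t)` obtained by solving the implicit solution for `y` (a sketch, assuming sympy is available):

```python
import sympy as sp

t = sp.symbols('t')
# explicit form of -e^{-y} = e^t - 2, solved for y
y = -sp.log(2 - sp.exp(t))

# the ODE y' = e^y * e^t (the separated form of e^{y+t}) should be satisfied...
print(sp.simplify(y.diff(t) - sp.exp(y) * sp.exp(t)))  # expect 0
# ...and so should the initial condition y(0) = 0
print(y.subs(t, 0))                                    # expect 0
```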

WS2.2.3 asks us to solve an initial-value problem: `t frac {dy} {dt} + (t + 1)y = t, y(ln 2) = 1` for `t > 0`. This equation is an LDE (Linear Differential Equation): if we divide the whole thing by `t`, we get `frac {dy} {dt} + frac {t + 1} {t} y = 1`, which is in exactly the right form with `p(t) = frac {t + 1} t` and `q(t) = 1`. The integrating factor is `mu = e^{int frac {t + 1} t dt}`, which works out to `e^{t + ln|t| + C}`. It's much more convenient to rewrite this using exponent rules, which gives us `Cte^t` (note: because `C` is an arbitrary constant, we can redefine it as `C = e^{C_1}`, which makes this prettier). We can rewrite this in the normal IF form to get `frac d {dt} (y t C e^t) = Cte^t`. Dividing both sides by `C` and integrating both sides with respect to `t` gives us `y t e^t = te^t - e^t + C`. We can do a lil algebra to get `y t = t - 1 + Ce^{-t}`. Finally, we can substitute the initial value to get `ln 2 = ln 2 - 1 + Ce^{-ln 2}`, and do some algebra to get `2 = C`. Hence: `y t = t - 1 + 2e^{-t}`.
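
To make sure the algebra holds up, here's a quick sympy check of the final answer (again just a sketch):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
# solve y*t = t - 1 + 2*e^{-t} for y
y = (t - 1 + 2 * sp.exp(-t)) / t

# the left side of t*y' + (t+1)*y = t should reduce to t
print(sp.simplify(t * y.diff(t) + (t + 1) * y))  # expect t
# initial condition y(ln 2) = 1
print(sp.simplify(y.subs(t, sp.log(2))))         # expect 1
```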

We can use these basic techniques to solve some real-world problems. For instance, WS2.3.3 gives us a relationship `frac {dp} {dt} = rp` describing the behavior of a population of bacteria, where `p` is population, `t` is time, and `r` is an arbitrary rate constant. We need to determine the rate constant knowing that `p` doubles in 12 days, and solve with an initial value `p(0) = 200` to find the population at 18 days.

Step one is, of course, to solve. Fortunately, it's variable-separable: we can rewrite as `frac 1 p dp = r dt`, and integrate to get `ln|p| = rt + C`. Raising `e` to both sides gives us `p = Ce^{rt}` - because `C` is an arbitrary constant, we can move it out safely. The first part of the problem is tricky: let's say `p_0` is our unknown starting population, `t_0` is our unknown starting time, and `p_0 = Ce^{rt_0}`. The constraint is that `2p_0 = Ce^{rt_0 + 12r}`, which separates out nicely to `2p_0 = Ce^{rt_0}e^{12r}`. Because we know `p_0 = Ce^{rt_0}`, we can divide both sides by it to get `2 = e^{12r}`, which solves to `r = frac {ln(2)} {12}`. Knowing this, we can substitute for the IVP: `200 = Ce^{frac {ln(2)} {12} * 0}`, which reduces really nicely to `C = 200`. That's the last unknown! Now we can find the population at `t=18`: `p(18) = 200e^{frac {ln(2)} {12} 18}`, `p(18) = 200 * 2^{frac {3} {2}}`, or `p(18) = 400sqrt(2)`.
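
If you want the decimal value (or don't trust my exponent juggling), the whole computation fits in a few lines of sympy:

```python
import sympy as sp

# r from the doubling condition 2 = e^{12r}
r = sp.log(2) / 12
# population at t = 18 with p(0) = 200
p18 = 200 * sp.exp(r * 18)
print(sp.simplify(p18))  # expect 400*sqrt(2)
print(float(p18))        # roughly 565.7
```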

If you want a much deeper Deadly Boring look at the concepts discussed here, check out the first few Differential Equations posts under Calculus.

Briefly: 2.4

This one kinda stands alone. The main topic is existence and uniqueness. This is simplest for an LDE: given an equation `frac {dy} {dt} + p(t)y = q(t)`, a unique solution is guaranteed on any interval containing the initial point where `p(t)` and `q(t)` are continuous. For nonlinear equations, the exact interval of existence can only be found by solving, but you can still check whether a unique solution is guaranteed to exist: for a nonlinear ODE `y' = f(t, y)`, both `f` and `frac {df} {dy}` must be continuous at (and around) the initial point. Using those continuity regions, you can also find the maximum possible bounds for the actual interval of existence: the interval of existence of your IVP is guaranteed to fall somewhere within those bounds, although you can't say exactly where without solving.

WS2.4.1 (b) asks us to find the interval of existence of `y' + tan(t) y = sin(t), y(pi) = 0`. This is clearly an LDE with `p(t) = tan(t)` and `q(t) = sin(t)`: because `sin(t)` is continuous for all `t`, we can ignore it; `tan(t)` is only defined when `cos(t)` is nonzero. The zeros of `cos(t)` closest to the initial point `t = pi` are `t = frac {pi} 2` and `t = frac {3pi} 2`, so a unique solution is guaranteed between them: hence, our interval of existence is `frac {pi} 2 < t < frac {3pi} 2`. Note that the solution might exist elsewhere, even at `frac {pi} 2` and `frac {3pi} 2`, but we can't make any guarantees about that.
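
As a tiny sanity check, we can ask sympy for the zeros of `cos(t)` that bracket the initial point:

```python
import sympy as sp

t = sp.symbols('t')
# zeros of cos(t) on [0, 2*pi]; tan(t) is undefined exactly there
print(sp.solveset(sp.cos(t), t, sp.Interval(0, 2 * sp.pi)))
# expect {pi/2, 3*pi/2}, which bracket the initial point t = pi
```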

SA17 asks us to determine the interval of existence of `frac {dy} {dt} = e^{y + t}, y(0) = 0`. This is not linear, so we won't be able to find an exact interval of existence this way. Let `f = e^{y + t}`; then `frac {df} {dy} = e^{y + t}` (note that `f` is its own derivative!). Both are continuous for every possible `t` and `y`, so the theorem can't get us any more specific than `-oo < t < oo`. We can get a better answer by variable-separating and solving: `e^{-y} dy = e^t dt`, so `-e^{-y} = e^t + C`, so `y = -ln(-e^t + C)` (no, I did not miss a sign here - because `C` is an arbitrary constant, it eats the negative). `ln(x)` is only defined for `x > 0`, so this is only defined for `C - e^t > 0`. Because this is an IVP, we can actually find a solution for `C`: `0 = -ln(C - 1)`, or `1 = C - 1`, `C = 2`. Thus, `2 - e^t > 0`, `e^t < 2`, `t < ln(2)`. Our exact interval of existence, then, is `-oo < t < ln(2)`.
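
Here's the blow-up at `t = ln(2)` confirmed with a sympy limit (a sketch, using the explicit solution from above):

```python
import sympy as sp

t = sp.symbols('t')
# explicit solution y = -ln(2 - e^t)
y = -sp.log(2 - sp.exp(t))
# the solution should blow up as t approaches ln(2) from the left
print(sp.limit(y, t, sp.log(2), dir='-'))  # expect oo
```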

2.7: Substitutions

As there is no worksheet 2.5 from which to draw questions, I've decided to skip it here. This doesn't mean it won't be on the test; make sure to learn the material! 2.6 is unfortunately not included on the test, which is a shame, because it's really cool. 2.7 is, as the title suggests, mostly concerned with substitutions: essentially, some types of equations can be made much simpler by defining some invertible function `v(x, y)` and finding `y(v, x)` to substitute into the equation. It sounds fancy and complicated, but it's really quite simple. The simplest substitutions are homogeneous substitutions, where `v = frac y x`, and Bernoulli substitutions, which are a bit more situational.

WS2.7.1 is a nice example of homogeneous substitution. Given a differential equation in the form `f(x, y) + g(x, y) frac {dy} {dx} = 0` where `f` and `g` are both homogeneous of the same degree (in brief: a function is homogeneous of degree `n` if `f(lambda x, lambda y) = lambda^n f(x, y)`), we define the substitution `v = frac y x`, and accordingly `y = vx`. We need to solve `frac {dy} {dx} = frac {x + 3y} {3x + y}`, which is homogeneous of degree 1 (if you don't believe me, do the algebra! It can be rewritten in homogeneous form pretty easily.) Substituting `vx` for `y` gives us `frac {dy} {dx} = frac {x + 3vx} {3x + vx}`. We still have a `y` hiding in `frac {dy} {dx}` - fortunately, differentiating `y = vx` with the product rule gives `frac {dy} {dx} = v + x frac {dv} {dx}`, which we can substitute: `v + x frac {dv} {dx} = frac {x + 3vx} {3x + vx}`. We can simplify to `v + x frac {dv} {dx} = frac {3v + 1} {3 + v}`. We have to perform a bit of legerdemain to turn this into something that can be solved:

  1. `x frac {dv} {dx} = frac {3v + 1} {3 + v} - v`
  2. `x frac {dv} {dx} = frac {3v + 1 - 3v - v^2} {3 + v}`
  3. `x frac {dv} {dx} = frac {1 - v^2} {3 + v}`
  4. `x frac {dv} {dx} = frac {(1 - v)(1 + v)} {3 + v}`
  5. `frac {3 + v} {(1 - v)(1 + v)} frac {dv} {dx} = frac 1 x`
  6. `frac {3 + v} {(1 - v)(1 + v)} dv = frac 1 x dx`

We variable-separated! Integrating requires some partial fraction decomposition. If you aren't familiar with PFD, you should practice it; I'm not going to cover it here. This becomes `(frac 1 {1 + v} + frac 2 {1 - v}) dv = frac 1 x dx`. We can integrate this to get `ln|1 + v| - 2ln|1 - v| = ln|x| + C`. Raising `e` to both sides and simplifying gives us `Cx = frac {1 + v} {(1 - v)^2}`. Finally, we resubstitute: `Cx = frac {1 + frac y x} {(1 - frac y x)^2}`. This is only an implicit solution, but it's Good Enough ™; I really, really don't want to find an explicit solution.
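
If your partial fraction decomposition is rusty, sympy's `apart` will happily check that step for you (sketch):

```python
import sympy as sp

v = sp.symbols('v')
# partial fraction decomposition of (3 + v) / ((1 - v)(1 + v))
print(sp.apart((3 + v) / ((1 - v) * (1 + v)), v))
# expect 1/(v + 1) - 2/(v - 1), which is the same as 1/(1 + v) + 2/(1 - v)
```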

There is another type of substitution we'll encounter: the Bernoulli substitution. A Bernoulli differential equation is any differential equation in the form `frac {dy} {dx} + p(x)y = q(x)y^n` (note that LDEs are a special case of this when `n=0`). In these cases, you have to first divide the entire equation by `y^n`, then use the substitution `v = y^{1 - n}` to simplify. WS2.7.2 contains a nice example of this: we're given `frac {dy} {dt} - frac y t = - frac {y^2} {t^2}` (for `t > 0`). In this case, `p(t)` is clearly `-frac 1 t`, and `q(t)` is clearly `- frac {1} {t^2}`. `n` is `2`. Dividing the whole equation by `-y^2` yields `-y^{-2} frac {dy} {dt} + frac 1 {ty} = frac {1} {t^2}`. We use the substitution `v = y^{-1}`, which conveniently gives us `frac {dv} {dt} = - frac {dy} {dt} y^{-2}`. Ain't that convenient?

Plugging in these substitutions gives us the very nice `frac {dv} {dt} + frac v t = frac {1} {t^2}`. It's linear, folks! I won't bore you with the details of the linear solution; our answer is `v = frac {ln(Ct)} t`. Resubstituting gives us `y = frac t {ln(Ct)}` (note that I simplified).
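
And a quick check that the resubstituted answer really satisfies the original Bernoulli equation (a sketch, with `C` treated as a positive constant symbol):

```python
import sympy as sp

t, C = sp.symbols('t C', positive=True)
# proposed solution y = t / ln(C t)
y = t / sp.log(C * t)
# plug into y' - y/t + y^2/t^2, which should vanish
print(sp.simplify(y.diff(t) - y / t + y**2 / t**2))  # expect 0
```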

3.2 -> 3.3: Systems of Linear Differential Equations

SLDEs are an extension of the linear systems of equations everyone who's taken 1554 sees in their nightmares. The idea is simple: not unlike the familiar homogeneous system `A vec x = vec 0`, where `A` is a transformation matrix, we now have `vec x' = A vec x`: the derivative of the vector equals some matrix times the vector. This has a number of interesting implications. The general way to solve these is by eigenvalues: for a system `vec X' = A vec X`, where `A` has real and different eigenvalues `lambda_1` and `lambda_2` with corresponding eigenvectors `vec v_1` and `vec v_2`, our general solution is `vec X = c_1 e^{lambda_1 t} vec v_1 + c_2 e^{lambda_2 t} vec v_2`, where `c_1` and `c_2` are arbitrary constant multipliers.

SLDEs have many useful properties: they can be rewritten as matrices, of course, but they can also be used to represent a higher-order equation as a system of lower-order equations with a nice substitution.

WS3.2.1 (a) is a pretty nice classification problem. We're asked to write the system `x' = -x + ty, y' = tx - y` in matrix form, and classify it as homogeneous and/or autonomous. To do this, we let `vec X = [x, y]`, and thus `vec X' = [x', y']`, and rewrite as `X' = [[-1, t], [t, -1]] X`. If you aren't convinced, do the multiplication - you'll get back the original system. This is clearly non-autonomous because there is a `t` term on the right side; it is also clearly homogeneous because there is no constant term.

WS3.2.2 provides a pretty good example of rewriting higher-order equations via a substitution. We're asked to write `u'' - 2u' + u = sin(t)` as an SLDE. In this case, we use the substitution `y = u'` and `x = u`, so `y' = u''` and `x' = u'`. Note also that `y = x'`. We can substitute these to get `y' - 2y + x = sin(t)`. This is first-order! It's also incomplete - we don't yet have an equation relating `x` and `y`. We want to do entirely without `u`, so we just apply the constraint `y = x'`: now it's a complete system. We can even write it in matrix form: algebra gives us `x' = y, y' = 2y - x + sin(t)`, so if we let `vec X = [x, y]`, we have `vec X' = [[0, 1], [-1, 2]] vec X + [0, sin(t)]`. Note that the extra term `[0, sin(t)]` (the part that doesn't multiply `vec X`) means this is nonhomogeneous, and because it contains `t`, the system is also nonautonomous!
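
You can confirm the matrix form reproduces the system with one multiplication (sympy sketch):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
A = sp.Matrix([[0, 1], [-1, 2]])
X = sp.Matrix([x, y])
# right-hand side of X' = A X + [0, sin(t)]
print(A * X + sp.Matrix([0, sp.sin(t)]))
# expect Matrix([[y], [-x + 2*y + sin(t)]]), i.e. x' = y and y' = 2y - x + sin(t)
```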

WS3.3.1 introduces the problem of actually solving SLDEs in matrix form. Part (a) asks us to find the general solution to `vec X' = [[1, 1], [4, -2]] vec X`: recall that the general form of the solution is `vec X = c_1 e^{lambda_1 t} vec v_1 + c_2 e^{lambda_2 t} vec v_2`. To find the eigenvalues, I prefer the characteristic polynomial method, but you can do whatever you like: they're `lambda_1 = 2` and `lambda_2 = -3`. Finding the corresponding eigenvectors is not terribly challenging either; you should get `vec v_1 = [1, 1]` and `vec v_2 = [-1, 4]`. We can just plug these in to get `vec X = c_1 e^{2t} [1, 1] + c_2 e^{-3t} [-1, 4]`. Not particularly difficult.
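
Eigenvalue hunting is easy to double-check with sympy; keep in mind eigenvectors are only unique up to a constant multiple, so sympy's scaling may differ from ours (a sketch):

```python
import sympy as sp

A = sp.Matrix([[1, 1], [4, -2]])
for val, mult, vecs in A.eigenvects():
    print(val, [list(v) for v in vecs])
# expect eigenvalue 2 with a multiple of [1, 1], and -3 with a multiple of [-1, 4]
```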

My general aversion to graphing means I'm not going to try to graph any phase portraits here. Keep in mind that they will probably be on the test!

3.4 -> 3.5: Complexity

Of course, things don't always work out nicely. Real and different eigenvalues are just a special case of eigenvalues in general, which can be repeated or complex (note that, for 2x2 matrices, they cannot be both).

When your eigenvalues are complex, you'll need to use Euler's identity: `e^{i theta} = cos(theta) + i sin(theta)`. Use this to expand the solution for a single eigenvalue (you don't need to find both; they're conjugates), and you'll be able to do algebraic munging to get something in the form `vec a + i vec b`, where `vec a` and `vec b` are real vector-valued functions of `t`. If you've done it correctly, `vec a` and `vec b` are two linearly independent real solutions - so you can write the general solution as `c_1 vec a + c_2 vec b`, where `c_2` implicitly eats the `i` term. This can actually be graphed, unlike the complex version.

That might sound complicated, but it really isn't too bad. Let's do an example. WS3.4.1 asks us to find a general solution for `vec X' = [[1, 2], [-5, 1]] vec X`. The characteristic polynomial method very quickly yields `lambda = 1 - i sqrt(10)` (note that the complex conjugate of this is the other eigenvalue, but we don't need to worry about that). The corresponding eigenvector is `vec v = [i sqrt(10), 5]`. We can substitute this in for the first solution to get `e^{t - isqrt(10)t} [i sqrt(10), 5]`. This is technically a correct solution, but it's also nasty, and we can't graph it. Let's simplify! Euler's identity quickly gets us to `e^t (cos(-sqrt(10)t) + isin(-sqrt(10)t)) [isqrt(10), 5]`. Yikes. Fortunately, we have another trick up our sleeve: distribution. `e^t [isqrt(10) cos(-sqrt(10)t) - sqrt(10)sin(-sqrt(10)t), 5cos(-sqrt(10)t) + 5isin(-sqrt(10)t)]`. This separates out into `e^t([-sqrt(10)sin(-sqrt(10)t), 5cos(-sqrt(10)t)] + i [sqrt(10) cos(-sqrt(10)t), 5sin(-sqrt(10)t)])`. The vectors are linearly independent, so we can finally turn this into a full solution: `vec X = c_1 e^t [-sqrt(10)sin(-sqrt(10)t), 5cos(-sqrt(10)t)] + c_2 e^t [sqrt(10) cos(-sqrt(10)t), 5sin(-sqrt(10)t)]`. A real valued solution! Note that the graph of this is a spiral growing outwards from the origin.
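
Those two real vector solutions are easy to mistype, so here's a sympy sketch confirming that both satisfy `vec X' = A vec X`:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [-5, 1]])
s = sp.sqrt(10)

X1 = sp.exp(t) * sp.Matrix([-s * sp.sin(-s * t), 5 * sp.cos(-s * t)])
X2 = sp.exp(t) * sp.Matrix([s * sp.cos(-s * t), 5 * sp.sin(-s * t)])
for X in (X1, X2):
    print(sp.simplify(X.diff(t) - A * X))  # expect zero vectors
```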

When eigenvalues are repeated, our story gets much worse. There is still a very easy solution if you can find two linearly independent eigenvectors, in which case the normal solution applies; however, if you cannot find two linearly independent eigenvectors, there is a trick to find a second solution: if you have the eigenvalue/vector pair `lambda, vec v`, your second solution is `te^{lambda t} vec v + e^{lambda t} vec w`, where `vec w` is a solution to the linear equation `(A - lambda I) vec w = vec v`. If this sounds complicated, that's probably because it is.

Let's do an example. WS3.5.1 asks us to solve `vec X' = [[3, -4], [1, -1]] vec X`. This has a repeated eigenvalue `lambda = 1` with only one linearly independent eigenvector `vec v = [2, 1]`, so our first solution is `c_1 e^{t} [2, 1]`. We need another one. To find that, we solve the equation `[[2, -4], [1, -2]] vec w = [2, 1]`. This is easy to solve with Gaussian elimination (there are infinitely many solutions; any one will do) to get `vec w = [3, 1]`. Thus, our second solution is the disgusting `te^t [2, 1] + e^t [3, 1]`. This means our final general solution is `vec X = c_1 e^t [2, 1] + c_2 (te^t [2, 1] + e^t [3, 1])`.
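
One more sympy sketch: the second (generalized-eigenvector) solution really does satisfy the system.

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3, -4], [1, -1]])
# second solution: t*e^t*[2, 1] + e^t*[3, 1]
X = t * sp.exp(t) * sp.Matrix([2, 1]) + sp.exp(t) * sp.Matrix([3, 1])
print(sp.simplify(X.diff(t) - A * X))  # expect the zero vector
```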

Final Notes

That's everything! Note that I covered all the material, but I did so incredibly briefly; be sure to read the previous weekly reviews and quiz reviews for a more detailed look.

Aside from all of the equations and identities here, you'll also need to know some basic tricks: exponent rules, partial fraction decomposition, and Gaussian elimination all show up in the problems above, so make sure they're fresh.

Good luck, and don't forget your balloon hats!