
Differential Equations Week 2

By Tyler Clarke in Calculus on 2025-5-25

This post is part of a series; the next post in the series can be found here.

Hello, everyone! It's only been a few days since I last screamed into the void here, but it feels like a proper eternity. Hard to believe we're only 2 weeks in! The quiz Thursday went well! Nobody else wore funny hats, but I met someone who's been reading Deadly Boring Math (a fellow physics major, no less!), which was pretty cool. We covered a massive amount of material this week, and are now equipped to solve some very fancy and complicated problems.

Uniqueness and Intervals of Existence

Particularly useful in our study of differential equations is the ability to determine whether or not a differential equation has a solution before solving it. This allows us to skip solving an unsolvable equation, which is nice, but perhaps more importantly lets us determine where a solution might exist: the interval of existence.

This is easiest to demonstrate for a linear differential equation. Given an equation in the form `frac {dy} {dt} + p(t)y = q(t)`, the interval of existence is simply the intersection of the intervals where `p(t)` and `q(t)` are defined and continuous. For example: given the equation `frac {dy} {dt} + frac y t = ln(t + 2)`, we have `p(t) = frac 1 t` and `q(t) = ln(t + 2)`: `p(t)` is defined for `-oo < t < 0, 0 < t < oo`, and `q(t)` is defined whenever `t + 2 > 0` or `t > -2` (the input to `ln` must always be positive), so the interval of existence of our equation is `-2 < t < 0, 0 < t < oo`.

Knowing the interval of existence also tells us where a unique solution exists, given a starting point. For instance, if we start at `t = -1` in the above example, the solution exists and is unique for `-2 < t < 0`, because that's the interval that contains `-1`; if we instead start at `t = 1`, the solution exists and is unique for `0 < t < oo`. Just pick the interval that fits. Note that this theorem says nothing about non-existence; there might be a solution at `t = 0` or `t = -3`, but we can't say anything about it without actually finding the solution.
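
If you want to sanity-check that kind of reasoning, SymPy can compute the domains for you. Here's a minimal sketch, assuming you have SymPy installed (the exact printed form of the sets may vary between versions):

```python
from sympy import symbols, ln, S
from sympy.calculus.util import continuous_domain

t = symbols('t', real=True)
p = 1 / t        # p(t) from dy/dt + p(t)y = q(t)
q = ln(t + 2)    # q(t)

dom_p = continuous_domain(p, t, S.Reals)   # everything except t = 0
dom_q = continuous_domain(q, t, S.Reals)   # t > -2
print(dom_p.intersect(dom_q))              # expect (-2, 0) union (0, oo)
```

Picking the interval of existence is then just a matter of reading off the piece of that union that contains your starting `t_0`.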

I won't include the proof for why this works as it's complicated and not particularly germane. The textbook does include it, and if you like that sort of thing, it's worth checking out. As always, if you'd like a Deadly Boring explanation of any of the proofs in this course, shoot me an email and I'll write a post about 'em!

What if the system isn't linear? Sadly, we don't actually have a way to find a rigid interval of existence; all we can do is say that one definitely exists (note that we still can't say that one doesn't exist). To do this, get the system in the form `frac {dy} {dt} = f(t, y)` and take the partial derivative `frac {df} {dy}`. If `f` and `frac {df} {dy}` are both continuous over some interval in `y` that contains the initial value `y_0`, and over some other interval in `t` that contains the initial value `t_0`, then there is a unique solution to the IVP somewhere inside those two intervals. We can't say exactly what interval this is - it could be a wide variety of different intervals inside our bounds. What we can say for sure is that the interval exists and contains `t_0`, meaning it's reasonable to continue.

Let's do an example. Given the particularly nasty and very much nonlinear equation `frac {dy} {dt} = frac {t^2 - 4t + 13} {y - 1}`, is there a solution for the initial condition `y(3) = 4`?

In this case, it's quite clear that `f(t, y) = frac {t^2 - 4t + 13} {y - 1}`. The numerator here is defined for all `t`, and the rest is defined for `-oo < y < 1, 1 < y < oo`. `frac {df} {dy}` initially looks difficult, but is in fact quite simple - this is a partial derivative, so `t` is held constant. `frac {df} {dy} = -frac {t^2 - 4t + 13} {(y - 1)^2}`. This has exactly the same intervals. Hence, we can guarantee a unique solution on some interval around the initial condition `y(3) = 4`. Note that we can't say for sure that an initial condition like `y(a) = 1` has no solution; all we can say is that the theorem no longer guarantees a unique one there. If we wanted to know the exact interval of existence, we'd have to actually solve the equation.
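
As a quick check on that partial derivative (which is easy to fumble), here's a short SymPy sketch; the takeaway is just that both `f` and `frac {df} {dy}` only misbehave at `y = 1`:

```python
from sympy import symbols, diff, simplify

t, y = symbols('t y', real=True)
f = (t**2 - 4*t + 13) / (y - 1)

df_dy = simplify(diff(f, y))
print(df_dy)   # expect -(t**2 - 4*t + 13)/(y - 1)**2
```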

Brief Aside: Some Distinctions

Linear ODEs are different from nonlinear ones for a wide variety of reasons. I won't get into all of them in detail, but here's a brief overview of the ones we've already bumped into: for a linear equation, the interval of existence can be read directly off `p(t)` and `q(t)` without solving anything, while for a nonlinear equation we can only guarantee that some interval containing the initial condition exists; and a linear equation can always be attacked with the integrating-factor recipe, while a nonlinear equation needs special-case tricks like exactness and substitution (both covered below).

Note: I have skipped all of section 2.5 as it's essentially a rehash of autonomous equations with a bit of modeling thrown in. We'll probably revisit it later in a quiz or exam review; for now, I'd recommend closely watching the lecture on 2.5 and reading the textbook section.

Exact Equations

Exact equations are a particularly fascinating class of equation that can be solved trivially with a substitution. The core idea is to find a function `w(x, y)` with partial derivatives `frac {dw} {dx}` and `frac {dw} {dy}` such that you can rewrite your differential equation as `frac {dw} {dx} + frac {dw} {dy} frac {dy} {dx} = 0`: by the chain rule, that left-hand side is just the total derivative of `w(x, y(x))` with respect to `x`. Integrating this will simply yield `w = c`, which you can then substitute.

It sounds confusing, but is really quite simple. For example, given the equation `2x + y^2 + 2xy frac {dy} {dx} = 0`, if we can find a function `w` for which `frac {dw} {dx} = 2x + y^2` and `frac {dw} {dy} = 2xy`, we can rewrite as `frac {dw} {dx} + frac {dw} {dy} frac {dy} {dx} = 0` - the total derivative of `w` along the solution is zero - and integrate to get `w = c`. Does such a function exist? If you took multivariable recently, I can only imagine you share my current evil grin. This is just a potential-function problem! We know how to solve this!

The first step is to find the integral `int frac {dw} {dx} dx`. This is trivially found to be `w(x, y) = x^2 + xy^2 + C(y)`. We need to find what `C(y)` is. We have a convenient `frac {dw} {dy}` hanging around that we can do algebra on, so let's differentiate with respect to `y` and compare: `frac {dw} {dy} = 2xy = frac {d} {dy} (x^2 + xy^2 + C(y)) = 2xy + C'(y)`. Doing some algebra tells us that `C'(y) = 0`, so `C(y) = C`. Substitute into the above equation to get `w(x, y) = x^2 + xy^2 + C`.

The partial derivatives of this are exactly what we need (you can check if you don't believe me; I'll wait). Following the substitution procedure outlined above gives us `x^2 + xy^2 = C` - the explicit form of which is `y = +- sqrt(frac {C - x^2} {x})`, valid wherever `x != 0` and `frac {C - x^2} {x} >= 0`.
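
If you'd rather not take my word for it, here's a minimal SymPy sketch that confirms the partial derivatives of `w = x^2 + xy^2` really are the two pieces of the original equation:

```python
from sympy import symbols, diff

x, y = symbols('x y')
w = x**2 + x*y**2    # the potential function we just found

print(diff(w, x))    # expect 2*x + y**2
print(diff(w, y))    # expect 2*x*y
```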

Exact equations are obviously quite powerful. If we can turn them into potential-function problems, we can find an analytical solution with very little effort! Let's do a slightly harder problem (straight from the textbook). We need to find an exact solution to the differential equation `ycos(x) + 2xe^y + (sin(x) + x^2e^y - 1)frac {dy} {dx} = 0`.

It's immediately obvious that this is in the necessary form `frac {dw} {dx} + frac {dw} {dy} frac {dy} {dx} = 0`. We have `frac {dw} {dx} = ycos(x) + 2xe^y` and `frac {dw} {dy} = sin(x) + x^2e^y - 1`. Integrating `frac {dw} {dx}` with respect to `x` gives us `w = ysin(x) + x^2e^y + C(y)`, and taking the derivative with respect to `y` yields simply `frac {dw} {dy} = sin(x) + x^2e^y + C'(y)`. We can thus construct the equation `sin(x) + x^2e^y + C'(y) = sin(x) + x^2e^y - 1`. It is immediately obvious that `C'(y) = -1`, so `C(y) = -y + C`. Substitution gives us `w = ysin(x) + x^2e^y - y + C` - this produces the correct partial derivatives.

The last step is to simply substitute into the old equation, to get `ysin(x) + x^2e^y - y = C`. This is the correct solution! Using some trickery from multivariable, we were able to solve a fiendishly difficult differential equation almost effortlessly.
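
Same trick as before if you want to double-check: a quick SymPy sketch confirming that the partial derivatives of our `w` match the two pieces of the original equation:

```python
from sympy import symbols, diff, sin, cos, exp, simplify

x, y = symbols('x y')
w = y*sin(x) + x**2*exp(y) - y

print(simplify(diff(w, x) - (y*cos(x) + 2*x*exp(y))))      # expect 0
print(simplify(diff(w, y) - (sin(x) + x**2*exp(y) - 1)))   # expect 0
```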

It's obviously a good idea to use exact equations whenever possible. And the good news is that some inexact equations can actually be turned into exact equations with a bit of algebra! The goal is to find some function `mu(x, y)`, an integrating factor, that can be multiplied into the equation to make it exact (if this sounds familiar, it's because we do this every time we solve a linear differential equation).

An equation `M(x, y) + N(x, y) frac {dy} {dx} = 0` can only be solved via potential-function magic if `frac {dM} {dy} = frac {dN} {dx}` - the familiar mixed-partials condition from multivariable. Hence, given an equation that fails this test, we're looking for a `mu` such that `frac {d} {dy} (mu M) = frac {d} {dx} (mu N)`. We can use the product rule to turn this into the equation `M frac {d mu} {dy} + mu frac {dM} {dy} = N frac {d mu} {dx} + mu frac {dN} {dx}`, which can be rearranged to get `M frac {d mu} {dy} - N frac {d mu} {dx} + mu (frac {dM} {dy} - frac {dN} {dx}) = 0`. Unfortunately, this is usually fairly difficult to solve, except in some special cases where `mu` depends only on one variable. In those cases, we immediately know that the derivative of `mu` with respect to the other variable is always 0, which greatly simplifies the equation.

How can we find whether or not `mu` depends on a single variable? Clever substitution. If `mu` depends only on `x`, then `frac {d} {dy} (mu M) = mu frac {dM} {dy}` (don't believe me? Try doing the product rule yourself - you'll get the same result!) and `frac {d} {dx} (mu N) = mu frac {dN} {dx} + N frac {d mu} {dx}`. This is where the math gets really strange: we know that, for this to be valid, `frac {d} {dy} (mu M) = frac {d} {dx} (mu N)`, so we have an equation: `frac {d} {dy} (mu M) = mu frac {dN} {dx} + N frac {d mu} {dx}`. Knowing what we know about the left side, we get `mu frac {dM} {dy} = mu frac {dN} {dx} + N frac {d mu} {dx}`. This can be further boiled down to a differential equation `frac {d mu} {dx} = mu frac {frac {dM} {dy} - frac {dN} {dx}} {N}`. If the coefficient of `mu` on the right hand side is a function in `x` only, then this is a very simple separable ODE, and we have our `mu`!

Let's do an example. Straight from the textbook, we're given an equation `3xy + y^2 + (x^2 + xy)y' = 0`. This cannot be solved directly; we need to find an integrating factor. It's easy enough to read off that `M = 3xy + y^2` and `N = x^2 + xy`, which we can substitute into the above equation to get `frac {d mu} {dx} = mu frac {3x + 2y - 2x - y} {x^2 + xy}`. Simplify to get `frac {d mu} {dx} = mu frac {x + y} {x^2 + xy} = mu frac {x + y} {x(x + y)} = mu frac 1 x`. Our coefficient is in terms of `x` only - it's separable!
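
Here's that same computation as a tiny SymPy sketch - the point is just to confirm that `frac {frac {dM} {dy} - frac {dN} {dx}} {N}` really does collapse to a function of `x` alone:

```python
from sympy import symbols, diff, simplify

x, y = symbols('x y')
M = 3*x*y + y**2
N = x**2 + x*y

print(simplify((diff(M, y) - diff(N, x)) / N))   # expect 1/x
```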

We can separate out to get `frac 1 { mu } d mu = frac 1 { x } dx`. Integrating this yields `ln|mu| = ln|x| + C`; since we only need one integrating factor, we can drop the constant and take `mu(x) = x`. Nice! I'm not going to carry the rest through here, but if you're interested in some practice, you can multiply both sides of the original equation by `mu` to get an exact equation that can be easily solved.
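
And one more sanity check, if you want it - a short SymPy sketch confirming that multiplying through by `mu = x` really does make the exactness test pass:

```python
from sympy import symbols, diff, simplify

x, y = symbols('x y')
M = 3*x*y + y**2
N = x**2 + x*y
mu = x   # the integrating factor we just derived

# after multiplying through, d(mu*M)/dy should equal d(mu*N)/dx
print(simplify(diff(mu*M, y) - diff(mu*N, x)))   # expect 0
```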

Substitution

There is a whole family of techniques for solving differential equations that hinge on the ability to turn a difficult equation into something simple. These are called substitution methods. The general idea is that, given a differential equation in `y(x)`, you can come up with some function `v(y, x)` for which a substitution of `y` in terms of `v` is simpler to solve than the original equation. Generally speaking, finding substitutions is hard; however, we have some known substitutions for specific situations that can make things much simpler.

The first and simplest type of substitution is the homogeneous substitution. Homogeneous functions are identified by the identity `f(lambda x, lambda y) = lambda^k f(x, y)`: in simple terms, a function is homogeneous if multiplying the arguments by some constant `lambda` has the same result as multiplying the function by the `k`th power of `lambda`. `k` here is the degree. A homogeneous differential equation is simply a differential equation in the form `M + N frac {dy} {dx} = 0`, where `M` and `N` are both homogeneous functions with the same degree `k`.

In the case of a homogeneous differential equation, we can simplify it with the substitution `u = frac y x`, `y = ux`, to get something variable-separable. I won't prove why this works here; however, it's really quite fascinating, and I highly recommend you read the textbook or Paul's notes on the subject. Let's do an example. We're given an equation `frac {dy} {dx} = frac {x^2 - xy + y^2} {xy}`, and we need to solve it as usual. This is not by itself solvable with any of the methods we've already learned. Let's check that it's homogeneous! Rewriting as `- frac {x^2 - xy + y^2} {xy} + frac {dy} {dx} = 0` puts it in the right form, with `M = - frac {x^2 - xy + y^2} {xy}` and `N = 1`. How do we know this is homogeneous? Substituting in `lambda x` and `lambda y` gives us `N = lambda^0 1` and `M = - frac {lambda^2 x^2 - lambda^2 xy + lambda^2 y^2} {lambda^2 xy} = - lambda^0 frac {x^2 - xy + y^2} {xy}`, so both `M` and `N` are homogeneous functions with the same degree `k = 0`.
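
Here's that same degree check as a quick SymPy sketch, if you'd rather let the computer do the algebra:

```python
from sympy import symbols, simplify

x, y, lam = symbols('x y lambda', positive=True)
M = -(x**2 - x*y + y**2) / (x*y)

# M(lambda*x, lambda*y) - M(x, y) should vanish, i.e. M is homogeneous of degree 0
print(simplify(M.subs({x: lam*x, y: lam*y}) - M))   # expect 0
```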

Using the substitution `y = ux`, and by the product rule `frac {dy} {dx} = u + x frac {du} {dx}`, this equation becomes `- frac {x^2 - ux^2 + u^2x^2} {ux^2} + u + x frac {du} {dx} = 0`. This simplifies to `- frac {1 - u + u^2} {u} + u + x frac {du} {dx} = 0`. One more step: `- frac {1 - u} {u} + x frac {du} {dx} = 0`. Start to look familiar? That's right, this is separable. Some algebra gives us `frac 1 x dx = frac {u} {1 - u} du`. Integrate to get `ln|x| + c = -u - ln|1 - u|`. Now we can resubstitute `u = frac y x`, yielding `-frac y x - ln|1 - frac y x| = ln|x| + c`. We can make this much prettier with a bit of algebra: `frac y x + ln|x - y| = c`.
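
To check that implicit solution without grinding through the algebra again, here's a SymPy sketch (restricting to `x > 0` so the logarithm behaves) that differentiates `frac y x + ln(x - y)` implicitly, solves for `y'`, and compares against the right-hand side of the original equation:

```python
from sympy import Function, symbols, log, diff, solve, simplify

x = symbols('x', positive=True)
y = Function('y')

implicit = y(x)/x + log(x - y(x))                    # left side of y/x + ln|x - y| = c
yprime = solve(diff(implicit, x), y(x).diff(x))[0]   # implicit differentiation

rhs = (x**2 - x*y(x) + y(x)**2) / (x*y(x))   # original dy/dx
print(simplify(yprime - rhs))                # expect 0
```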

Another special case of substitution is a Bernoulli equation. This is a differential equation in the form `frac {dy} {dt} + q(t)y = r(t)y^n`, where `n` is any real number. In these cases, you first divide the entire equation by `y^n`, yielding `frac 1 {y^n} frac {dy} {dt} + frac 1 {y^{n-1}} q(t) = r(t)`, then use the substitution `u = y^(1 - n)` to solve. Once again, the proof here is fascinating, and once again, I'm not going to go through it; read the textbook.

Let's do an example. Given a differential equation `frac {dy} {dt} + y = y^3`, solve for `y`. This is Bernoulli with `n = 3`, so our first step is the division: `frac 1 {y^3} frac {dy} {dt} + frac {1} {y^2} = 1`. The substitution `u = y^(1 - n)` gives us `u = y^{-2}`, and `frac {du} {dt} = -2y^{-3} frac {dy} {dt}` (why is `frac {dy} {dt}` here? It's because we find `frac {du} {dt}` with the chain rule). We substitute this into the equation to get `- frac 1 2 frac {du} {dt} + u = 1`, which can be algebra'd to get `frac {du} {dt} = 2(u - 1)`. A-ha! Separable! The result: `ln|u - 1| = 2t + C`. Exponentiate both sides and substitute back in to get `y^{-2} = Ce^{2t} + 1`. Finally, do some algebra to get `y = +- sqrt(frac 1 {Ce^{2t} + 1})`. Not too hard!
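
If you have SymPy handy, `dsolve` recognizes Bernoulli equations on its own, which makes it a convenient answer key. A minimal sketch (the constant and sign conventions in the output may differ from ours, but the solutions should be equivalent):

```python
from sympy import Function, Eq, dsolve, symbols

t = symbols('t')
y = Function('y')

ode = Eq(y(t).diff(t) + y(t), y(t)**3)   # dy/dt + y = y^3
print(dsolve(ode, y(t)))   # expect solutions equivalent to y = +-1/sqrt(C*exp(2*t) + 1)
```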

Some Preview: Systems of Linear Differential Equations

Next week we'll be diving into systems of linear differential equations (we already covered some of them in Friday's lecture). Just like systems of linear equations in algebra, systems of linear differential equations consist of several different interdependent differential equations: i.e., you can't solve one without knowing the solutions to the others. These systems of linear ODEs can be autonomous, just like any other ODE; they can be graphed in a wide variety of ways, and they can be solved with exponentials, as we'll see in a moment.

One of the most useful properties of SLDEs (Systems of Linear Differential Equations) is that they can be written in a matrix form. Generally, given a system `frac {dx} {dt} = a_1x + b_1y + c_1`, `frac {dy} {dt} = a_2x + b_2y + c_2`, you can rewrite as `[x', y'] = [[a_1, b_1], [a_2, b_2]] cdot [x, y] + [c_1, c_2]`. This is important! In the case where `c = [c_1, c_2] = 0`, this system is considered to be homogeneous, and you can trivially solve it simply by finding the eigenvalues and eigenvectors of the constant matrix. Given two distinct real eigenvalues `lambda_1`, `lambda_2` with corresponding eigenvectors `hat v_1`, `hat v_2`, the general solution is simply the linear combination `[x, y] = d_1 e^{lambda_1 t} hat v_1 + d_2 e^{lambda_2 t} hat v_2`.

Let's do a quick example. Given the equations `frac {dx} {dt} = -3x + y` and `frac {dy} {dt} = -y`, we can write the matrix form `hat x' = [[-3, 1], [0, -1]] hat x`, where `hat x = [x, y]`. The eigenvalues of this are easily found to be -3 and -1, with corresponding eigenvectors `[1, 0]` and `[1, 2]`. This means the general solution is the linear combination `[x, y] = d_1 [1, 0] e^{-3t} + d_2 [1, 2] e^{-t}`.
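
Eigenvalue hunting by hand is error-prone, so here's a small NumPy sketch to check the arithmetic. Keep in mind that NumPy normalizes its eigenvectors, so the column corresponding to -1 comes out as a scaled version of `[1, 2]`:

```python
import numpy as np

A = np.array([[-3.0,  1.0],
              [ 0.0, -1.0]])

values, vectors = np.linalg.eig(A)
print(values)    # expect [-3., -1.] (order may vary)
print(vectors)   # columns are unit eigenvectors; the -1 column is [1, 2]/sqrt(5)
```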

SLDEs are fascinating, and we have quite a lot more ground to cover. I'll leave that for next week.

Final Notes

The first two homeworks are due tomorrow. If you haven't done them, make sure to finish them ASAP! They're not very hard. This week is going to be pretty quiet, but next Thursday (June 3rd) we have a quiz in studio, and then the Monday immediately following is our first midterm. Watch this space for review material pertaining to both!

This week has been proof-heavy. I usually omit the more complicated proofs (see: all of them) to keep these posts concise rather than thorough, but if anyone would like that to change, shoot me an email!

I think that's everything. See you next weekend!