Deadly Boring Math
The Mostest Bestestest FOSS Math Journal on the Internet[citation needed]
By Tyler Clarke in Calculus on 2025-5-31
This post is part of a series; you can read the next post here.
Hello once again, dear readers! It's been yet another big week in summer diffy q (although thankfully a little less big than the last one - Memorial Day cut out a whole lecture). We've been primarily concerned with solving SLDEs (Systems of Linear Differential Equations), and we've got a whole bunch of ways to do that. Spoiler alert: it's going to require Euler's formula `e^{i theta} = cos(theta) + i sin(theta)`, a fundamental identity you'll have to memorize if you don't want to get a tattoo. We also have the second quiz on Tuesday next week - I'll get a review out tomorrow or Monday. It shouldn't be too hard; just make sure you're familiar with graphing techniques.
SLDEs are a fairly simple idea at face value. Rather like a system of linear equations, we have several (arbitrarily many!) linearly independent differential equations in some dependent variables (`x`, `y`, `z`, `f`, it doesn't matter), all differentiated with respect to the same independent variable `t`. The ones we've dealt with so far are mostly two-variable, so they look rather like `frac {dx} {dt} = 3x - y, frac {dy} {dt} = x + y`.
This can be written very easily in matrix form. To understand how, consider that we can write the above as `[frac {dx} {dt}, frac {dy} {dt}] = [[3, -1], [1, 1]] [x, y]` (not sure how I did this? You should probably review some linear algebra - specifically matrix-vector multiplication). Now, if we say that `X = [x, y]`, and thus `frac {dX} {dt} = [frac {dx} {dt}, frac {dy} {dt}]`, this can be rewritten as `X'(t) = [[3, -1], [1, 1]] X`. A matrix equation! Solving this is more complicated than a typical linear algebra problem. We'll get to that in a bit.
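If you want to sanity-check a matrix form against the original system, a computer algebra system makes quick work of it. Here's a rough sympy sketch (not something from class, just my own illustration):

```python
# Sanity-checking the matrix form with sympy (illustrative only).
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x')(t), sp.Function('y')(t)

A = sp.Matrix([[3, -1], [1, 1]])
X = sp.Matrix([x, y])

# The right-hand sides, written out by hand straight from the system.
rhs_by_hand = sp.Matrix([3*x - y, x + y])

# A*X should reproduce them exactly.
print(sp.simplify(A * X - rhs_by_hand))  # Matrix([[0], [0]])
```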
What if our linear system has constants, like `frac {dx} {dt} = 3x - y + 5, frac {dy} {dt} = x + y - pi`? This is quite simple: we just add a constant vector. This equation becomes `X'(t) = [[3, -1], [1, 1]] X + [5, -pi]`. In this case, the SLDE is no longer homogeneous - the constants aren't 0, so this is harder to solve.
SLDEs also have a concept of autonomy - an SLDE is autonomous if `t` only shows up under the `dt`s, never on its own. It's very possible to write an SLDE that isn't autonomous: `frac {dx} {dt} = x + 3ty, frac {dy} {dt} = tx`. The matrix equation form here is obviously `X'(t) = [[1, 3t], [t, 0]] X`. Homogeneous, but nonautonomous.
Solving SLDEs is fairly simple and procedural - you don't need to think too much about it, just Follow The Process, which I promise I'll get to soon. Solving higher-order differential equations is decidedly not. Fortunately, we have a neat substitution trick to rewrite some second-order ODEs as an SLDE. This one is much easier to explain by example. Given a second-order linear ODE `u'' - 7u' + tu = 3`, we:
1. Substitute `x = u` and `y = u'`, which immediately gives us our first equation: `x' = y`.
2. Rewrite the original ODE in the new variables: `u'' = y'`, so `y' - 7y + tx = 3`, or `y' = -tx + 7y + 3`.
3. Collect the two first-order equations into an SLDE: `frac {dx} {dt} = y, frac {dy} {dt} = -tx + 7y + 3`, or in matrix form `X'(t) = [[0, 1], [-t, 7]] X + [0, 3]`.
Ta-da! Note that this is fairly delicate: for instance, if we leave out that `x' = y`, we're not going to go anywhere useful, and if we used `7x'` instead of `7y`, we'd have ended up unable to turn this into a matrix. To get a proper and useful SLDE, you'll often need to think seriously about what choices for substitutions and eliminations you make.
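If you'd like to double-check that bookkeeping, here's a quick sympy sketch (my own illustration, with `u_pp` standing in for `u''` as given by the ODE):

```python
# Checking the substitution trick with sympy (illustrative only).
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')(t)

# The original ODE, rearranged for u'': u'' = 7u' - t*u + 3
u_pp = 7*sp.diff(u, t) - t*u + 3

# The substitution: x = u, y = u'
x, y = u, sp.diff(u, t)

# Both first-order equations should hold identically.
print(sp.simplify(sp.diff(x, t) - y))          # x' - y -> 0
print(sp.simplify(u_pp - (-t*x + 7*y + 3)))    # y' - (-tx + 7y + 3) -> 0, using u'' from the ODE
```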
That section title chills me to the bone. The simplest way to solve a homogeneous and autonomous SLDE is with eigenvalues and eigenvectors: for a 2x2 problem like the ones we've dealt with so far in this class, we start by finding real eigenvalues `lambda_1` and `lambda_2` with corresponding eigenvectors `v_1` and `v_2`. The astute linear algebra nerd will note that there are infinitely many eigenvectors for a given eigenvalue - we're going to have some arbitrary constant multiples anyways, so it doesn't matter which vector in the eigenspace you pick. I would recommend you prioritize small whole numbers. The general solution in terms of arbitrary constants `c_1` and `c_2` is `c_1 e^{lambda_1 t} v_1 + c_2 e^{lambda_2 t} v_2`. I'm not going to go into the proof of this; it's elementary enough that we can just accept it to be True.
Note that this is only true in the case that the matrix has two real and distinct eigenvalues. Matrices with repeated eigenvalues, or complex eigenvalues, require a bit more work. Let's do an example. Take the system `X'(t) = [[5, -1], [0, 1]] X`. I'll save you the linear algebra: we have eigenvalues `lambda_1 = 1` and `lambda_2 = 5`, with corresponding eigenvectors `v_1 = [1, 4]` and `v_2 = [1, 0]` (if you aren't sure how I got these values, make sure to work through the problem yourself!). We can directly substitute to get `X(t) = c_1 e^t [1, 4] + c_2 e^{5t} [1, 0]`. Pretty easy!
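If you don't trust my linear algebra (fair), a few lines of numpy will both recover the eigen-stuff and spot-check the general solution. This is just an illustration, not part of the course:

```python
# Numerically checking the eigenvalues/eigenvectors and the general solution.
import numpy as np

A = np.array([[5.0, -1.0], [0.0, 1.0]])
vals, vecs = np.linalg.eig(A)
print(vals)   # [5. 1.] (order may vary)
print(vecs)   # columns are eigenvectors, scaled versions of [1, 0] and [1, 4]

# Spot-check X(t) = c1*e^t*[1,4] + c2*e^{5t}*[1,0] by verifying X'(t) = A X(t)
# at an arbitrary t with arbitrary constants.
c1, c2, t = 2.0, -3.0, 0.7
X  = c1*np.exp(t)*np.array([1.0, 4.0]) + c2*np.exp(5*t)*np.array([1.0, 0.0])
Xp = c1*np.exp(t)*np.array([1.0, 4.0]) + 5*c2*np.exp(5*t)*np.array([1.0, 0.0])
print(np.allclose(Xp, A @ X))   # True
```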
How do we pick `c_1` and `c_2`? The simple answer is that we don't: every possible `(c_1, c_2)` is a different solution. If we have an IVP, we can solve as always, but for now it's safe to just leave them alone.
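Just to show how that would go: if we had, say, `X(0) = [3, 4]` (a made-up initial condition), picking the constants is a plain 2x2 linear solve, since at `t = 0` all the exponentials are 1:

```python
# Solving for c1, c2 from a made-up initial condition X(0) = [3, 4].
import numpy as np

v1, v2 = np.array([1.0, 4.0]), np.array([1.0, 0.0])
X0 = np.array([3.0, 4.0])

# At t = 0, c1*v1 + c2*v2 = X0.
c1, c2 = np.linalg.solve(np.column_stack([v1, v2]), X0)
print(c1, c2)   # 1.0 2.0
```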
As much as I loathe graphing, the graph behavior here is actually quite important to talk about. Rather like a phase line in 1d, we can construct a phase portrait in 2d - one axis per dependent variable. Essentially, we plot a bunch of the curves that our solution will follow, and mark the direction of the derivative along each curve with arrows at some key locations, just like with phase lines. To pick these curves, we just grab some random values for `(c_1, c_2)` - for instance, `(1, 0)`, `(0, 1)`, etc.
In the homogeneous case, every solution curve either converges to the origin, diverges away from it, or does some combination of the two. We can actually make predictions about this without graphing it! For instance, we know that if the eigenvalues are both positive, the curves will always flow away, because the `e^{ lambda t }` terms will always push the solution away from the origin. This is unstable, or a nodal source. Similarly, if both eigenvalues are negative, the `e^{ lambda t }` terms will approach 0, driving every solution towards the origin - thus, the origin is asymptotically stable, or a nodal sink. If one eigenvalue is negative and the other is positive, this is a saddle point: you might remember that term from multivariable; it means solution curves flow towards it along one axis, and away from it along another. In our case, both eigenvalues are positive and the system is homogeneous, so the origin is unstable - a nodal source.
When hand-drawing phase portraits, as with pretty much any manual graphing in calculus, your goal is to convey properties of the system rather than to actually provide precise values. I'm not going to do any graphics for these because, frankly, I don't want to, but I recommend you try drawing the phase portrait for the problem above. Hint: near the origin (very negative `t`), solution curves hug `v_1`, but as `t` grows, the `e^{5t}` term quickly takes over and they bend toward `v_2`. It's useful to note that each eigenspace is a line in `R^2`, words I never thought I'd utter again after linear algebra, so these lines are the first things we graph when building a phase portrait of this system.
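If hand-drawing isn't your thing either, here's a rough matplotlib sketch of the phase portrait for the example above (the constants and the plotting window are arbitrary choices on my part):

```python
# A rough phase portrait for X' = [[5, -1], [0, 1]] X (illustrative only).
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[5.0, -1.0], [0.0, 1.0]])
v1, v2 = np.array([1.0, 4.0]), np.array([1.0, 0.0])

# A handful of solution curves for arbitrary (c1, c2) pairs.
t = np.linspace(-1.0, 0.6, 400)
for c1, c2 in [(1, 0), (0, 1), (1, 1), (-1, 1), (1, -1), (-1, -1)]:
    X = c1*np.exp(t)[:, None]*v1 + c2*np.exp(5*t)[:, None]*v2
    plt.plot(X[:, 0], X[:, 1])

# Direction arrows straight from X' = AX on a coarse grid.
xs, ys = np.meshgrid(np.linspace(-3, 3, 15), np.linspace(-3, 3, 15))
plt.quiver(xs, ys, A[0, 0]*xs + A[0, 1]*ys, A[1, 0]*xs + A[1, 1]*ys, angles='xy')

plt.xlim(-3, 3); plt.ylim(-3, 3)
plt.xlabel('x'); plt.ylabel('y')
plt.title('Nodal source at the origin')
plt.show()
```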
It's also quite possible for the matrix to be singular, in which case one eigenvalue is 0. Fortunately, this behaves exactly as you'd expect: the `e^{lambda t}` term for `lambda = 0` simplifies to just `1` - which means the eigenvector component for that `lambda` is a constant solution.
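For a concrete (made-up) example of that, take `X' = [[1, 1], [2, 2]] X` - the matrix is singular, one eigenvalue is 0, and every multiple of its eigenvector just sits still:

```python
# A singular matrix: one eigenvalue is 0, so its eigenvector gives constant solutions.
import numpy as np

A = np.array([[1.0, 1.0], [2.0, 2.0]])   # det = 0
print(np.linalg.eigvals(A))              # approximately [0. 3.] (order may vary)

v0 = np.array([1.0, -1.0])               # eigenvector for lambda = 0
print(A @ v0)                            # [0. 0.] -> X(t) = c*v0 is a constant solution
```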
Real eigenvalues are really just a special case of complex eigenvalues. How can we handle the complex case? For example, if we have the equation `X' = [[-1, -1], [2, 1]] X`, our eigenvalues are `lambda_1 = -i` and `lambda_2 = i`, with corresponding eigenvectors `v_1 = [1, -1 + i]` and `v_2 = [1, -1 - i]` (remember that you only need to find the first one - the second eigenvalue is the complex conjugate of the first, as is the second eigenvector). We only need one eigenvalue/eigenvector pair, for reasons I'll explain in a bit, so I'll pick `lambda_2` and `v_2`. Substituting into the formula gives `X = c e^{it} [1, -1 - i]`. Yeck.
At this point, the clever reader will have looked down at the formula carved into their hand and chuckled slightly. `e^{it}` is pretty close to `e^{i theta}` - we know how to simplify this! Euler's formula, `e^{i theta} = cos(theta) + i sin(theta)` (this formula won't be provided on quizzes/tests), substitutes in to give us `X = c [cos(t) + i sin(t), sin(t) - cos(t) - i sin(t) - i cos(t)]`. That's... not much better. We have a few more tricks in this bag, though. The next step is to separate out the part that is a multiple of `i` from the part that isn't: `X = c [cos(t), sin(t) - cos(t)] + ci[sin(t), -sin(t) - cos(t)]`
There's still a pesky `i` term here. Do you see how we can get rid of it? We're in the very lucky position of having an arbitrary constant, and we can do some weird stuff with it: for instance, we can split it into two arbitrary constants, one of which is a multiple of `i`. `X = c_1 [cos(t), sin(t) - cos(t)] + c_2 [sin(t), -sin(t) - cos(t)]`. This is actually a general solution! The Wronskian (determinant) of those two vectors is nonzero (you can calculate this yourself, if you want to, but I'm not going to), meaning they are linearly independent - and because this is a problem in `R^2`, any set of two linearly independent solutions is a complete basis. We could compute the solution for the first eigenvalue, but that would be unpleasant and unnecessary: it's not possible to have more than two linearly independent solutions here, so we'd end up just doing a bunch of painful algebra to get an equivalent result.
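If you want independent confirmation that the `i`-free answer really solves the original system, sympy will happily grind it out (again, just an illustration):

```python
# Checking that both real vector solutions satisfy X' = AX (illustrative only).
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[-1, -1], [2, 1]])

X1 = sp.Matrix([sp.cos(t), sp.sin(t) - sp.cos(t)])
X2 = sp.Matrix([sp.sin(t), -sp.sin(t) - sp.cos(t)])

for X in (X1, X2):
    print(sp.simplify(X.diff(t) - A * X))   # Matrix([[0], [0]]) both times
```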
What if the real part isn't zero? As it turns out, this isn't really all that much harder. The multiple of your eigenvector can be written as `Ce^{alpha t + i beta t}` - you can't apply Euler's formula to this directly, but you can separate it out to `Ce^{alpha t}e^{i beta t}`, which can be Euler'd to get `Ce^{ alpha t } (cos(beta t) + i sin(beta t))`. The `e^{ alpha t }` just rides along in the algebra.
Why not just leave the complex numbers in rather than going to all this effort? This is mainly for "I-told-you-so" reasons; separating and solving makes a nicer result. There is, however, one added benefit: the equation with `i` is hellish to graph, but with `i` magicked out, it becomes much simpler. The graphs with complex eigenvalues tend to be spirals. If the real part is 0, you get an oval or a circle, and the origin is a center - stable, but very different from asymptotically stable, because the solution curves don't actually approach it! If the real part is positive, you get an unstable spiral source - the solution curves all flow away from the origin; and if the real part is negative, you get an asymptotically stable spiral sink.
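All of that classification depends only on the eigenvalues, so it's easy to automate. Here's a little helper of my own (not from any library) that follows the cases above:

```python
# Classify the origin of X' = AX from the eigenvalues of a 2x2 matrix A.
import numpy as np

def classify_origin(A, tol=1e-12):
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    re, im = lam.real, lam.imag
    if np.any(np.abs(im) > tol):        # complex conjugate pair
        if abs(re[0]) <= tol:
            return "center (stable, but not asymptotically stable)"
        return "spiral source (unstable)" if re[0] > 0 else "spiral sink (asymptotically stable)"
    if re[0] * re[1] < -tol:
        return "saddle point (unstable)"
    if np.all(re > tol):
        return "nodal source (unstable)"
    if np.all(re < -tol):
        return "nodal sink (asymptotically stable)"
    return "degenerate (a zero eigenvalue)"

print(classify_origin([[5, -1], [0, 1]]))    # nodal source (unstable)
print(classify_origin([[-1, -1], [2, 1]]))   # center (stable, but not asymptotically stable)
```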
There exist situations where an eigenvalue is repeated. In these cases, you may still be able to find two linearly independent eigenvectors, in which case you just solve as usual; it's when you only have one eigenvalue with only one linearly independent eigenvector that we have to significantly diverge from the usual process. When that happens, we actually throw away the matrices entirely and use a combination of the variable-separable method and normal linear ODEs.
Let's do an example. Given the equation `X' = [[-1, 2], [0, -1]] X`, the only eigenvalue is `-1`, and the only eigenvector is `[1, 0]`. We can't solve this the conventional way. The easiest way to solve this is actually to break it apart: we have two equations, `frac {dx} {dt} = 2y - x, frac {dy} {dt} = -y`. That second equation is variable-separable! We can turn it into `frac 1 y dy = -1 dt`, which integrates to `ln|y| = -t + C`, which solves to `y = C_1e^{-t}`.
Now we substitute our known value for `y` into the first equation: `frac {dx} {dt} = 2C_1e^{-t} - x`. Does this look familiar? It can be rewritten as `frac {dx} {dt} + x = 2C_1e^{-t}`. This is linear with `p(t) = 1` and `q(t) = 2C_1e^{-t}`! `mu = e^{int p(t) dt} = C_2e^t`, so `C_2e^{t}x = int 2 C_2 C_1 e^{t} e^{-t} dt`. This simplifies to `e^{t}x = int 2C_1 dt`, and integrates to `e^{t}x = 2C_1t + C_3`. Finally, `x = 2C_1 t e^{-t} + C_3e^{-t}`. We can munge this into a solution: `X = C_1e^{-t} [2t, 1] + C_3e^{-t} [1, 0]`. Note that this actually includes the solution we would have gotten from the eigenvector method, `Ce^{-t} [1, 0]`. Beware: this trick only works when one row of the matrix has a zero off the diagonal (like the `[0, -1]` second row here), so that one of the equations involves only its own variable and can be solved on its own!
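And one last sympy sanity check, since this method had the most moving parts (illustrative only):

```python
# Verifying the repeated-eigenvalue solution X = C1*e^{-t}*[2t, 1] + C3*e^{-t}*[1, 0].
import sympy as sp

t, C1, C3 = sp.symbols('t C1 C3')
A = sp.Matrix([[-1, 2], [0, -1]])

X = C1*sp.exp(-t)*sp.Matrix([2*t, 1]) + C3*sp.exp(-t)*sp.Matrix([1, 0])
print(sp.simplify(X.diff(t) - A * X))   # Matrix([[0], [0]])
```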
This one was pretty quick! Probably because this week only covered about two-thirds of the material we normally go through. Watch this space for a quiz 2 review; in the meantime, make sure to do all the worksheets and homeworks! We have our first round of serious homework due on Monday; it shouldn't be too bad, but do be careful not to leave it till the last minute. 'Till next time, auf wiedersehen, and have a good weekend!