# Exact and non-exact differential equations

Studying for the mathematics subject GRE test, I have lately been going over differential equations material. The section on the subject in the Princeton Review test preparation book is a little suboptimal in my mind – it runs through things a bit too quickly, without tracing out the basic explanations of how things work to the degree that other sections do.

To add to this situation, when I studied differential equations back at Evergreen, the route our professor took was heavier on the qualitative side than the bag-of-tricks side. The reasoning for this was that in the study of differential equations, students often find themselves learning all of these great tricks and come to expect that those are the tools which will help them solve their problems. When they get out into the real world, they find that applying the tricks is usually very difficult, and sometimes none of them apply. Other times, you find a trick that works but gives you such a mess of an answer that you may as well not have solved for it at all. Qualitative techniques are valuable because they enable you to extract information (sometimes all you need to know within whatever context is being worked in) without necessarily solving the equation, and these techniques are usually much easier to apply. I liked this approach so much that I went on to study non-linear dynamics rather than taking partial differential equations (though I’d like to at some point), which was the course where the professor used to dole out the tricks.

Together, a skimpy section on diffeqs in the GRE book and a more qualitative study of differentials in school have left me with a little extra work to do on this topic. En route, some interesting stuff (new to me) came up that I thought I would sketch out here. The tidbit in question is the relationship between exact and non-exact differential equations.

The basis of exact differential equations stems from the following: if you have a family of curves $F(x, y) = C$, they must obey the *total differential* equation $dF = 0$. The total differential is given as

$$dF = \frac{\partial F}{\partial x}\,dx + \frac{\partial F}{\partial y}\,dy$$

in the book. Looking at this, I could kind of see what it was doing, but the concept didn’t really make full sense to me (in that crystal-clear way I like it to when I’m doing mathematics) until I started exploring the concept of the total derivative (with the help of Wikipedia). The total derivative of $F$ with respect to $x$ is given as

$$\frac{dF}{dx} = \frac{\partial F}{\partial x} + \frac{\partial F}{\partial y}\frac{dy}{dx},$$

which can be obtained informally from the equation above by “dividing through by the differential $dx$”. This derivative gives us a measure of the *total* degree to which $F$ is changing with respect to $x$ when there is an implicit relationship between $x$ and $y$. Of course, this is the case if we are assuming that $F(x, y) = C$. Once I figured this out, the pieces started falling into place more clearly.

Since we could differentiate with respect to either $x$ or $y$ and then “multiply through” by the corresponding differential to get the same total differential form above, it makes sense that this total differential should add up to $0$ along the curves, since both of the total derivatives do.
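As a quick sanity check on this idea, here is a small sketch (my own example, not from the book) using sympy, assuming it is available: treating $y$ as an implicit function of $x$, the total derivative of a sample $F$ along a curve $F = C$ must be zero, and solving that condition recovers the implicit slope $dy/dx = -F_x / F_y$.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)  # y treated as an implicit function of x

# A sample family of curves (my own choice): F(x, y) = x**2 * y + y**3 = C
F = x**2 * y + y**3

# The total derivative dF/dx picks up both the explicit x-dependence
# and the chain-rule term through y(x)
dF_dx = sp.diff(F, x)

# Along a curve of the family, F is constant, so dF/dx = 0;
# solving for y' gives the implicit slope dy/dx = -F_x / F_y
yprime = sp.solve(sp.Eq(dF_dx, 0), sp.diff(y, x))[0]
print(yprime)  # -2*x*y / (x**2 + 3*y**2), up to sympy's ordering
```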

So, moving on, the text goes on to describe that any differential equation of the form

$$M(x, y)\,dx + N(x, y)\,dy = 0,$$

where

$$M(x, y) = \frac{\partial F}{\partial x}$$

and

$$N(x, y) = \frac{\partial F}{\partial y},$$

would naturally have the family of curves $F(x, y) = C$ as part of its solution space. (Note also that with some continuity assumptions, and limitations associated with the range of possible starting points given by $C$, the Uniqueness Theorem implies that these are the only solutions.)

Assuming the continuity of the second partial derivatives, it also follows that we can tell if there exists such an $F$ by looking to see if

$$\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}.$$

If so, then it follows that we can compute (or try to compute) the integrals

$$\int M(x, y)\,dx$$

and

$$\int N(x, y)\,dy$$

and adjust the constants of integration (which will be single-variable functions of $y$ and $x$, respectively) so that the two resulting integrals match. That matching integral is our $F$.
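To make the recipe concrete, here is a small sympy sketch on an exact equation of my own choosing (not from the book). Rather than matching the constants of integration by eye, the overlap between the two integrals is removed with the equivalent formula $F = \int M\,dx + \int\bigl(N - \frac{\partial}{\partial y}\int M\,dx\bigr)\,dy$.

```python
import sympy as sp

x, y = sp.symbols('x y')

# A sample exact equation (my own choice): (2xy + 3) dx + (x^2 + 4y) dy = 0
M = 2*x*y + 3
N = x**2 + 4*y

# Exactness test: the mixed partials must agree
assert sp.diff(M, y) == sp.diff(N, x)

# Integrate M with respect to x, then add whatever part of the
# y-integral is not already accounted for (the "constant" g(y))
F_from_M = sp.integrate(M, x)
F = F_from_M + sp.integrate(N - sp.diff(F_from_M, y), y)

# Check that F really generates the equation: F_x = M and F_y = N;
# the solutions are then the curves F(x, y) = C
assert sp.simplify(sp.diff(F, x) - M) == 0
assert sp.simplify(sp.diff(F, y) - N) == 0
```

Here $F = x^2 y + 3x + 2y^2$, and the solution curves are $x^2 y + 3x + 2y^2 = C$.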

So this is all fine and dandy, and I was happy to get through this little bit, but then I started wondering about these *non-exact* differential equations. The text went on to discuss equations of the form

$$M(x, y)\,dx + N(x, y)\,dy = 0,$$

where we don’t have such nice and simple conditions on $M$ and $N$ (that is, where $\partial M/\partial y \neq \partial N/\partial x$). It showed how, in some cases, you can come up with an *integrating factor* $\mu$ by which you can multiply both sides of the equation directly above and come up with an equivalent equation which *is* exact. It then went on to show a trick that works in a couple of very specific cases.

The cases in question are when either

$$\frac{1}{N}\left(\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}\right)$$

is a function of $x$ alone or

$$\frac{1}{M}\left(\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y}\right)$$

is a function of $y$ alone. The integrating factor in the first case (and similarly in the second), if one lets $\xi(x) = \frac{1}{N}\left(\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}\right)$, is given by

$$\mu(x) = e^{\int \xi(x)\,dx}.$$
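Here is a quick sympy check of the trick on a non-exact equation of my own choosing (not from the text): $(3xy + y^2)\,dx + (x^2 + xy)\,dy = 0$. The quotient works out to $1/x$, a function of $x$ alone, so the trick applies and gives $\mu = x$.

```python
import sympy as sp

# Positive symbols so that exp(log(x)) simplifies cleanly to x
x, y = sp.symbols('x y', positive=True)

# A sample non-exact equation (my own choice): (3xy + y^2) dx + (x^2 + xy) dy = 0
M = 3*x*y + y**2
N = x**2 + x*y

# Not exact: M_y = 3x + 2y but N_x = 2x + y
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) != 0

# xi = (M_y - N_x)/N must depend on x alone for the trick to apply
xi = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)
assert not xi.has(y)  # here xi = 1/x

# Integrating factor mu = exp(integral of xi dx); here mu = x
mu = sp.exp(sp.integrate(xi, x))

# After multiplying through by mu, the equation is exact
assert sp.simplify(sp.diff(mu*M, y) - sp.diff(mu*N, x)) == 0
```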

This seemed like a pretty cool trick, but I quickly became curious about how it works, so I decided to find out. All I would need to do (assuming the continuity of these second derivatives) is to show that

$$\frac{\partial}{\partial y}(\mu M) = \frac{\partial}{\partial x}(\mu N).$$

The first derivative here was easy with the product rule – the fact that $\mu$ doesn’t depend on $y$ means that we can treat it like a constant and get

$$\frac{\partial}{\partial y}(\mu M) = \mu \frac{\partial M}{\partial y}.$$

Computing the second derivative by the product rule, and using the facts that $\frac{d\mu}{dx} = \xi(x)\,\mu$ and $\xi(x)\,N = \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}$, we get

$$\frac{\partial}{\partial x}(\mu N) = \frac{d\mu}{dx}N + \mu \frac{\partial N}{\partial x} = \xi(x)\,\mu N + \mu \frac{\partial N}{\partial x} = \mu\left(\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x}\right) + \mu \frac{\partial N}{\partial x} = \mu \frac{\partial M}{\partial y}.$$

And so, sure enough, we have that $\frac{\partial}{\partial y}(\mu M) = \frac{\partial}{\partial x}(\mu N)$, as desired. A neat trick, to be sure. I’m curious to see if there is a way of deriving it by assuming that such an integrating factor exists and trying to solve for it. I may get to that at some point, but for now I am satisfied that I have a better sense of why this works.
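The same computation can also be checked symbolically. The following sympy sketch (my own, treating $\xi$ as an abstract function of $x$ alone) confirms that the mismatch between the two cross-partials is exactly $\mu\,(\xi N + N_x - M_y)$, which vanishes precisely when $\xi = (M_y - N_x)/N$, as the trick prescribes.

```python
import sympy as sp

x, y = sp.symbols('x y')
M = sp.Function('M')(x, y)
N = sp.Function('N')(x, y)
xi = sp.Function('xi')(x)            # assumed to depend on x alone
mu = sp.exp(sp.Integral(xi, x))      # the integrating factor e^(integral of xi dx)

lhs = sp.diff(mu * M, y)             # mu is constant in y: gives mu * M_y
rhs = sp.diff(mu * N, x)             # product rule: xi*mu*N + mu*N_x

# The gap between the two sides is mu*(xi*N + N_x - M_y), so the
# equation mu*M dx + mu*N dy = 0 is exact exactly when xi*N = M_y - N_x
gap = sp.expand(rhs - lhs - mu * (xi * N + sp.diff(N, x) - sp.diff(M, y)))
assert gap == 0
```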

What I am even more interested in is the claim that if a non-exact equation has a solution, then there exists an integrating factor for it, though it may be difficult to find. I definitely want to come back and revisit all of this at some point, but for right now I need to get back to moving through the practice booklet.