Section 5.3 More Nonlinear Mechanics

The equation for the nonlinear pendulum with damping is

\begin{equation*} \frac{d^2 \theta}{dt^2} + \frac{b}{m} \frac{d \theta}{dt} + \frac{g}{L} \sin \theta = 0. \end{equation*}

As a system, we can model the pendulum as

\begin{align} \frac{d \theta}{dt} & = v\tag{5.3.1}\\ \frac{dv}{dt} & = - \frac{b}{m} v - \frac{g}{L} \sin \theta.\tag{5.3.2} \end{align}

This system is not Hamiltonian since

\begin{equation*} \frac{\partial v}{\partial \theta} = 0, \end{equation*}

but

\begin{equation*} -\frac{\partial }{\partial v} \left( - \frac{b}{m} v - \frac{g}{L} \sin \theta \right) = \frac{b}{m} \neq 0. \end{equation*}

We shall have to invent new techniques to analyze such systems.
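If we want to double-check this computation symbolically, a computer algebra system makes quick work of it. The following sketch uses SymPy (the tool choice and the code are illustrative, not part of the text) to confirm that the Hamiltonian condition fails whenever \(b \gt 0\text{.}\)

```python
import sympy as sp

theta, v = sp.symbols('theta v')
b, m, g, L = sp.symbols('b m g L', positive=True)

# Right-hand sides of the damped pendulum system (5.3.1)-(5.3.2)
f = v                                    # d(theta)/dt
h = -(b/m)*v - (g/L)*sp.sin(theta)       # dv/dt

# A Hamiltonian system would require df/dtheta = -dh/dv.
print(sp.diff(f, theta))                 # 0
print(-sp.diff(h, v))                    # b/m, which is nonzero when b > 0
```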

Subsection 5.3.1 The Nonlinear Pendulum and Damping

To find the nullclines of the nonlinear pendulum, let

\begin{align*} \frac{d \theta}{dt} & = v = 0\\ \frac{dv}{dt} & = - \frac{b}{m} v - \frac{g}{L} \sin \theta = 0. \end{align*}

The \(\theta\)-nullcline is \(v = 0\text{,}\) while the \(v\)-nullcline is

\begin{equation*} v = -\frac{mg}{bL} \sin \theta. \end{equation*}

It follows that the equilibrium solutions for the nonlinear pendulum occur at

\begin{equation*} (\theta, v) = (0,0), (\pm \pi, 0), (\pm 2\pi, 0), \ldots. \end{equation*}

This makes sense since the pendulum should not move if the bob is initially hanging downward (\(\theta = 2 \pi n\)) or is balanced upright at the very top of a swing (\(\theta = (2n + 1)\pi\)). Since our first goal is to determine the nature of each equilibrium solution, we will compute the Jacobian of the system (5.3.1)–(5.3.2). This is just

\begin{equation*} \begin{pmatrix} 0 & 1 \\ -(g/L) \cos \theta & - b/m \end{pmatrix}. \end{equation*}

At the equilibrium solutions \((\theta, v) = (0,0), (\pm 2\pi, 0), (\pm 4\pi, 0), \ldots\text{,}\) the pendulum is hanging downward, and the Jacobian matrix becomes

\begin{equation*} J_1 = \begin{pmatrix} 0 & 1 \\ -g/L & - b/m \end{pmatrix}. \end{equation*}

On the other hand, if \((\theta, v) = (\pm \pi, 0), (\pm 3\pi, 0), (\pm 5\pi, 0), \ldots\text{,}\) the pendulum is at the top of its swing, and the Jacobian matrix is

\begin{equation*} J_2 = \begin{pmatrix} 0 & 1 \\ g/L & - b/m \end{pmatrix}. \end{equation*}
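Both Jacobian matrices are easy to verify by machine. Here is a short SymPy sketch (an illustration, not part of the text) that differentiates the right-hand side of the system and substitutes the two families of equilibrium points.

```python
import sympy as sp

theta, v = sp.symbols('theta v')
b, m, g, L = sp.symbols('b m g L', positive=True)

F = sp.Matrix([v, -(b/m)*v - (g/L)*sp.sin(theta)])
J = F.jacobian(sp.Matrix([theta, v]))

print(J.subs({theta: 0, v: 0}))       # J1: Matrix([[0, 1], [-g/L, -b/m]])
print(J.subs({theta: sp.pi, v: 0}))   # J2: Matrix([[0, 1], [g/L, -b/m]])
```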

We will first examine the case where the pendulum is hanging downward. The characteristic polynomial of \(J_1\) is

\begin{equation*} \lambda^2 + \frac{b}{m} \lambda + \frac{g}{L}. \end{equation*}

Thus, the eigenvalues of \(J_1\) are

\begin{equation} \lambda = -\frac{b}{2m} \pm \sqrt{\left( \frac{b}{2m} \right)^2 - \frac{g}{L} }.\tag{5.3.3} \end{equation}

We can analyze the nature of these eigenvalues by examining the sign of

\begin{equation*} \Delta = \left( \frac{b}{2m} \right)^2 - \frac{g}{L}. \end{equation*}
  • If \(\Delta \lt 0\text{,}\) the eigenvalues of the Jacobian are complex. Since the real part of (5.3.3) is negative, these equilibrium solutions are spiral sinks.

  • If \(\Delta \gt 0\text{,}\) we have two distinct real eigenvalues. Since

    \begin{equation*} \left( \frac{b}{2m} \right)^2 - \frac{g}{L} \lt \left( \frac{b}{2m} \right)^2, \end{equation*}
    we know that
    \begin{equation*} \left( \frac{b}{2m} \right) \gt \sqrt{\left( \frac{b}{2m} \right)^2 - \frac{g}{L} }. \end{equation*}
    Thus, both of our eigenvalues must be negative. Therefore, we have a nodal sink.

  • If \(\Delta = 0\text{,}\) we have a single real negative eigenvalue. Thus, we also have a sink.

Consequently, if we assume that \(b\) is small, then \(\Delta \lt 0\) and we will only have spiral sinks (Figure 5.3.1).

Figure 5.3.1. Nullclines of the damped pendulum (solution curves for the damped pendulum on a direction field of slope arrows).
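To see the spiral sink concretely, we can plug in sample numbers. The values \(g/L = 9.8\) and \(b/m = 0.5\) below are illustrative choices, not values taken from the text; any sufficiently small damping coefficient gives the same qualitative picture.

```python
import numpy as np

g_over_L = 9.8     # illustrative value of g/L
b_over_m = 0.5     # small damping, so Delta < 0

Delta = (b_over_m/2)**2 - g_over_L
J1 = np.array([[0.0, 1.0],
               [-g_over_L, -b_over_m]])

print(Delta)                    # negative, so complex eigenvalues
print(np.linalg.eigvals(J1))    # approximately -0.25 +/- 3.12j: a spiral sink
```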

Now let us consider the type of equilibrium solutions that we will obtain when the pendulum is standing upright. These solutions will occur at \((\theta, v) = (\pm \pi, 0), (\pm 3\pi, 0), (\pm 5\pi, 0), \ldots\text{.}\) The characteristic polynomial of the Jacobian matrix \(J_2\) at these points is

\begin{equation*} \lambda^2 + \frac{b}{m} \lambda - \frac{g}{L}; \end{equation*}

hence, the eigenvalues of \(J_2\) are

\begin{equation} \lambda = -\frac{b}{2m} \pm \sqrt{\left( \frac{b}{2m} \right)^2 + \frac{g}{L} }.\tag{5.3.4} \end{equation}

Furthermore, we have distinct real eigenvalues since

\begin{equation*} \left( \frac{b}{2m} \right)^2 + \frac{g}{L} \gt 0. \end{equation*}

In fact, the eigenvalues will have opposite sign

\begin{align*} \lambda_1 & = -\frac{b}{2m} + \sqrt{\left( \frac{b}{2m} \right)^2 + \frac{g}{L} } \gt 0\\ \lambda_2 & = -\frac{b}{2m} - \sqrt{\left( \frac{b}{2m} \right)^2 + \frac{g}{L} } \lt 0. \end{align*}

Thus, the equilibrium solutions are saddles.
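With the same illustrative parameter values as before (again, chosen only for demonstration), the eigenvalues of \(J_2\) indeed have opposite signs, confirming the saddle.

```python
import numpy as np

g_over_L, b_over_m = 9.8, 0.5    # illustrative values, as before
J2 = np.array([[0.0, 1.0],
               [g_over_L, -b_over_m]])
print(np.linalg.eigvals(J2))     # approximately 2.89 and -3.39: opposite signs
```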

Subsection 5.3.2 Lyapunov Functions

The function

\begin{equation*} H(\theta, v) = \frac{1}{2} v^2 - \frac{g}{L} \cos \theta \end{equation*}

is a Hamiltonian function for the ideal pendulum

\begin{align*} \frac{d \theta}{dt} & = v\\ \frac{dv}{dt} & = - \frac{g}{L} \sin \theta, \end{align*}

since \(d H/ dt = 0\text{.}\) However, if \(b \gt 0\text{,}\) then our system

\begin{align*} \frac{d \theta}{dt} & = v\\ \frac{dv}{dt} & = - \frac{b}{m} v - \frac{g}{L} \sin \theta. \end{align*}

has damping and

\begin{align*} \frac{dH}{dt} & =\frac{\partial H}{\partial \theta} \frac{d \theta}{dt} + \frac{\partial H}{\partial v} \frac{dv}{dt}\\ & = \left( \frac{g}{L} \sin \theta \right) v + v\left( -\frac{b}{m} v - \frac{g}{L} \sin \theta \right)\\ & = - \frac{b}{m} v^2 \leq 0. \end{align*}

Thus, \(H\) is decreasing whenever \(v \neq 0\text{.}\) Hence, solution curves in the \(\theta v\)-plane cross the level sets of \(H\) moving from larger to smaller \(H\) values.

We can now devise a strategy for sketching the phase plane of the damped pendulum. If \(b/m\) and \(v\) are both small, the value of \(H\) decreases slowly along the solutions (Figure 5.3.1).
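We can also watch \(H\) decrease along a numerically computed solution. The sketch below uses SciPy's solve_ivp with illustrative parameters and an arbitrary initial condition; none of these numbers come from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

g_over_L, b_over_m = 9.8, 0.5           # illustrative parameters

def damped_pendulum(t, u):
    theta, v = u
    return [v, -b_over_m*v - g_over_L*np.sin(theta)]

sol = solve_ivp(damped_pendulum, [0, 20], [2.5, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

t = np.linspace(0, 20, 500)
theta, v = sol.sol(t)
H = 0.5*v**2 - g_over_L*np.cos(theta)
print(np.all(np.diff(H) <= 1e-6))       # True: H never increases along the solution
```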

The function \(H\) in the case of the damped pendulum is an example of a Lyapunov function. Specifically, a function \(L(x, y)\) is called a Lyapunov function for the system

\begin{align*} \frac{dx}{dt} & = f(x, y)\\ \frac{dy}{dt} & = g(x, y) \end{align*}

if for every solution of the system, \((x(t), y(t))\text{,}\) that is not an equilibrium solution of the system,

\begin{equation*} \frac{dL}{dt} \leq 0, \end{equation*}

with strict inequality except possibly for a discrete set of values of \(t\).

As an example, let us return to the damped harmonic oscillator

\begin{align*} \frac{dy}{dt} & = v\\ \frac{dv}{dt} & = - qy - pv. \end{align*}

If \(p = 0\text{,}\) then

\begin{equation*} H(y, v) = \frac{1}{2} v^2 + \frac{q}{2} y^2 \end{equation*}

is a Hamiltonian function for our system. Recall that we also call \(H\) the energy function of the system. However, if \(p \gt 0\) and \((y(t), v(t))\) is a solution for our system, we have

\begin{align*} \frac{dH}{dt}(y, v) & = \frac{\partial H}{\partial y} \frac{dy}{dt} + \frac{\partial H}{\partial v} \frac{dv}{dt}\\ & = (qy)v + v(-qy - pv)\\ & = -pv^2 \leq 0. \end{align*}

Consequently, \(H(y(t), v(t))\) decreases at a nonzero rate (except when \(v = 0\)), and \(H\) is a Lyapunov function. The level sets of \(H\) are ellipses in the \(yv\)-plane. As \(H\) decreases, the energy dissipates and the solution curves cross these ellipses spiraling in toward the origin, which is a spiral sink.
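The same computation can be checked symbolically; a brief SymPy sketch (again illustrative, not part of the text):

```python
import sympy as sp

y, v = sp.symbols('y v')
p, q = sp.symbols('p q', positive=True)

H = v**2/2 + q*y**2/2
dy_dt, dv_dt = v, -q*y - p*v

dH_dt = sp.diff(H, y)*dy_dt + sp.diff(H, v)*dv_dt
print(sp.simplify(dH_dt))     # -p*v**2, which is nonpositive
```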

It is easy to see that the system

\begin{align} \frac{dx}{dt} & = y + \alpha x(x^2 + y^2)\tag{5.3.5}\\ \frac{dy}{dt} & = -x + \alpha y(x^2 + y^2)\tag{5.3.6} \end{align}

has an equilibrium solution at the origin no matter what the value of \(\alpha\) is. The Jacobian of this system is

\begin{equation*} J = \begin{pmatrix} 3 \alpha x^2 + \alpha y^2 & 1 + 2 \alpha xy \\ -1 + 2 \alpha x y & \alpha x^2 + 3 \alpha y^2 \end{pmatrix}. \end{equation*}

Since our equilibrium solution is the origin,

\begin{equation*} J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \end{equation*}

and the linearization of our system at the origin is

\begin{align*} \frac{dx}{dt} & = y\\ \frac{dy}{dt} & = -x. \end{align*}

Since the eigenvalues of

\begin{equation*} J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \end{equation*}

are \(\pm i\text{,}\) the linearization has a center at the origin. The phase plane consists of circles about the origin (Figure 5.3.2). Notice that the linearization does not depend on \(\alpha\text{.}\)

Figure 5.3.2. Solution curves for the linearization (a closed circular solution curve on a direction field of slope arrows).

Now let us consider what happens to system (5.3.5)–(5.3.6) for different values of \(\alpha\text{.}\) If \(\alpha = 5\text{,}\) the situation is quite different from the linearization of our system. A solution curve spirals out from the origin as \(t \to \infty\) (Figure 5.3.3). As \(t \to - \infty\text{,}\) the solution curve spirals back into the origin, but it seems to stop before actually reaching the origin. If \(\alpha = -5\text{,}\) on the other hand, we seem to have the opposite behavior, with the solution curves spiraling into the origin as \(t \to \infty\text{.}\) As before, the solutions do not seem to reach the origin (Figure 5.3.4).

Figure 5.3.3. Solution curves for \(\alpha = 5\) (a solution curve with a downward trajectory on a direction field of slope arrows).
Figure 5.3.4. Solution curves for \(\alpha = -5\) (a solution curve with an upward trajectory on a direction field of slope arrows).

Suppose that \((x(t), y(t))\) is a solution to the nonlinear system. The function

\begin{equation*} r(t) = \sqrt{x(t)^2 + y(t)^2} \end{equation*}

is the distance of a point on the solution curve to the origin in the \(xy\)-plane. To see how \(r\) changes as \(t \to \pm \infty\text{,}\) we can compute the derivative of \(r\text{.}\) Actually, it is easier to work with the equation \(r(t)^2 = x(t)^2 + y(t)^2\text{.}\) Thus,

\begin{align*} \frac{d}{dt} r^2 & = \frac{d}{dt} (x(t)^2 + y(t)^2)\\ & = 2 x \frac{dx}{dt} + 2y \frac{dy}{dt}\\ & = 2x (y + \alpha x(x^2 + y^2) ) + 2y( -x + \alpha y(x^2 + y^2))\\ & = 2 \alpha (x^2 + y^2)^2\\ & = 2 \alpha r^4. \end{align*}

Since \((r^2)' = 2r r'\text{,}\) we have

\begin{equation} r' = \alpha r^3\tag{5.3.7} \end{equation}

for \(r \neq 0\text{.}\)

Equation (5.3.7) is separable, and it is quite easy to determine the solution as

\begin{equation*} r(t) = \frac{1}{\sqrt{C - 2 \alpha t}}. \end{equation*}

However, we do not need to know this solution to determine the nature of the equilibrium solution at the origin. If \(\alpha \gt 0\text{,}\) equation (5.3.7) tells us that \(r' \gt 0\text{,}\) so \(r(t)\) increases and solutions of the system (5.3.5)–(5.3.6) spiral away from the origin; as \(t \to -\infty\text{,}\) we have \(r(t) \to 0\text{,}\) and the origin is a spiral source. Similarly, if \(\alpha \lt 0\text{,}\) then \(r(t)\) decreases and \(r(t) \to 0\) as \(t \to \infty\text{,}\) so the origin is a spiral sink. This is exactly the behavior we observed for \(\alpha = 5\) and \(\alpha = -5\text{.}\) Even though linearization fails to tell us the nature of the equilibrium solution at the origin, we were able to determine the nature of the equilibrium solution with further analysis.
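Both the differential equation for \(r\) and its solution are easy to confirm with SymPy; the following sketch (illustrative, not part of the text) checks that \(r(t) = 1/\sqrt{C - 2\alpha t}\) satisfies \(r' = \alpha r^3\text{.}\)

```python
import sympy as sp

t, C, alpha = sp.symbols('t C alpha')
r = 1/sp.sqrt(C - 2*alpha*t)

# r'(t) - alpha*r^3 should simplify to zero
print(sp.simplify(sp.diff(r, t) - alpha*r**3))   # 0
```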

We will now try to exploit what we have learned from our last example and from Hamiltonian systems to see if it is possible to analyze more general systems. If we consider solutions, \((x(t), y(t))\text{,}\) of the system

\begin{align*} \frac{dx}{dt} & = f(x, y)\\ \frac{dy}{dt} & = g(x, y), \end{align*}

we might ask how a function \(V(x, y)\) varies along the solution curve. We already have an answer if our system is Hamiltonian, and \(V\) is the corresponding Hamiltonian function. In this case \(dV/dt = 0\text{.}\) In general, we know that

\begin{align*} \frac{dV}{dt} (x(t), y(t)) & = \frac{\partial V}{\partial x} \frac{dx}{dt} + \frac{\partial V}{\partial y} \frac{dy}{dt}\\ & =\frac{\partial V}{\partial x} f(x, y) + \frac{\partial V}{\partial y} g(x, y). \end{align*}

Thus, if we let

\begin{equation*} \dot{V} =\frac{\partial V}{\partial x} f(x, y) + \frac{\partial V}{\partial y} g(x, y), \end{equation*}

we know that

\begin{equation*} \frac{dV}{dt}(x(t), y(t)) = \dot{V} (x(t), y(t)). \end{equation*}

Thus, \(V\) is increasing along a solution curve if \(\dot{V}(x, y) \gt 0\) and decreasing along a solution curve if \(\dot{V}(x, y) \lt 0\text{.}\) Our example suggests that we can determine this information without finding the solution.

Recall that the gradient of a function \(V: {\mathbb R}^2 \rightarrow {\mathbb R}\) is

\begin{equation*} \nabla V = \left(\frac{\partial V}{\partial x}, \frac{\partial V}{\partial y}\right). \end{equation*}

If

\begin{equation*} {\mathbf F}(x, y) = \begin{pmatrix} f(x, y) \\ g(x, y) \end{pmatrix}, \end{equation*}

then

\begin{equation*} \dot{V} = \nabla V \cdot {\mathbf F}, \end{equation*}

is the directional derivative of \(V\) in the direction of \({\mathbf F}\text{.}\)
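Computing \(\dot{V}\) is a purely mechanical task that we can hand off to a computer algebra system. The helper below (an illustrative sketch, not part of the text) forms \(\nabla V \cdot {\mathbf F}\) and recovers the fact that a Hamiltonian function is constant along solutions of the ideal pendulum.

```python
import sympy as sp

theta, v = sp.symbols('theta v')
g, L = sp.symbols('g L', positive=True)

def V_dot(V, F, state):
    """Directional derivative of V along the vector field F."""
    return sp.simplify(sum(sp.diff(V, s)*Fi for s, Fi in zip(state, F)))

# For the ideal (undamped) pendulum, H is a Hamiltonian function, so dH/dt = 0.
H = v**2/2 - (g/L)*sp.cos(theta)
F_ideal = (v, -(g/L)*sp.sin(theta))
print(V_dot(H, F_ideal, (theta, v)))     # 0
```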

Let us use this new information about \(V\) to obtain information about equilibrium solutions of our system. We do know that \(V(x, y)\) graphs as a surface in \({\mathbb R}^3\) and

\begin{equation*} V(x, y) = \text{constant} \end{equation*}

gives the contour lines or level curves of the surface in the \(xy\)-plane. [1] We also know that the gradient of \(V\) points in the direction that \(V\) is increasing the fastest and that the gradient is orthogonal to the level curves of \(V\text{.}\) Thus, if \(\dot{V}(x, y) \gt 0\text{,}\) we know that \(V\) is increasing in the direction of the vector field \({\mathbf F}\) and the elevation of the solution curve through \((x, y)\) in \({\mathbb R}^3\) is increasing. That is, the solution curve is traveling uphill. Similarly, if \(\dot{V}(x, y) \lt 0\text{,}\) we know that the solution curve at \((x, y)\) is going downhill. [2]

Now suppose that \(V\) is a real-valued function defined on a set \(S\) in the \(xy\)-plane, where the point \({\mathbf x}_0 = (x_0, y_0)\) is in \(S\) and \(V({\mathbf x}_0) = 0\text{.}\) We say that \(V\) is positive definite if \(V({\mathbf x}) \gt 0\) for all \({\mathbf x}\) in \(S\) with \({\mathbf x} \neq {\mathbf x}_0\text{,}\) and \(V\) is positive semidefinite if \(V({\mathbf x}) \geq 0\) for all \({\mathbf x}\) in \(S\text{.}\) Similarly, we say that \(V\) is negative definite and negative semidefinite if \(V({\mathbf x}) \lt 0\) and \(V({\mathbf x}) \leq 0\text{,}\) respectively.

For example, if we consider the system for a harmonic oscillator

\begin{align*} y' & = v\\ v' & = -\frac{k}{m} y - \frac{b}{m} v, \end{align*}

then the energy function,

\begin{equation*} E(y, v) = \frac{1}{2} mv^2 + \frac{1}{2} ky^2, \end{equation*}

is positive definite.

Theorem 5.3.5. Lyapunov.

Suppose that the system

\begin{align*} \frac{dx}{dt} & = f(x, y)\\ \frac{dy}{dt} & = g(x, y) \end{align*}

has an equilibrium solution at \((x_0, y_0)\text{.}\) Let \(V\) be a continuously differentiable function defined on a neighborhood \(U\) of \((x_0, y_0)\) that is positive definite with minimum at \((x_0, y_0)\text{.}\)

  1. If \(\dot{V}\) is negative semidefinite on \(U\text{,}\) then \((x_0, y_0)\) is a stable equilibrium solution. That is, any solution that starts near the equilibrium solution will stay near the equilibrium solution.

  2. If \(\dot{V}\) is negative definite on \(U\text{,}\) then \((x_0, y_0)\) is an asymptotically stable equilibrium solution or a sink.

Recall our example,

\begin{align*} \frac{dx}{dt} & = y + \alpha x(x^2 + y^2)\\ \frac{dy}{dt} & = -x + \alpha y(x^2 + y^2). \end{align*}

The function \(V(x, y) = x^2 + y^2\) is positive definite on \({\mathbb R}^2\text{,}\) with an isolated minimum at the origin. We can compute

\begin{align*} \dot{V}& = \frac{\partial V}{\partial x}(x, y) f(x, y) + \frac{\partial V}{\partial y}(x, y) g(x, y)\\ & = 2x (y + \alpha x(x^2 + y^2) ) + 2y( -x + \alpha y(x^2 + y^2))\\ & = 2 \alpha (x^2 + y^2)^2. \end{align*}

If \(\alpha \lt 0\text{,}\) then \(\dot{V}\) is negative definite on \({\mathbb R}^2\text{.}\) Theorem 5.3.5 tells us that the origin is an asymptotically stable equilibrium solution, a sink.
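The computation of \(\dot{V}\) for this example is easy to reproduce symbolically; a short sketch (illustrative, not part of the text):

```python
import sympy as sp

x, y, alpha = sp.symbols('x y alpha')
V = x**2 + y**2
f = y + alpha*x*(x**2 + y**2)
g = -x + alpha*y*(x**2 + y**2)

V_dot = sp.factor(sp.diff(V, x)*f + sp.diff(V, y)*g)
print(V_dot)     # 2*alpha*(x**2 + y**2)**2: negative definite when alpha < 0
```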

The function \(V\) in Theorem 5.3.5 is called a Lyapunov function. If we compare Theorem 5.3.5 to using linearization to determine the stability of an equilibrium solution, we find that we can apply this result where linearization fails. Moreover, a Lyapunov function is defined on a domain \(U\text{,}\) while linearization only tells us what happens on a small neighborhood around the equilibrium solution. Unfortunately, there are no general methods for finding Lyapunov functions.

Subsection 5.3.3 Gradient Systems

Let \(S\) be a real-valued function on the \(xy\)-plane. The gradient of \(S\) is

\begin{equation*} \nabla S = \left( \frac{\partial S}{\partial x}, \frac{\partial S}{\partial y} \right). \end{equation*}

The system

\begin{align*} \frac{dx}{dt} & = f(x, y)\\ \frac{dy}{dt} & = g(x, y) \end{align*}

is a gradient system if

\begin{align*} f(x, y) & = \frac{\partial S}{\partial x}\\ g(x, y) & = \frac{\partial S}{\partial y}. \end{align*}

For example, the system

\begin{align*} \frac{dx}{dt} & = x - x^3\\ \frac{dy}{dt} & = -y \end{align*}

is a gradient system, where

\begin{equation*} S(x, y) = \frac{x^2}{2} - \frac{x^4}{4} - \frac{y^2}{2} + 8. \end{equation*}

Now, let us see what happens on the solution curves of this gradient system. If \((x(t), y(t))\) is a solution curve,

\begin{equation*} \frac{dS}{dt} = \frac{\partial S}{\partial x} \frac{dx}{dt} + \frac{\partial S}{\partial y} \frac{dy}{dt} = (x - x^3)^2 + y^2 \geq 0. \end{equation*}

Thus, \(S\) increases at every point on the solution curve where the gradient of \(S\) is nonzero; that is, \(S\) increases at every point on the solution curve except at the equilibrium points.
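This example can be verified in a few lines of SymPy (an illustrative check, not part of the text): differentiating \(S\) recovers the right-hand sides of the system, and \(dS/dt\) along solutions is the squared length of the gradient.

```python
import sympy as sp

x, y = sp.symbols('x y')
S = x**2/2 - x**4/4 - y**2/2 + 8

f = sp.diff(S, x)
g = sp.diff(S, y)
print(f, g)                       # the right-hand sides x - x**3 and -y

dS_dt = f**2 + g**2               # dS/dt = S_x*(dx/dt) + S_y*(dy/dt) = S_x^2 + S_y^2
print(sp.expand(dS_dt))           # x**6 - 2*x**4 + x**2 + y**2, which is >= 0
```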

In general, suppose that

\begin{align*} \frac{dx}{dt} & = \frac{\partial S}{\partial x}\\ \frac{dy}{dt} & = \frac{\partial S}{\partial y} \end{align*}

is a gradient system. Since

\begin{equation*} \frac{dS}{dt} (x(t), y(t)) = \frac{\partial S}{\partial x} \frac{dx}{dt} + \frac{\partial S}{\partial y} \frac{dy}{dt} = \left( \frac{\partial S}{\partial x} \right)^2 + \left( \frac{\partial S}{\partial y} \right)^2 \geq 0, \end{equation*}

we know that \(S\) increases on every solution to the system except at the critical points of \(S(x(t), y(t))\text{.}\)

Let us see what this means in terms of the linearization of the system. The Jacobian matrix of \((f, g)\) is

\begin{equation*} J = \begin{pmatrix} f_x & f_y \\ g_x & g_y \end{pmatrix} = \begin{pmatrix} S_{xx} & S_{xy} \\ S_{yx} & S_{yy} \end{pmatrix}. \end{equation*}

Since \(S_{xy} = S_{yx}\text{,}\) the matrix \(J\) must have the form

\begin{equation*} J= \begin{pmatrix} \alpha & \beta \\ \beta & \gamma \end{pmatrix}, \end{equation*}

where \(\alpha = S_{xx}\text{,}\) \(\beta = S_{xy}\text{,}\) and \(\gamma = S_{yy}\text{.}\) The characteristic polynomial of \(J\) is

\begin{equation*} \lambda^2 - (\alpha + \gamma) \lambda + \alpha \gamma - \beta^2. \end{equation*}

Therefore, the eigenvalues are

\begin{equation*} \lambda = \frac{1}{2}( \alpha + \gamma) \pm \frac{1}{2} \sqrt{(\alpha - \gamma)^2 + 4 \beta^2}. \end{equation*}

Since we have real eigenvalues, a gradient system can have no spiral sources, spiral sinks, or centers.
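We can also let SymPy compute the eigenvalues of a general symmetric matrix of this form and confirm that the quantity under the square root is a sum of squares, so the eigenvalues are always real. This is a sketch for illustration only.

```python
import sympy as sp

alpha, beta, gamma = sp.symbols('alpha beta gamma', real=True)
J = sp.Matrix([[alpha, beta],
               [beta, gamma]])

for eigenvalue in J.eigenvals():
    print(sp.simplify(eigenvalue))
# Each eigenvalue equals (alpha + gamma)/2 +/- sqrt((alpha - gamma)**2 + 4*beta**2)/2,
# and the expression under the square root is nonnegative, so both eigenvalues are real.
```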

For example, \(x^2 + y^2\) is a Lyapunov function for the system

\begin{align*} \frac{dx}{dt} & = -x + y\\ \frac{dy}{dt} & = -x - y. \end{align*}

However, the origin is a spiral sink, so this system cannot be a gradient system.
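There is also a direct check that no such function \(S\) can exist: if \(f = \partial S/\partial x\) and \(g = \partial S/\partial y\text{,}\) then the equality of mixed partial derivatives forces \(\partial f/\partial y = \partial g/\partial x\text{.}\) The sketch below (illustrative, not part of the text) shows this condition failing for the system above, while confirming that \(x^2 + y^2\) is nevertheless a Lyapunov function.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = -x + y
g = -x - y

print(sp.diff(f, y), sp.diff(g, x))    # 1 and -1: f_y != g_x, so no such S exists
print(sp.simplify(2*x*f + 2*y*g))      # -2*x**2 - 2*y**2 <= 0: V = x^2 + y^2 is Lyapunov
```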

Subsection 5.3.4 Important Lessons

  • The system for the nonlinear pendulum with damping

    \begin{align*} \frac{d \theta}{dt} & = v\\ \frac{dv}{dt} & = - \frac{b}{m} v - \frac{g}{L} \sin \theta. \end{align*}
    can be analyzed by examining \(dH/dt\text{,}\) where \(H\) is the Hamiltonian function for the ideal pendulum. The function \(H\) for this system is an example of a Lyapunov function.

  • Let \(V\) be a real-valued function defined on a set \(S\) in the \(xy\)-plane such that the point \({\mathbf x}_0 = (x_0, y_0)\) is in \(S\) and \(V({\mathbf x}_0) = 0\text{.}\)

    • We say that \(V\) is positive definite if \(V({\mathbf x}) \gt 0\) for all \({\mathbf x}\) in \(S\text{,}\) where \({\mathbf x} \neq {\mathbf x}_0\text{.}\)

    • We say that \(V\) is positive semidefinite if \(V({\mathbf x}) \geq 0\) for all \({\mathbf x}\) in \(S\text{.}\)

    • We say that \(V\) is negative definite if \(V({\mathbf x}) \lt 0\) for all \({\mathbf x}\) in \(S\text{,}\) where \({\mathbf x} \neq {\mathbf x}_0\text{.}\)

    • We say that \(V\) is negative semidefinite if \(V({\mathbf x}) \leq 0\) for all \({\mathbf x}\) in \(S\text{.}\)

  • Suppose that the system

    \begin{align*} \frac{dx}{dt} & = f(x, y)\\ \frac{dy}{dt} & = g(x, y) \end{align*}

    has an equilibrium solution at \((x_0, y_0)\text{.}\) Let \(V\) be a continuously differentiable function defined on a neighborhood \(U\) of \((x_0, y_0)\) that is positive definite with minimum at \((x_0, y_0)\text{.}\)

    1. If \(\dot{V}\) is negative semidefinite on \(U\text{,}\) then \((x_0, y_0)\) is a stable equilibrium solution. That is, any solution that starts near the equilibrium solution will stay near the equilibrium solution.

    2. If \(\dot{V}\) is negative definite on \(U\text{,}\) then \((x_0, y_0)\) is an asymptotically stable equilibrium solution or a sink.

    We can use these results to analyze the behavior of equilibrium solutions where linearization fails. The function \(V\) is called a Lyapunov function. We have no general methods for finding Lyapunov functions.

  • The system

    \begin{align*} \frac{dx}{dt} & = f(x, y)\\ \frac{dy}{dt} & = g(x, y) \end{align*}
    is a gradient system if
    \begin{align*} f(x, y) & = \frac{\partial S}{\partial x}\\ g(x, y) & = \frac{\partial S}{\partial y}, \end{align*}
    where \(S\) is a real-valued function on the \(xy\)-plane. Since
    \begin{equation*} \frac{dS}{dt} (x(t), y(t)) = \left( \frac{\partial S}{\partial x} \right)^2 + \left( \frac{\partial S}{\partial y} \right)^2 \geq 0, \end{equation*}
    \(S\) increases on every solution to the system except at the critical points of \(S(x(t), y(t))\text{.}\) Since the eigenvalues of a gradient system are real, a gradient system has no spiral sources, spiral sinks, or centers.

Reading Questions 5.3.5 Reading Questions

1.

Explain in your own words what a Lyapunov function is.

2.

Explain in your own words what a gradient system is.

Exercises 5.3.6 Exercises

1.

This is an exercise.

[1] See Figures 1 and 2 in John Polking, Albert Boggess, and David Arnold, Differential Equations, Prentice Hall, Upper Saddle River, NJ, 2001, p. 611.
[2] The argument that we have made here also works in higher dimensions.