
Section 3.6 Changing Coordinates

In the previous sections of this chapter, we outlined procedures for solving systems of linear differential equations of the form
\begin{equation*} \begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = A \begin{pmatrix} x \\ y \end{pmatrix} \end{equation*}
by determining the eigenvalues of \(A\text{.}\) In this section we will consider the following special cases for \(A\text{,}\)
\begin{equation} \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}, \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}, \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}, \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}.\tag{3.6.1} \end{equation}
Although it may seem that we have limited ourselves by attacking only a very small part of the problem of finding solutions for \(\mathbf x' = A \mathbf x\text{,}\) we are actually very close to providing a complete classification of all solutions. We will now show that we can transform any \(2 \times 2 \) system of first-order linear differential equations with constant coefficients into one of these special systems by using a change of coordinates.

Subsection 3.6.1 Linear Maps

First, we need to add a few things to our knowledge of matrices and linear algebra. A linear map or linear transformation on \({\mathbb R}^2\) is a function \(T: {\mathbb R}^2 \to {\mathbb R}^2\) that is defined by a matrix. That is,
\begin{equation*} T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \end{equation*}
When there is no confusion, we will think of the linear map \(T: {\mathbb R}^2 \to {\mathbb R}^2\) and the matrix
\begin{equation*} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{equation*}
as interchangeable.
We will say that \(T: {\mathbb R}^2 \to {\mathbb R}^2\) is an invertible linear map if we can find a second linear map \(S\) such that \(T \circ S = S \circ T = I\text{,}\) where \(I\) is the identity transformation. In terms of matrices, this means that we can find a matrix \(S\) such that
\begin{equation*} TS = ST = I, \end{equation*}
where
\begin{equation*} I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{equation*}
is the \(2 \times 2\) identity matrix. We write \(T^{-1}\) for the inverse matrix of \(T\text{.}\) It is easy to check that the inverse of
\begin{equation*} T = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{equation*}
is
\begin{equation*} T^{-1} = \frac{1}{\det T} \begin{pmatrix} d & - b \\ -c & a \end{pmatrix}. \end{equation*}
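It is worth checking this formula directly. The short Python script below (using the SymPy library, an assumption of ours rather than part of the text) verifies symbolically that the proposed matrix is a two-sided inverse whenever \(\det T \neq 0\text{.}\)

import sympy as sp

a, b, c, d = sp.symbols("a b c d")
T = sp.Matrix([[a, b], [c, d]])
T_inv = sp.Matrix([[d, -b], [-c, a]]) / T.det()   # the proposed inverse

# Both products should simplify to the 2 x 2 identity matrix.
print(sp.simplify(T * T_inv))   # Matrix([[1, 0], [0, 1]])
print(sp.simplify(T_inv * T))   # Matrix([[1, 0], [0, 1]])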

Theorem 3.6.1.

A linear map \(T\) is invertible if and only if \(\det T \neq 0\text{.}\)

Proof.

If \(\det T = 0\text{,}\) then there are infinitely many nonzero vectors \({\mathbf x}\) such that \(T {\mathbf x} = {\mathbf 0}\text{.}\) Suppose that \(T^{-1}\) exists and that \({\mathbf x} \neq {\mathbf 0}\) is a vector such that \(T {\mathbf x} = {\mathbf 0}\text{.}\) Then
\begin{equation*} {\mathbf x} = T^{-1} T {\mathbf x} = T^{-1} {\mathbf 0} = {\mathbf 0}, \end{equation*}
which is a contradiction. On the other hand, we can certainly compute \(T^{-1}\text{,}\) at least in the \(2 \times 2\) case, if the determinant is nonzero.

Subsection 3.6.2 Changing Coordinates

In Subsection 3.1.2, we discussed bases and the coordinates of a vector with respect to a particular basis. The vectors \({\mathbf e}_1 = (1, 0)\) and \({\mathbf e}_2 = (0, 1)\) form a basis for \({\mathbb R}^2\text{.}\) Indeed, if \({\mathbf z} = (-5, -4)\text{,}\) then we can write
\begin{equation*} {\mathbf z} = -5 {\mathbf e}_1 - 4 {\mathbf e}_2. \end{equation*}
We say that the coordinates of \(\mathbf z\) with respect to the basis \(\{ {\mathbf e}_1, {\mathbf e}_2 \}\) are \((-5,-4)\text{.}\) Now consider the vectors \({\mathbf v}_1 = (2,1)\) and \({\mathbf v}_2 = (3, 2)\text{.}\) Since
\begin{equation*} \det \begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix} \neq 0, \end{equation*}
these vectors are linearly independent and form a different basis for \({\mathbb R}^2\text{.}\) If \({\mathbf z} = (-5, -4)\text{,}\) then we can write
\begin{equation*} {\mathbf z} = 2 {\mathbf v}_1 - 3 {\mathbf v}_2. \end{equation*}
The coordinates of \({\mathbf z}\) with respect to the basis \(\{ {\mathbf v}_1, {\mathbf v}_2 \}\) are \((2, -3)\text{.}\)
Suppose we wish to convert the coordinates with respect to one basis to a new set of coordinates with respect to a different basis; that is, we wish to do a change of coordinates. Observe that
\begin{align*} \mathbf v_1 & = 2 \mathbf e_1 + \mathbf e_2\\ \mathbf v_2 & = 3 \mathbf e_1 + 2 \mathbf e_2. \end{align*}
It follows that
\begin{align*} c_1 \mathbf v_1 + c_2 \mathbf v_2 & = c_1 (2 \mathbf e_1 + \mathbf e_2) + c_2( 3\mathbf e_1 + 2 \mathbf e_2)\\ & = (2c_1 + 3c_2 )\mathbf e_1 + (c_1 + 2c_2) \mathbf e_2. \end{align*}
Thus, the coordinates of \(c_1 \mathbf v_1 + c_2 \mathbf v_2\) with respect to the basis \(\{ {\mathbf e}_1, {\mathbf e}_2 \}\) can be determined by
\begin{equation*} \begin{pmatrix} 2c_1 + 3c_2 \\ c_1 + 2c_2 \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}. \end{equation*}
If we let
\begin{equation*} T = \begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix} \quad \text{and} \quad \mathbf c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}, \end{equation*}
then the coordinates with respect to the basis \(\{ {\mathbf e}_1, {\mathbf e}_2 \}\) are given by \(\mathbf d = T \mathbf c\text{.}\) If we are given the coordinates with respect to the basis \(\{ {\mathbf v}_1, {\mathbf v}_2 \}\) for a vector, we simply need to multiply by the matrix \(T\text{.}\)
Now suppose that we wish to find the coordinates of a vector \(\mathbf z = d_1 \mathbf e_1 + d_2 \mathbf e_2\) with respect to the basis \(\{ \mathbf v_1, \mathbf v_2\}\text{.}\) Since \(\mathbf d = T \mathbf c\text{,}\) we need only multiply both sides of the equation by \(T^{-1}\) to get \(\mathbf c = T^{-1} \mathbf d\text{.}\) In our example,
\begin{equation*} T^{-1} \mathbf d = \begin{pmatrix} 2 & - 3 \\ -1 & 2\end{pmatrix} \begin{pmatrix} d_1 \\ d_2 \end{pmatrix}. \end{equation*}
In particular,
\begin{equation*} T^{-1} \mathbf d = \begin{pmatrix} 2 & - 3 \\ -1 & 2\end{pmatrix} \begin{pmatrix} -5 \\ -4 \end{pmatrix} = \begin{pmatrix} 2 \\ -3 \end{pmatrix}, \end{equation*}
which are the coordinates of \(\mathbf z\) with respect to the basis \(\{ {\mathbf v}_1, {\mathbf v}_2 \}\text{.}\)
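The same computation is easy to carry out numerically. Here is a minimal sketch in Python using NumPy (our choice of tool); solving \(T \mathbf c = \mathbf d\) directly is numerically preferable to forming \(T^{-1}\) explicitly.

import numpy as np

T = np.array([[2.0, 3.0], [1.0, 2.0]])   # columns are v1 = (2, 1) and v2 = (3, 2)
d = np.array([-5.0, -4.0])               # coordinates of z with respect to {e1, e2}

c = np.linalg.solve(T, d)                # coordinates with respect to {v1, v2}
print(c)       # [ 2. -3.]
print(T @ c)   # [-5. -4.] -- multiplying by T converts back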

Subsection 3.6.3 Systems and Changing Coordinates

The idea now is to use a change of coordinates to convert an arbitrary system \({\mathbf x}' = A {\mathbf x}\) into one of the special systems mentioned at the beginning of the section (3.6.1), solve the new system, and then convert our new solution back to a solution of the original system using another change of coordinates.
Suppose that we consider a linear system
\begin{equation} {\mathbf y}' = (T^{-1} A T) {\mathbf y}\tag{3.6.2} \end{equation}
where \(T\) is an invertible matrix. If \({\mathbf y}(t)\) is a solution of (3.6.2), we claim that \({\mathbf x}(t) = T {\mathbf y}(t)\) solves the equation \({\mathbf x}' = A {\mathbf x}\text{.}\) Indeed,
\begin{align*} {\mathbf x}'(t) & = (T {\mathbf y})'(t)\\ & = T {\mathbf y}'(t)\\ & = T( (T^{-1} A T) {\mathbf y}(t))\\ & = A (T {\mathbf y}(t))\\ & = A {\mathbf x}(t). \end{align*}
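This claim is also easy to test numerically. The sketch below, in Python with NumPy and SciPy (our choice of tools; the matrices \(A\) and \(T\) are arbitrary illustrations), integrates both systems and confirms that \(\mathbf x(t)\) and \(T \mathbf y(t)\) agree up to integration error.

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 2.0], [4.0, 3.0]])
T = np.array([[1.0, 1.0], [2.0, -1.0]])
B = np.linalg.solve(T, A @ T)            # B = T^{-1} A T

y0 = np.array([1.0, 1.0])
t_eval = np.linspace(0.0, 1.0, 5)

sol_y = solve_ivp(lambda t, y: B @ y, (0.0, 1.0), y0,
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol_x = solve_ivp(lambda t, x: A @ x, (0.0, 1.0), T @ y0,
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)

# The difference should be tiny relative to the size of the solutions.
print(np.max(np.abs(sol_x.y - T @ sol_y.y)))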
We can think of this in two ways.
  1. A linear map \(T\) converts solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) to solutions of \({\mathbf x}' = A {\mathbf x}\text{.}\)
  2. The inverse of a linear map \(T\) takes solutions of \({\mathbf x}' = A {\mathbf x}\) to solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\text{.}\)
We are now in a position to solve our problem of finding solutions of an arbitrary linear system
\begin{equation*} \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \end{equation*}

Subsection 3.6.4 Distinct Real Eigenvalues

Consider the system \({\mathbf x}' = A {\mathbf x}\text{,}\) where \(A\) has two real, distinct eigenvalues \(\lambda_1\) and \(\lambda_2\) with eigenvectors \({\mathbf v}_1\) and \({\mathbf v}_2\text{,}\) respectively. Let \(T\) be the matrix with columns \({\mathbf v}_1\) and \({\mathbf v}_2\text{.}\) If \({\mathbf e}_1 = (1, 0)\) and \({\mathbf e}_2 = (0, 1)\text{,}\) then \(T {\mathbf e}_i = {\mathbf v}_i\) for \(i = 1, 2\text{.}\) Consequently, \(T^{-1} {\mathbf v}_i = {\mathbf e}_i\) for \(i = 1, 2\text{.}\) Thus, we have
\begin{align*} (T^{-1} A T) {\mathbf e}_i & = T^{-1} A {\mathbf v}_i\\ & = T^{-1} (\lambda_i {\mathbf v}_i)\\ & = \lambda_i T^{-1} {\mathbf v}_i\\ & = \lambda_i {\mathbf e}_i \end{align*}
for \(i = 1, 2\text{.}\) Therefore, the matrix \(T^{-1} A T\) is in canonical form,
\begin{equation*} T^{-1} A T = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}. \end{equation*}
The eigenvalues of the matrix \(T^{-1} A T\) are \(\lambda_1\) and \(\lambda_2\) with eigenvectors \((1, 0)\) and \((0, 1)\text{,}\) respectively. Thus, the general solution of
\begin{equation*} {\mathbf y}' = (T^{-1}AT) {\mathbf y} \end{equation*}
is
\begin{equation*} {\mathbf y}(t) = \alpha e^{\lambda_1 t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{\lambda_2 t} \begin{pmatrix} 0\\ 1 \end{pmatrix}. \end{equation*}
Hence, the general solution of
\begin{equation*} {\mathbf x}' = A {\mathbf x} \end{equation*}
is
\begin{align*} T {\mathbf y}(t) & = T \left( \alpha e^{\lambda_1 t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{\lambda_2 t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right)\\ & = \alpha e^{\lambda_1 t} T \begin{pmatrix} 1 \\ 0\end{pmatrix} + \beta e^{\lambda_2 t} T \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\ & = \alpha e^{\lambda_1 t} \mathbf v_1 + \beta e^{\lambda_2 t} \mathbf v_2. \end{align*}

Example 3.6.2.

Suppose \(d{\mathbf x}/dt = A {\mathbf x}\text{,}\) where
\begin{equation*} A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}. \end{equation*}
The eigenvalues of \(A\) are \(\lambda_1 = 5\) and \(\lambda_2 = -1\) and the associated eigenvectors are \((1, 2)\) and \((1, -1)\text{,}\) respectively. In this case, our matrix \(T\) is
\begin{equation*} \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix}. \end{equation*}
If \({\mathbf e}_1 = (1, 0)\) and \({\mathbf e}_2 = (0, 1)\text{,}\) then \(T {\mathbf e}_i = {\mathbf v}_i\) for \(i = 1, 2\text{.}\) Consequently, \(T^{-1} {\mathbf v}_i = {\mathbf e}_i\) for \(i = 1, 2\text{,}\) where
\begin{equation*} T^{-1} = \begin{pmatrix} 1/3 & 1/3 \\ 2/3 & -1/3 \end{pmatrix}. \end{equation*}
Thus,
\begin{equation*} T^{-1} A T = \begin{pmatrix} 1/3 & 1/3 \\ 2/3 & -1/3 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & -1 \end{pmatrix}. \end{equation*}
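We can confirm this computation with a few lines of SymPy (our choice of tool, not part of the text).

import sympy as sp

A = sp.Matrix([[1, 2], [4, 3]])
T = sp.Matrix([[1, 1], [2, -1]])   # columns are the eigenvectors (1, 2) and (1, -1)

print(T.inv())          # Matrix([[1/3, 1/3], [2/3, -1/3]])
print(T.inv() * A * T)  # Matrix([[5, 0], [0, -1]])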
The eigenvalues of the matrix
\begin{equation*} \begin{pmatrix} 5 & 0 \\ 0 & -1 \end{pmatrix} \end{equation*}
are \(\lambda_1 = 5\) and \(\lambda_2 = -1\) with eigenvectors \((1, 0)\) and \((0, 1)\text{,}\) respectively. Thus, the general solution of
\begin{equation*} {\mathbf y}' = (T^{-1}AT) {\mathbf y} \end{equation*}
is
\begin{equation*} {\mathbf y}(t) = \alpha e^{5t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{-t} \begin{pmatrix} 0\\ 1 \end{pmatrix}. \end{equation*}
Hence, the general solution of
\begin{equation*} {\mathbf x}' = A {\mathbf x} \end{equation*}
is
\begin{align*} T {\mathbf y}(t) & = \begin{pmatrix} 1 & 1 \\ 2 & -1 \end{pmatrix} \left( \alpha e^{5t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{-t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right)\\ & = \alpha e^{5t} \begin{pmatrix} 1 \\ 2 \end{pmatrix} + \beta e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}. \end{align*}
The linear map \(T\) converts the phase portrait of the system \({\mathbf y}' = (T^{-1}AT) {\mathbf y}\) (Figure 3.6.3) to the phase portrait of the system \({\mathbf x}' = A {\mathbf x}\) (Figure 3.6.4).
Figure 3.6.3. Phase portrait for \({\mathbf y}' = (T^{-1}AT) {\mathbf y}\) (a direction field of slope arrows and solution curves in each quadrant, with the solution curves approaching the horizontal and vertical axes for large values)
Figure 3.6.4. Phase portrait for \({\mathbf x}' = A {\mathbf x}\) (a direction field of slope arrows and solution curves that approach the straight-line solutions for large values)

Activity 3.6.1. Distinct Real Eigenvalues and Transformation of Coordinates.

Consider the system of linear differential equations \(d\mathbf x/dt = A \mathbf x\text{,}\) where
\begin{equation*} A = \begin{pmatrix} 1 & 3 \\ 1 & -1 \end{pmatrix}. \end{equation*}
(a)
Find the eigenvalues of \(A\text{.}\) You should find distinct real eigenvalues \(\lambda\) and \(\mu\text{.}\)
(b)
Find the general solution for \(d\mathbf x/dt = A \mathbf x\text{.}\)
(c)
Construct the \(2 \times 2\) matrix \(T = (\mathbf v_1, \mathbf v_2)\text{,}\) where \(\mathbf v_1\) and \(\mathbf v_2\) are eigenvectors associated with \(\lambda\) and \(\mu\text{,}\) respectively, and find \(T^{-1}\text{.}\)
(d)
Calculate \(T^{-1}AT\text{.}\) You should obtain the diagonal matrix
\begin{equation*} \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} \end{equation*}
with eigenvectors \(\mathbf e_1 = (1,0)\) and \(\mathbf e_2 = (0,1)\text{.}\)
(e)
The general solution of
\begin{equation*} {\mathbf y}' = (T^{-1}AT) {\mathbf y} \end{equation*}
is
\begin{equation*} {\mathbf y}(t) = \alpha e^{\lambda t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{\mu t} \begin{pmatrix} 0\\ 1 \end{pmatrix}. \end{equation*}
Now calculate \(T \mathbf y\) and compare this solution with the one that you obtained in Activity 3.2.1.

Subsection 3.6.5 Complex Eigenvalues

Suppose the matrix
\begin{equation*} A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{equation*}
in system \({\mathbf x}' = A {\mathbf x}\) has complex eigenvalues. In this case, the characteristic polynomial \(p(\lambda) = \lambda^2 - (a + d)\lambda + (ad - bc)\) will have roots \(\lambda = \alpha + i \beta\) and \(\overline{\lambda} = \alpha - i \beta\text{,}\) where
\begin{align*} \alpha & = \frac{a + d}{2}\\ \beta & = \frac{\sqrt{-(a - d)^2 - 4bc}}{2}. \end{align*}
The eigenvalues \(\lambda\) and \(\overline{\lambda}\) are complex conjugates. Now, suppose that the eigenvalue \(\lambda = \alpha + i \beta\) has an eigenvector of the form
\begin{equation*} \mathbf v = {\mathbf v}_ 1 + i {\mathbf v}_2, \end{equation*}
where \(\mathbf v_1\) and \(\mathbf v_2\) are real vectors. Then \(\overline{\mathbf v} = {\mathbf v}_ 1 - i {\mathbf v}_2\) is an eigenvector for \(\overline{\lambda}\text{,}\) since
\begin{equation*} A \overline{\mathbf v} = \overline{A \mathbf v} = \overline{\lambda \mathbf v} = \overline{\lambda} \overline{\mathbf v}. \end{equation*}
Consequently, if \(A\) is a real matrix with complex eigenvalues, one of the eigenvalues determines the other.
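Here is a quick numerical check of these formulas, written in Python with NumPy (our choice of tool) and using the matrix from Example 3.6.7 below.

import numpy as np

a, b, c, d = 0.0, 1.0, -0.5, -1.0        # entries of the matrix in Example 3.6.7
A = np.array([[a, b], [c, d]])

alpha = (a + d) / 2
beta = np.sqrt(-(a - d) ** 2 - 4 * b * c) / 2   # real, since the discriminant is negative

print(np.linalg.eigvals(A))                  # a conjugate pair, -0.5 +/- 0.5j
print(alpha + 1j * beta, alpha - 1j * beta)  # matches the computed eigenvalues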

Lemma 3.6.5.

If \(A\) is a real matrix with eigenvalue \(\lambda = \alpha + i \beta\) (\(\beta \neq 0\)) and associated eigenvector \({\mathbf v} = {\mathbf v}_1 + i {\mathbf v}_2\text{,}\) then the vectors \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.

Proof.

If \({\mathbf v}_1\) and \({\mathbf v}_2\) are not linearly independent, then \({\mathbf v}_1 = c {\mathbf v}_2\) for some \(c \in \mathbb R\text{.}\) On one hand, we have
\begin{equation*} A ({\mathbf v}_1 + i {\mathbf v}_2) = A (c {\mathbf v}_2 + i {\mathbf v}_2) = (c + i) A {\mathbf v}_2. \end{equation*}
However,
\begin{align*} A ({\mathbf v}_1 + i {\mathbf v}_2) & = (\alpha + i \beta) ( {\mathbf v}_1 + i {\mathbf v}_2)\\ & = (\alpha + i \beta) ( c + i) {\mathbf v}_2\\ & = ( c + i) (\alpha + i \beta) {\mathbf v}_2. \end{align*}
In other words, \(A {\mathbf v}_2 = (\alpha + i \beta) {\mathbf v}_2\text{.}\) However, this is a contradiction, since the left-hand side of the equation is a real vector while the right-hand side is complex. Thus, \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.

Lemma 3.6.6.

Let \(A\) be a real matrix with complex eigenvalue \(\lambda = \alpha + i \beta\) (\(\beta \neq 0\)) and associated eigenvector \({\mathbf v} = {\mathbf v}_1 + i {\mathbf v}_2\text{.}\) If \(T\) is the matrix with columns \({\mathbf v}_1\) and \({\mathbf v}_2\text{,}\) then
\begin{equation*} T^{-1} A T = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}. \end{equation*}

Proof.

Since \({\mathbf v}_1 + i {\mathbf v}_2\) is an eigenvector associated to the eigenvalue \(\alpha + i \beta\text{,}\) we have
\begin{equation*} A ( {\mathbf v}_1 + i {\mathbf v}_2) = (\alpha + i \beta) ({\mathbf v}_1 + i {\mathbf v}_2). \end{equation*}
Equating the real and imaginary parts, we find that
\begin{align*} A {\mathbf v}_1 & = \alpha {\mathbf v}_1 - \beta {\mathbf v}_2\\ A {\mathbf v}_2 & = \beta {\mathbf v}_1 + \alpha {\mathbf v}_2. \end{align*}
If \(T\) is the matrix with columns \({\mathbf v}_1\) and \({\mathbf v}_2\text{,}\) then
\begin{align*} T {\mathbf e}_1 & = {\mathbf v}_1\\ T {\mathbf e}_2 & = {\mathbf v}_2. \end{align*}
Thus, we have
\begin{equation*} (T^{-1} A T) {\mathbf e}_1 = T^{-1} (\alpha {\mathbf v}_1 - \beta {\mathbf v}_2) = \alpha {\mathbf e}_1 - \beta {\mathbf e}_2. \end{equation*}
Similarly,
\begin{equation*} (T^{-1} A T) {\mathbf e}_2 = \beta {\mathbf e}_1 + \alpha {\mathbf e}_2. \end{equation*}
Therefore, we can write the matrix \(T^{-1}A T\) as
\begin{equation*} T^{-1} AT = \begin{pmatrix} \alpha & \beta \\ - \beta & \alpha \end{pmatrix}. \end{equation*}
The system \({\mathbf y}' = (T^{-1} AT ) {\mathbf y}\) is in one of the canonical forms and has a phase portrait that is a spiral sink (\(\alpha \lt 0\)), a center (\(\alpha = 0\)), or a spiral source (\(\alpha \gt 0\)). Consequently, after a change of coordinates the phase portrait of \({\mathbf x}' = A {\mathbf x}\) is equivalent to a spiral sink, a center, or a spiral source.
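This classification by the sign of \(\alpha\) is easy to automate. Below is a minimal sketch in Python with NumPy; the function name and tolerance are our own choices rather than part of the text.

import numpy as np

def classify_complex(A, tol=1e-12):
    """Classify x' = Ax when A has complex eigenvalues alpha +/- i beta."""
    eigenvalues = np.linalg.eigvals(A)
    if abs(eigenvalues[0].imag) <= tol:
        raise ValueError("eigenvalues are real; a different canonical form applies")
    alpha = eigenvalues[0].real
    if alpha < -tol:
        return "spiral sink"
    if alpha > tol:
        return "spiral source"
    return "center"

print(classify_complex(np.array([[0.0, 1.0], [-0.5, -1.0]])))   # spiral sink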

Example 3.6.7.

Suppose that we wish to find the solutions of the second order equation
\begin{equation*} 2x'' + 2x' + x = 0. \end{equation*}
This particular equation might model a damped harmonic oscillator. If we rewrite this second-order equation as a first-order system, we have
\begin{align*} x' & = y\\ y' & = - \frac{1}{2} x - y, \end{align*}
or equivalently \(\mathbf x' = A \mathbf x\text{,}\) where
\begin{equation*} A = \begin{pmatrix} 0 & 1 \\ - 1/2 & - 1 \end{pmatrix}. \end{equation*}
The eigenvalues of \(A\) are
\begin{equation*} - \frac{1}{2} \pm i \frac{1}{2}. \end{equation*}
The eigenvalue \(\lambda = (-1 + i)/2\) has an eigenvector
\begin{equation*} \mathbf v = \begin{pmatrix} 2 \\ -1 + i \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix} + i \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \end{equation*}
Therefore, we can take \(T\) to be the matrix
\begin{equation*} T = \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix}. \end{equation*}
Consequently,
\begin{equation*} T^{-1} A T = \begin{pmatrix} 1/2 & 0 \\ 1/2 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1/2 & -1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} -1/2 & 1/2 \\ -1/2 & -1/2 \end{pmatrix}, \end{equation*}
which is in the canonical form
\begin{equation*} \begin{pmatrix} \alpha & \beta \\ - \beta & \alpha \end{pmatrix}. \end{equation*}
The general solution to \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) is
\begin{equation*} {\mathbf y}(t) = c_1 e^{-t/2} \begin{pmatrix} \cos(t/2) \\ -\sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} \sin(t/2) \\ \cos(t/2) \end{pmatrix}. \end{equation*}
The phase portrait of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) is given in Figure 3.6.8.
Figure 3.6.8. Phase portrait for \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) (a direction field of slope arrows and solution curves that spiral toward the origin)
The general solution of \({\mathbf x}' = A {\mathbf x}\) is
\begin{align*} T {\mathbf y}(t) & = \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix} \left[ c_1 e^{-t/2} \begin{pmatrix} \cos(t/2) \\ -\sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} \sin(t/2) \\ \cos(t/2) \end{pmatrix} \right]\\ & = c_1 e^{-t/2} \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} \cos(t/2) \\ -\sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} 2 & 0 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} \sin(t/2) \\ \cos(t/2) \end{pmatrix}\\ & = c_1 e^{-t/2} \begin{pmatrix} 2 \cos(t/2) \\ - \cos(t/2) - \sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} 2 \sin(t/2) \\ - \sin(t/2) + \cos(t/2) \end{pmatrix}. \end{align*}
The phase portrait for this system is given in Figure 3.6.9.
Figure 3.6.9. Phase portrait of \({\mathbf x}' = A {\mathbf x}\) (a direction field of slope arrows and solution curves that spiral toward the origin)

Remark 3.6.10.

Of course, we have a much more efficient way of solving the system \({\mathbf x}' = A {\mathbf x}\text{,}\) where
\begin{equation*} A = \begin{pmatrix} 0 & 1 \\ - 1/2 & - 1 \end{pmatrix}. \end{equation*}
Since \(A\) has eigenvalue \(\lambda = (-1 + i)/2\) with an eigenvector \(\mathbf v = (2, -1 + i)\text{,}\) we can apply Euler’s formula and write the solution as
\begin{align*} \mathbf x(t) & = e^{(-1 + i)t/2} \mathbf v\\ & = e^{-t/2} e^{it/2} \begin{pmatrix} 2 \\ -1 + i \end{pmatrix}\\ & = e^{-t/2} (\cos(t/2) + i \sin(t/2)) \begin{pmatrix} 2 \\ -1 + i \end{pmatrix}\\ & = e^{-t/2} \begin{pmatrix} 2 \cos(t/2) \\ - \cos(t/2) - \sin(t/2) \end{pmatrix} + i e^{-t/2} \begin{pmatrix} 2 \sin(t/2) \\ -\sin(t/2) + \cos(t/2) \end{pmatrix}. \end{align*}
Taking the real and the imaginary parts of the last expression, the general solution of \({\mathbf x}' = A {\mathbf x}\) is
\begin{equation*} \mathbf x(t) = c_1 e^{-t/2} \begin{pmatrix} 2 \cos(t/2) \\ - \cos(t/2) - \sin(t/2) \end{pmatrix} + c_2 e^{-t/2} \begin{pmatrix} 2 \sin(t/2) \\ - \sin(t/2) + \cos(t/2) \end{pmatrix}, \end{equation*}
which agrees with the solution that we found by transforming coordinates.
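As a final check, we can ask SymPy (our choice of tool) to verify symbolically that this general solution satisfies \({\mathbf x}' = A {\mathbf x}\text{.}\)

import sympy as sp

t, c1, c2 = sp.symbols("t c1 c2")
A = sp.Matrix([[0, 1], [sp.Rational(-1, 2), -1]])

x = (c1 * sp.exp(-t / 2) * sp.Matrix([2 * sp.cos(t / 2),
                                      -sp.cos(t / 2) - sp.sin(t / 2)])
     + c2 * sp.exp(-t / 2) * sp.Matrix([2 * sp.sin(t / 2),
                                        -sp.sin(t / 2) + sp.cos(t / 2)]))

# x' - A x should simplify to the zero vector.
print(sp.simplify(sp.diff(x, t) - A * x))   # Matrix([[0], [0]])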

Subsection 3.6.6 Repeated Eigenvalues

Now suppose that \(A\) has a single real eigenvalue \(\lambda\text{.}\) Then the characteristic polynomial \(p(\lambda) = \lambda^2 - (a + d)\lambda + (ad - bc)\) has a repeated root, and \(\lambda = (a + d)/2\text{.}\)

Proposition 3.6.11.

If \(A\) has a single real eigenvalue \(\lambda\) and two linearly independent eigenvectors, then
\begin{equation*} A = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}. \end{equation*}

Proof.

Suppose that \({\mathbf u}\) and \({\mathbf v}\) are linearly independent eigenvectors for \(A\text{,}\) and let \(T\) be the matrix whose first column is \({\mathbf u}\) and second column is \({\mathbf v}\text{.}\) That is, \(T {\mathbf e}_1 = {\mathbf u}\) and \(T{\mathbf e}_2 = {\mathbf v}\text{.}\) Since \({\mathbf u}\) and \({\mathbf v}\) are linearly independent, \(\det(T) \neq 0\) and \(T\) is invertible. So, it must be the case that
\begin{equation*} AT = (A {\mathbf u}, A {\mathbf v}) = (\lambda {\mathbf u}, \lambda {\mathbf v}) = \lambda ({\mathbf u}, {\mathbf v}) = \lambda IT, \end{equation*}
or
\begin{equation*} A = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}. \end{equation*}
In this case, the system is uncoupled and is easily solved. That is, we can solve each equation in the system
\begin{align*} x' & = \lambda x\\ y' & = \lambda y \end{align*}
separately to obtain the general solution
\begin{align*} x & = c_1 e^{\lambda t}\\ y & = c_2 e^{\lambda t}. \end{align*}

Proposition 3.6.12.

Suppose that \(A\) has a single real eigenvalue \(\lambda\) with only one linearly independent eigenvector \({\mathbf v}\text{.}\) Then there exists an invertible matrix \(T\) such that
\begin{equation*} T^{-1} A T = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}. \end{equation*}

Proof.

If \({\mathbf w}\) is another vector in \({\mathbb R}^2\) such that \({\mathbf v}\) and \({\mathbf w}\) are linearly independent, then \(A \mathbf w\) can be written as a linear combination of \(\mathbf v\) and \(\mathbf w\text{,}\)
\begin{equation*} A {\mathbf w} = \alpha {\mathbf v} + \beta {\mathbf w}. \end{equation*}
We can assume that \(\alpha \neq 0\text{;}\) otherwise, we would have a second linearly independent eigenvector. We claim that \(\beta = \lambda\text{.}\) If this were not the case, then
\begin{align*} A \left( {\mathbf w} + \left( \frac{\alpha}{\beta - \lambda} \right) {\mathbf v} \right) & = A {\mathbf w} + \left( \frac{\alpha}{\beta - \lambda} \right) A {\mathbf v}\\ & = \alpha {\mathbf v} + \beta {\mathbf w} + \lambda \left( \frac{\alpha}{\beta - \lambda} \right) {\mathbf v}\\ & = \beta {\mathbf w} + \alpha \left(1 + \frac{\lambda}{\beta - \lambda} \right) {\mathbf v}\\ & = \beta {\mathbf w} + \alpha \left( \frac{\beta - \lambda + \lambda}{\beta - \lambda} \right) {\mathbf v}\\ & = \beta \left( {\mathbf w} + \left( \frac{\alpha}{\beta - \lambda} \right) {\mathbf v} \right) \end{align*}
and \(\beta\) would be an eigenvalue distinct from \(\lambda\text{.}\) Thus, \(A {\mathbf w} = \alpha {\mathbf v} + \lambda {\mathbf w}\text{.}\) If we let \({\mathbf u} = (1/ \alpha) {\mathbf w}\text{,}\) then
\begin{equation*} A {\mathbf u} = {\mathbf v} + \frac{\lambda}{\alpha} {\mathbf w} = {\mathbf v} + \lambda {\mathbf u}. \end{equation*}
We now define \(T {\mathbf e}_1 = {\mathbf v}\) and \(T{\mathbf e}_2 = {\mathbf u}\text{.}\) Since
\begin{align*} A T {\mathbf e}_1 & = A {\mathbf v} = \lambda {\mathbf v} = T(\lambda {\mathbf e}_1)\\ A T {\mathbf e}_2 & = A {\mathbf u} = {\mathbf v} + \lambda {\mathbf u} = T({\mathbf e}_1 + \lambda {\mathbf e}_2), \end{align*}
the matrices \(AT\) and
\begin{equation*} T \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} \end{equation*}
agree on the basis vectors \({\mathbf e}_1\) and \({\mathbf e}_2\text{,}\) and
we have
\begin{equation*} T^{-1} A T = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}. \end{equation*}
Therefore, \({\mathbf x}' = A {\mathbf x}\) is in canonical form after a change of coordinates.

Example 3.6.13.

Consider the system \(\mathbf x' = A \mathbf x\text{,}\) where
\begin{equation*} A = \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix}. \end{equation*}
The characteristic polynomial of \(A\) is \(\lambda^2 - 6 \lambda + 9 = (\lambda - 3)^2\text{,}\) so we have only a single eigenvalue \(\lambda = 3\) with eigenvector \(\mathbf v = (1, -2)\text{.}\) Any other eigenvector for \(\lambda\) is a multiple of \(\mathbf v\text{.}\) If we choose \(\mathbf w = (1, 0)\text{,}\) then \(\mathbf v\) and \(\mathbf w\) are linearly independent. Furthermore,
\begin{equation*} A \mathbf w = \begin{pmatrix} 5 \\ - 4 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ -2 \end{pmatrix} + \lambda \begin{pmatrix} 1 \\ 0 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ -2 \end{pmatrix} + 3 \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \end{equation*}
So we can let \(\mathbf u = (1/2) \mathbf w = (1/2, 0)\text{.}\) Therefore, the matrix that we seek is
\begin{equation*} T = \begin{pmatrix} 1 & 1/2 \\ -2 & 0 \end{pmatrix}, \end{equation*}
and
\begin{equation*} T^{-1} A T = \begin{pmatrix} 0 & -1/2 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1/2 \\ -2 & 0\end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix}. \end{equation*}
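A short SymPy computation (our choice of tool) confirms this conjugation.

import sympy as sp

A = sp.Matrix([[5, 1], [-4, 1]])
T = sp.Matrix([[1, sp.Rational(1, 2)], [-2, 0]])

print(T.inv())          # Matrix([[0, -1/2], [2, 1]])
print(T.inv() * A * T)  # Matrix([[3, 1], [0, 3]])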
From Section 3.3, we know that the general solution to the system
\begin{equation*} \begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \end{equation*}
is
\begin{equation*} \mathbf y(t) = c_1 e^{3t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{3t} \begin{pmatrix} t \\ 1 \end{pmatrix}. \end{equation*}
Therefore, the general solution to
\begin{equation*} \begin{pmatrix} dx/dt \\ dy/dt \end{pmatrix} = \begin{pmatrix} 5 & 1 \\ -4 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \end{equation*}
is
\begin{align*} \mathbf x(t) & = T \mathbf y(t)\\ & = c_1 e^{3t} T \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{3t} T \begin{pmatrix} t \\ 1 \end{pmatrix}\\ & = c_1 e^{3t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} + c_2 e^{3t} \begin{pmatrix} 1/2 + t \\ -2t \end{pmatrix}. \end{align*}
This solution agrees with the solution that we found in Example 3.5.5.
In practice, we find solutions to linear systems using the methods that we outlined in Sections 3.2–3.4. What we have demonstrated in this section is that those solutions are exactly the ones that we want.

Subsection 3.6.7 Important Lessons

  • A linear map \(T\) is invertible if and only if \(\det T \neq 0\text{.}\)
  • A linear map \(T\) converts solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\) to solutions of \({\mathbf x}' = A {\mathbf x}\text{.}\)
  • The inverse of a linear map \(T\) takes solutions of \({\mathbf x}' = A {\mathbf x}\) to solutions of \({\mathbf y}' = (T^{-1} A T) {\mathbf y}\text{.}\)
  • A change of coordinates converts the system \({\mathbf x}' = A {\mathbf x}\) to one of the following special cases,
    \begin{equation*} \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}, \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}, \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}, \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}. \end{equation*}

Reading Questions 3.6.8 Reading Questions

1.

Explain what a change of coordinates is.

2.

Given a \(2 \times 2\) linear system, what are the possible types of solutions?

Exercises 3.6.9 Exercises

Diagonalizing Matrices with Distinct Real Eigenvalues.

For each of the matrices \(A\) in Exercise Group 3.6.9.1–6, find (1) the eigenvalues \(\lambda_1\) and \(\lambda_2\text{;}\) (2) an eigenvector \(\mathbf v_1\) and \(\mathbf v_2\) for \(\lambda_1\) and \(\lambda_2\text{,}\) respectively; and (3) construct the matrix \(T = (\mathbf v_1, \mathbf v_2)\) and calculate \(T^{-1}AT\text{.}\)
1.
\(\displaystyle A = \begin{pmatrix} 7 & 3 \\ -4 & 0 \end{pmatrix}\)
2.
\(\displaystyle A = \begin{pmatrix} -3 & -10 \\ 3 & 8 \end{pmatrix}\)
3.
\(\displaystyle A = \begin{pmatrix} 18 & 11 \\ -22 & -15 \end{pmatrix}\)
4.
\(\displaystyle A = \begin{pmatrix} -14 & -12 \\ 18 & 16 \end{pmatrix}\)
5.
\(\displaystyle A = \begin{pmatrix} 35/3 & 22 \\ -22/3 & -14 \end{pmatrix}\)
6.
\(\displaystyle A = \begin{pmatrix} 31/2 & 85/6 \\ -17 & -47/3 \end{pmatrix}\)

Matrices with Complex Eigenvalues.

For each of the matrices \(A\) in Exercise Group 3.6.9.7–12, find (1) an eigenvalue \(\lambda\text{;}\) (2) an eigenvector \(\mathbf v = \mathbf v_{\text{Re}} + i \mathbf v_{\text{Im}}\) for \(\lambda\text{;}\) and (3) construct the matrix \(T = (\mathbf v_{\text{Re}} , \mathbf v_{\text{Im}})\) and calculate \(T^{-1}AT\text{.}\) Compare your result to \(\lambda\text{.}\)
7.
\(\displaystyle A = \begin{pmatrix} 5 & 2 \\ -5 & -1 \end{pmatrix}\)
8.
\(\displaystyle A = \begin{pmatrix} 13 & 4 \\ -26 & -7 \end{pmatrix}\)
9.
\(\displaystyle A = \begin{pmatrix} -2 & -2 \\ 25 & 12 \end{pmatrix}\)
10.
\(\displaystyle A = \begin{pmatrix} -23/3 & -5 \\ 13 & 25/3 \end{pmatrix}\)
11.
\(\displaystyle A = \begin{pmatrix} 2 & 2 \\ -4 & 6 \end{pmatrix}\)
12.
\(\displaystyle A = \begin{pmatrix} -9 & 26 \\ -4 & 11 \end{pmatrix}\)

Matrices with Repeated Eigenvalues.

For each of the matrices \(A\) in Exercise Group 3.6.9.13–18, find (1) the eigenvalue \(\lambda\) and an eigenvector \(\mathbf v\) for \(\lambda\text{;}\) (2) choose a vector \(\mathbf w\) that is linearly independent of \(\mathbf v\) and compute \((A - \lambda I)\mathbf w\text{.}\) You should find that
\begin{equation*} (A - \lambda I)\mathbf w = \alpha \mathbf v \end{equation*}
for some real number \(\alpha\text{.}\) (3) Let \(\mathbf u = (1/\alpha) \mathbf w\) and form the matrix \(T = (\mathbf v, \mathbf u)\text{.}\) (4) Calculate \(T^{-1}AT\text{,}\) which should be in the form
\begin{equation*} \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}. \end{equation*}
13.
\(\displaystyle A = \begin{pmatrix} 7 & 4 \\ -9 & -5 \end{pmatrix}\)
14.
\(\displaystyle A = \begin{pmatrix} 4 & 4 \\ -9 & -8 \end{pmatrix}\)
15.
\(\displaystyle A = \begin{pmatrix} 4 & -4 \\ 1 & 8 \end{pmatrix}\)
16.
\(\displaystyle A = \begin{pmatrix} 6 & 1 \\ -4 & 2 \end{pmatrix}\)
17.
\(\displaystyle A = \begin{pmatrix} 3 & 1 \\ -2 & 0 \end{pmatrix}\)
18.
\(\displaystyle A = \begin{pmatrix} 1 & -2 \\ 1 & 3 \end{pmatrix}\)