
Chapter R Representations

Previous work with linear transformations may have convinced you that we can convert most questions about linear transformations into questions about systems of equations or properties of subspaces of \(\complex{m}\text{.}\) In this section we begin to make these vague notions precise. We have used the word “representation” previously, but it will get a heavy workout in this chapter. In many ways, everything we have studied so far was in preparation for this chapter.

Annotated Acronyms R.

Definition VR.

Matrix representations build on vector representations, so this is the definition that gets us started. A representation depends on the choice of a single basis for the vector space. Theorem VRRB is what tells us this idea might be useful.
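
As a quick illustration (the basis here is my own choice, not one of the book's examples): relative to the basis \(B=\{1,\,x,\,x^2\}\) of \(P_2\), and writing \(\rho_B\) for vector representation relative to \(B\), a polynomial is represented by its column of coefficients,
\[
\rho_B\bigl(3+5x-2x^2\bigr)=\begin{bmatrix}3\\5\\-2\end{bmatrix}\in\complex{3}.
\]
A different basis for \(P_2\) would produce a different column, which is exactly the dependence on the choice of basis noted above.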

Theorem VRILT.

As an invertible linear transformation, vector representation allows us to translate, back and forth, between abstract vector spaces (\(V\)) and concrete vector spaces (\(\complex{n}\)). This is key to all our notions of representations in this chapter.
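
In symbols, the back-and-forth is just the pair of round trips below (nothing deeper than invertibility, but worth keeping in mind): for every \(v\in V\) and every \(x\in\complex{n}\),
\[
\rho_B^{-1}\bigl(\rho_B(v)\bigr)=v
\qquad\text{and}\qquad
\rho_B\bigl(\rho_B^{-1}(x)\bigr)=x.
\]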

Theorem CFDVS.

Every vector space with finite dimension “looks like” a vector space of column vectors. Vector representation is the isomorphism that establishes this correspondence.
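
A compact way to say it, with \(\cong\) read as “is isomorphic to”:
\[
\dim(V)=n\quad\Longrightarrow\quad V\cong\complex{n}.
\]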

Definition MR.

Building on the definition of a vector representation, we define a representation of a linear transformation, determined by a choice of two bases, one for the domain and one for the codomain. Notice that vectors are represented by columnar lists of scalars, while linear transformations are represented by rectangular tables of scalars. Building a matrix representation is as important a skill as row-reducing a matrix.
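
The construction is worth keeping at your fingertips. Roughly: if \(B=\{u_1,\,u_2,\,\ldots,\,u_n\}\) is the basis of the domain and \(C\) is the basis of the codomain, then column \(j\) of the representation is the vector representation, relative to \(C\), of the transformation applied to the \(j\)th basis vector,
\[
M^{T}_{B,C}=\bigl[\;\rho_C(T(u_1))\;\big|\;\rho_C(T(u_2))\;\big|\;\cdots\;\big|\;\rho_C(T(u_n))\;\bigr].
\]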

Theorem FTMR.

Definition MR is not really very interesting until we have this theorem. The second form tells us that we can compute outputs of linear transformations via matrix multiplication, along with some bookkeeping for vector representations. Searching forward through the text on “FTMR” is an interesting exercise. You will find reference to this result buried inside many key proofs at critical points, and it also appears in numerous examples and solutions to exercises.
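
In the chapter's notation, the two forms read (roughly)
\[
\rho_C\bigl(T(v)\bigr)=M^{T}_{B,C}\,\rho_B(v)
\qquad\text{and}\qquad
T(v)=\rho_C^{-1}\Bigl(M^{T}_{B,C}\,\rho_B(v)\Bigr),
\]
the second being the “bookkeeping” recipe: represent the input, multiply by the matrix, un-represent the output.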

Theorem MRCLT.

It turns out that matrix multiplication is a very natural operation: it is just the chaining together (composition) of functions (linear transformations). Beautiful. Even if you do not try to work the problem, study Solution T80.1 for more insight.
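
A sketch of the statement, with basis labels of my own choosing (\(A\) for the domain of \(T\), \(B\) for its codomain and the domain of \(S\), \(C\) for the codomain of \(S\)):
\[
M^{S\circ T}_{A,C}=M^{S}_{B,C}\,M^{T}_{A,B}.
\]
Composition of linear transformations on the left, matrix multiplication on the right.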

Theorem KNSI.

Kernels “are” null spaces. For this reason you will see these terms used interchangeably.
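
More precisely, writing \(\mathcal{K}(T)\) for the kernel and \(\mathcal{N}(\cdot)\) for a null space (notation chosen here for brevity), the theorem asserts an isomorphism rather than a literal equality:
\[
\mathcal{K}(T)\cong\mathcal{N}\bigl(M^{T}_{B,C}\bigr).
\]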

Theorem RCSI.

Ranges “are” column spaces. For this reason you will see these terms used interchangeably.
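
In the same spirit, with \(\mathcal{R}(T)\) for the range and \(\mathcal{C}(\cdot)\) for a column space:
\[
\mathcal{R}(T)\cong\mathcal{C}\bigl(M^{T}_{B,C}\bigr).
\]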

Theorem IMR.

Invertible linear transformations are represented by invertible (nonsingular) matrices.
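
The heart of the statement, in symbols (the placement of the basis subscripts is my reading; the essential point is that inverting the transformation inverts the matrix):
\[
M^{T^{-1}}_{C,B}=\Bigl(M^{T}_{B,C}\Bigr)^{-1}.
\]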

Theorem NME9.

The NMEx series has always been important, but we have held off saying so until now. This is the end of the line for this one, so it is a good time to contemplate all that it means.

Theorem SCB.

Diagonalization back in Section SD was really a change of basis to achieve a diagonal matrix representation. Maybe we should be highlighting the more general Theorem MRCB here, but its overly technical description just is not as appealing. However, it will be important in some of the matrix decompositions you will encounter in a future course in linear algebra.
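
A sketch of how the pieces line up, writing \(C_{B,C}\) for the change-of-basis matrix that converts representations relative to \(B\) into representations relative to \(C\) (check the placement of the inverse against the text, as it is the usual source of confusion):
\[
M^{T}_{C,C}=C_{B,C}\,M^{T}_{B,B}\,\bigl(C_{B,C}\bigr)^{-1}.
\]
So representations of one transformation relative to two bases are similar matrices, and a diagonal representation appears exactly when the basis consists of eigenvectors.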

Theorem EER.

This theorem, with the companion definition, Definition EELT, tells us that eigenvalues, and eigenvectors, are fundamentally a characteristic of linear transformations (not matrices). If you study matrix decompositions in a future course in linear algebra you will come to appreciate that almost all of a matrix's secrets can be unlocked with knowledge of the eigenvalues and eigenvectors.
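
Concretely, the correspondence runs roughly as follows: for \(T\colon V\to V\), a basis \(B\) of \(V\), and a nonzero \(v\in V\),
\[
T(v)=\lambda v
\quad\Longleftrightarrow\quad
M^{T}_{B,B}\,\rho_B(v)=\lambda\,\rho_B(v),
\]
so the eigenvalue \(\lambda\) belongs to \(T\) itself, while each choice of basis merely supplies a matrix (and a representation of the eigenvector) that witnesses it.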

Theorem OD.

Can you imagine anything nicer than an orthonormal diagonalization? A basis of pairwise orthogonal, unit norm, eigenvectors that provide a diagonal representation for a matrix? Here we learn just when this can happen — precisely when a matrix is normal, which is a disarmingly simple property to define.
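
For the record, and stated loosely (the text may phrase the conclusion in terms of the orthonormal basis rather than a factorization): a square matrix \(A\) is normal when
\[
A\,A^{\ast}=A^{\ast}A,
\]
and for exactly these matrices there is a unitary \(U\) and a diagonal \(D\) with \(U^{\ast}AU=D\), the columns of \(U\) being the promised orthonormal basis of eigenvectors.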