We begin this week by consolidating what you learned in lab (and elsewhere) about invertible matrices. In one sense, this is the culmination of whatever you learned in high school about systems of linear equations. In all likelihood, you concentrated almost exclusively on (small) systems with the same number of equations as unknowns -- and most (perhaps all?) of the time you found unique solutions. This is an important situation in its own right, but it is also a backdrop against which we will later explore the importance of systems that do not have this property. Each condition that is equivalent to invertibility also tells us something about non-invertibility or singularity.
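The familiar high-school case can be sketched numerically. Here is a small illustration (not part of the course materials) using NumPy, with a made-up 2-by-2 system: because the coefficient matrix is invertible, the system has exactly one solution.

```python
import numpy as np

# Hypothetical square system:  x + 2y = 5,  3x + 4y = 6.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

# A nonzero determinant is one of the conditions equivalent to invertibility,
# so Ax = b has a unique solution.
assert np.linalg.det(A) != 0
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)   # the solution really satisfies the system
```

When the determinant is zero the matrix is singular, and `np.linalg.solve` raises an error instead -- the situation we will study later in the course.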
At midweek we backtrack to Chapter 1 and introduce the "functions" that play a central role throughout the semester: linear transformations. Just as calculus may be described as the study of properties of differentiable and integrable functions, this course may be described as the study of linear transformations. These really are functions in the usual sense, if we allow (as in Calculus III) vectors as elements of both domain and range. These functions are linear in the sense that all the formulas giving values of the functions are linear. If this seems too simple after studying nonlinear functions in calculus, keep in mind that we will often be dealing with more variables than ever appeared in calculus.
We will find that, for every linear transformation T, there is a matrix A such that T(x) = Ax for all x in the domain of T. Thus, every linear transformation can be evaluated at a vector x by multiplication by the matrix A. This casts our study of systems of equations Ax = b in a new light. For example, the question of whether the system is consistent is the same as asking whether b is in the range of T.
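The claim that evaluating T is the same as multiplying by A can be checked directly. The sketch below (with a made-up matrix, not one from the course) defines T by matrix multiplication and then uses the invertibility of A to find a vector that T maps to a given b, showing the system Ax = b is consistent for that b.

```python
import numpy as np

# Hypothetical matrix A of a linear transformation T from R^2 to R^2.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

def T(x):
    # Evaluating T at x is just multiplication by the matrix A.
    return A @ x

# Asking whether Ax = b is consistent is asking whether b is in the range of T.
# Here A is invertible, so every b is in the range, and solve finds a preimage.
b = np.array([4.0, 9.0])
x0 = np.linalg.solve(A, b)
assert np.allclose(T(x0), b)   # b = T(x0), so b is in the range of T
```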
The connection between matrices and linear transformations leads to a natural interpretation of matrix multiplication: composition of the corresponding functions. That is, if A and B are the matrices of transformations T and S, respectively, then the product AB is the matrix of the transformation x --> T(S(x)). This coincides with our earlier approach of treating matrix multiplication as an extension of the idea of multiplying a matrix by a vector.
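The composition fact is easy to verify numerically. In this sketch (random matrices, purely illustrative), applying S and then T to a vector agrees with multiplying once by the single matrix AB.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # matrix of the transformation T
B = rng.standard_normal((3, 3))   # matrix of the transformation S
x = rng.standard_normal(3)

# T(S(x)) computed in two steps equals multiplication by the product AB.
assert np.allclose(A @ (B @ x), (A @ B) @ x)
```

Note the order: AB corresponds to applying S first and T second, matching function composition rather than left-to-right reading.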
Matrices of appropriate sizes can be multiplied -- and the resulting products can (sometimes) be factored. For example, we saw in the Week 4 lab that we could express an inverse matrix as a product of elementary matrices. That is, we could factor the inverse into elementary factors. In this week's lab, we build on the same idea to factor a matrix A as the product of a lower triangular matrix L and an upper triangular matrix U. Roughly speaking, U is an echelon form (not reduced) of A, and L records the elementary row operations used in the reduction. The LU decomposition is often used to construct efficient computational techniques for solving large systems of equations.
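A minimal sketch of the idea, assuming no zero pivots arise so no row exchanges are needed (the example matrix is made up, not one from the lab). Each elimination step records its multiplier in L while reducing A toward the echelon form U.

```python
import numpy as np

def lu_no_pivot(A):
    """Factor A = LU with L unit lower triangular and U upper triangular.
    A sketch without row exchanges; it assumes every pivot is nonzero."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # multiplier from the row reduction
            U[i, :] -= L[i, k] * U[k, :]   # eliminate the entry below the pivot
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)        # the factorization reproduces A
assert np.allclose(np.triu(U), U)   # U is an (unreduced) echelon form of A
assert np.allclose(np.tril(L), L)   # L is lower triangular
```

In practice one uses a library routine with partial pivoting (for example, SciPy's `scipy.linalg.lu`), which handles the row exchanges this sketch ignores.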
To see the syllabus for Week 5 in a separate window, click here.
Last modified: January 23, 1999