Math 104.01 (Linear Algebra)

Spring 1999

Plan for Week 5

We begin this week by consolidating what you learned in lab (and elsewhere) about invertible matrices. In one sense, this is the culmination of whatever you learned in high school about systems of linear equations. In all likelihood, you concentrated almost exclusively on (small) systems with the same number of equations as unknowns -- and most (perhaps all?) of the time you found unique solutions. This is an important situation in its own right, but it is also a backdrop against which we will later explore the importance of systems that do not have this property. Each condition that is equivalent to invertibility also tells us something about non-invertibility or singularity.

At midweek we backtrack to Chapter 1 and introduce the "functions" that play a central role throughout the semester: linear transformations. Just as calculus may be described as the study of properties of differentiable and integrable functions, this course may be described as the study of linear transformations. These really are functions in the usual sense, if we allow (as in Calculus III) vectors as elements of both domain and range. These functions are linear in the sense that all the formulas giving values of the functions are linear. If this seems too simple after studying nonlinear functions in calculus, keep in mind that we will often be dealing with more variables than ever appeared in calculus.
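To make "linear in the sense that all the formulas are linear" concrete, here is a small numerical sketch (the particular transformation and vectors are made up for illustration): a map from R^2 to R^2 whose output coordinates are linear formulas in the inputs, checked against the defining property of linearity.

```python
import numpy as np

# A hypothetical linear transformation T: R^2 -> R^2. Each output
# coordinate is a linear formula in x1 and x2 -- no added constants,
# no products or powers of the variables.
def T(x):
    x1, x2 = x
    return np.array([x1 + 2*x2, 3*x1 - x2])

# Linearity means T(c*u + d*v) = c*T(u) + d*T(v) for all vectors
# u, v and all scalars c, d. We spot-check one instance:
u = np.array([1.0, 4.0])
v = np.array([-2.0, 5.0])
c, d = 3.0, -1.5
lhs = T(c*u + d*v)
rhs = c*T(u) + d*T(v)
```

A numerical check like this can only confirm linearity for particular inputs, of course; showing it for all inputs is an algebra exercise.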

We will find that, for every linear transformation T, there is a matrix A such that T(x) = Ax for all x in the domain of T. Thus, every linear transformation can be evaluated at a vector x by multiplication by the matrix A. This casts our study of systems of equations Ax = b in a new light. For example, the question of whether the system is consistent is the same as asking whether b is in the range of T.

The connection between matrices and linear transformations leads to a natural interpretation of matrix multiplication: composition of the corresponding functions. That is, if A and B are the matrices of transformations T and S, respectively, then the product AB is the matrix of the transformation x --> T(S(x)). This coincides with our earlier approach of treating matrix multiplication as an extension of the idea of multiplying a matrix by a vector.

Matrices of appropriate sizes can be multiplied -- and the resulting products can (sometimes) be factored. For example, we saw in the Week 4 lab that we could express an inverse matrix as a product of elementary matrices. That is, we could factor the inverse into elementary factors. In this week's lab, we build on the same idea to factor a matrix A as the product of a lower triangular matrix L and an upper triangular matrix U. Roughly speaking, U is an echelon form (not reduced) of A, and L is built from the inverses of the elementary matrices used in the reduction. The LU decomposition is often used to construct efficient computational techniques for solving large systems of equations.

To see the syllabus for Week 5 in a separate window, click here.


Notes:
  1. Your next homework papers are due on Monday, Feb. 15. Those papers should include solutions to all problems in the assignment below. The assignment dates are start dates.
  2. Your textbook has answers in the back of the book for odd-numbered exercises. If you look there first, you will subvert the learning process. If you know your answers are correct, you will never need to look there. In general, no solution will be given full credit unless you have written an explanation of why you know it is correct. (Exceptions to this rule are the exercises whose numbers appear in parentheses.) For example, an acceptable explanation for a solution of a linear system of equations is that you have substituted the proposed solutions into the original equations and found that they satisfy all of the equations simultaneously -- and you have to show your work. Two examples of unacceptable explanations:
  3. Some exercises ask explicitly for an explanation -- in this case, you are not being asked to do more than the exercise calls for.
  4. Our syllabus does not include Sections 2.6 (iterative methods), 2.7 (an economic model), or 2.8 (computer graphics). However, you may want to read some or all of this material for glimpses of another practical approach (in addition to LU factorization) to solving linear systems and of important applications. We are also skipping 2.9 (subspaces), but the important ideas there will come up in another context in Chapter 4.

Assignments


David A. Smith <das@math.duke.edu>

Last modified: January 23, 1999