Chapter 4, which we completed before Spring Break, was the central chapter of this course, bringing together all aspects of the underlying theory. Chapter 5, which will occupy most of the next three weeks, presents the most important applications of that theory.
We start with another look at the Markov Chain lab. In particular, we saw in the lab that, if M is a positive Markov transition matrix, then there is a special nonzero vector p such that Mp = p, i.e., a steady state vector. The vector p is also our first example of an eigenvector, and it corresponds to the eigenvalue 1.
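As a sketch of the idea (my own example, not part of the lab itself), repeated multiplication by a positive column-stochastic matrix M drives any starting probability vector toward the steady state vector p. The particular 2-state matrix below is hypothetical:

```python
# A hypothetical 2-state Markov transition matrix: entries positive,
# each column sums to 1.
M = [[0.9, 0.2],
     [0.1, 0.8]]

def mat_vec(M, x):
    """Multiply the 2x2 matrix M by the vector x."""
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

# Start from any probability vector and iterate: M^k x converges to p.
x = [1.0, 0.0]
for _ in range(100):
    x = mat_vec(M, x)

p = x  # approximately the steady state vector: mat_vec(M, p) is (nearly) p
```

For this M the steady state works out to p = (2/3, 1/3), and one can check directly that Mp = p.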
On several occasions we have danced around the idea of "special directions" -- vectors in the domain of a linear transformation T on which T acts in particularly simple ways. For example, we can describe very simply what T does to every vector x in its kernel: T(x) = 0. Any nonzero vector in the kernel is an eigenvector corresponding to the eigenvalue 0. Similarly, the equation Mp = p tells us that multiplication by M does something very simple to any vector in the direction p: it leaves such vectors unchanged.
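The eigenvalue-0 case can be seen in a tiny example of my own choosing (not from the course materials): a singular matrix sends every vector in its kernel to 0, so any nonzero kernel vector is an eigenvector for the eigenvalue 0.

```python
# M is singular: the second row is twice the first.
M = [[1, 2],
     [2, 4]]

# x lies in the kernel of M, since 1*2 + 2*(-1) = 0.
x = [2, -1]

Mx = [M[0][0]*x[0] + M[0][1]*x[1],
      M[1][0]*x[0] + M[1][1]*x[1]]
# Mx is the zero vector, i.e. Mx = 0 * x, so x is an
# eigenvector of M with eigenvalue 0.
```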
In general, a nonzero vector x is an eigenvector for an n by n matrix M if Mx = cx for some scalar c. The scalar factor c is the corresponding eigenvalue. Our goal as we seek eigenvalues and eigenvectors is to find a basis for R^n that consists entirely of eigenvectors. This goal is not always achievable, but when it is, we can transform M into a "similar" matrix P^(-1)MP that is diagonal. This process is called diagonalization, and it will play a central role in this week's lab.
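For a 2 by 2 matrix the whole process can be carried out by hand, and the sketch below (using a matrix of my own choosing, not one from the lab) walks through it: the eigenvalues come from the characteristic polynomial c^2 - (trace)c + det = 0, the eigenvectors become the columns of P, and P^(-1)MP comes out diagonal.

```python
import math

M = [[4, 1],
     [2, 3]]

# Eigenvalues from the characteristic polynomial c^2 - (trace)c + det = 0.
tr = M[0][0] + M[1][1]                    # trace = 7
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]   # determinant = 10
disc = math.sqrt(tr*tr - 4*det)           # sqrt(49 - 40) = 3
c1, c2 = (tr + disc)/2, (tr - disc)/2     # eigenvalues 5 and 2

# For each eigenvalue c, solve (M - cI)v = 0.  Here:
#   c = 5:  -v1 + v2 = 0   gives  v = (1, 1)
#   c = 2:  2*v1 + v2 = 0  gives  v = (1, -2)
# P has the eigenvectors as its columns.
P = [[1, 1],
     [1, -2]]

# Invert P with the 2x2 adjugate formula.
detP = P[0][0]*P[1][1] - P[0][1]*P[1][0]
Pinv = [[ P[1][1]/detP, -P[0][1]/detP],
        [-P[1][0]/detP,  P[0][0]/detP]]

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[A[i][0]*B[0][j] + A[i][1]*B[1][j] for j in range(2)]
            for i in range(2)]

# D = P^(-1) M P is diagonal, with the eigenvalues on the diagonal.
D = matmul(Pinv, matmul(M, P))
```

One can confirm that D is (up to rounding) the diagonal matrix with 5 and 2 on the diagonal, exactly the eigenvalues found from the characteristic polynomial.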
The adjective "eigen" is German and corresponds to our word "characteristic" -- a term that will also be used frequently. The special charm of the German word is that it is much shorter than the English one. And, by the way, the characteristic roots you encountered in the Difference Equations lab are also eigenvalues, as we will see.
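To illustrate that last remark with an example of my own (the Fibonacci recurrence, which may differ from the one in the lab): the characteristic roots of x(n+1) = x(n) + x(n-1) solve r^2 = r + 1, and the same quadratic is the characteristic polynomial of the recurrence's companion matrix, so its eigenvalues are exactly those roots.

```python
import math

# Companion matrix of the recurrence x(n+1) = x(n) + x(n-1):
# multiplying A by (x(n), x(n-1)) gives (x(n+1), x(n)).
A = [[1, 1],
     [1, 0]]

# Eigenvalues of A solve c^2 - (trace)c + det = 0, i.e. c^2 - c - 1 = 0,
# which is exactly the characteristic equation r^2 = r + 1.
tr = A[0][0] + A[1][1]                    # 1
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # -1
disc = math.sqrt(tr*tr - 4*det)
eig1, eig2 = (tr + disc)/2, (tr - disc)/2

# The characteristic roots of the difference equation, for comparison.
root1 = (1 + math.sqrt(5))/2
root2 = (1 - math.sqrt(5))/2
# eig1 agrees with root1, and eig2 with root2, up to rounding.
```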
To see the syllabus for Week 10 in a separate window, click here.
Last modified: March 16, 1999