Some More on Eigenvalues and Eigenvectors

1. Some More on Eigenvalues and Eigenvectors

In this lecture, we take a closer look at what eigenvalues and eigenvectors are doing, with a brief preview of what's next in chapter four! This lecture gets a bit technical, so if you'd like to skip ahead, that's fine.

2. What's Going On?

A big result in linear algebra is the fact that, if the eigenvalues of a matrix are all distinct, then their associated eigenvectors form a basis for the space of all vectors of the same size! In other words, if an n-by-n matrix A has a basis of eigenvectors v1, v2, all the way to vn, with associated, distinct eigenvalues lambda 1, lambda 2, all the way to lambda n, then every n-dimensional vector x can be expressed as a linear combination of these eigenvectors: x equals c1 times v1, plus c2 times v2, all the way to cn times vn.
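As a minimal sketch in R (using a small hypothetical 2-by-2 matrix, since the slide's matrix isn't reproduced in this transcript), the coefficients of that linear combination can be found by solving a linear system against the matrix of eigenvectors:

# A small matrix with distinct eigenvalues (3 and 1), chosen for illustration
A <- matrix(c(2, 1,
              1, 2), nrow = 2, byrow = TRUE)
e <- eigen(A)
V <- e$vectors              # columns are the eigenvectors v1, ..., vn

# Any x can be written as x = c1*v1 + ... + cn*vn;
# the coefficients c solve the linear system V %*% c = x
x <- c(3, -1)
c_coeffs <- solve(V, x)
V %*% c_coeffs              # reconstructs x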

3. What's Going On?

Applying the matrix A to x, and using the fact that A times v equals lambda times v, we get a simple decomposition: A times x equals c1 lambda 1 v1, plus c2 lambda 2 v2, all the way to cn lambda n vn. Hence, eigenpairs turn matrix multiplication into a linear combination of scalar multiplications! This is one of the essential points of eigenanalysis: eigenvectors create "axes" along which matrix multiplication simply weights a vector according to the eigenvalue associated with each eigenvector and "how close" the original vector is to that eigenvector.
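Continuing the hypothetical sketch from above, this decomposition can be checked numerically: applying A directly and recombining the scaled eigen-coefficients give the same vector.

lambda <- e$values
A %*% x                     # direct matrix multiplication
V %*% (lambda * c_coeffs)   # c1*lambda1*v1 + ... + cn*lambdan*vn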

4. Iterating the Matrix

Many models evolve a system by iteratively applying a matrix to a vector. Done directly, this is extremely cumbersome, both analytically and computationally. However, if we have the eigenvector decomposition, t applications of A reduce to a linear combination of exponentiated eigenvalues times the eigenvectors: A to the power t, times x, equals c1 lambda 1 to the t, times v1, all the way to cn lambda n to the t, times vn. When one eigenvalue dominates the others in magnitude, exponentiation only widens the gap, so the largest eigenvalue (and its eigenvector) is really the place to look to see what happens as t gets bigger.
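Here is a sketch of both routes, reusing the hypothetical A from above; with eigenvalues 3 and 1, the component along the first eigenvector quickly dominates.

t_steps <- 10

# Route 1: multiply repeatedly (cumbersome for large t)
x_t <- x
for (i in 1:t_steps) x_t <- A %*% x_t

# Route 2: exponentiate the eigenvalues instead
x_t_eigen <- V %*% (lambda^t_steps * c_coeffs)

cbind(x_t, x_t_eigen)       # the two routes agree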

5. Example with Allele Frequencies

An example of a Markov matrix M is the matrix here. The element in row i, column j of M is the probability of transitioning from state j to state i in one time step, so each column sums to one. For allele frequencies, the states are the alleles a gene can take on, and the vector being evolved holds the probability of each allele. Thus, the 2, 1 element of M is the probability that the allele mutates from allele 1 to allele 2. The absence of a mutation is represented by the diagonal terms of the matrix.
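Since the slide's exact matrix isn't reproduced in this transcript, here is an illustrative stand-in: a four-allele mutation matrix in which each allele mutates to each of the other three with an assumed probability mu, and otherwise stays put. Every column sums to one, as the convention above requires.

mu <- 0.02                  # assumed mutation probability, for illustration only
M <- matrix(mu, nrow = 4, ncol = 4)
diag(M) <- 1 - 3 * mu       # diagonal terms: probability of no mutation
colSums(M)                  # every column sums to 1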

6. Example with Allele Frequencies

Notice that, after applying the eigen() command to the matrix M, the biggest eigenvalue of M is 1, while the other eigenvalues are smaller than 1 in magnitude.
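With the stand-in M above, this is easy to verify:

eM <- eigen(M)
eM$values                   # 1.00, followed by eigenvalues below 1 (here 0.92)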

7. Example with Allele Frequencies

Since numbers smaller than one shrink when exponentiated, repeatedly applying this matrix (simulating many rounds of gene mutation) eventually produces a vector that resembles the first eigenvector of M. This first eigenvector, extracted with the [,1] index and normalized to sum to one by dividing by the sum of its entries, indicates that the allele frequencies end up equal as time evolves, all equal to one fourth. This leading eigenvector is called the stationary distribution of the Markov model. These analyses are the building blocks for what's known as the Hardy-Weinberg Principle in genetics.
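With the stand-in M above, the computation looks like this (dividing by the sum also irons out the arbitrary sign that eigen() may attach to an eigenvector):

v1 <- eM$vectors[, 1]       # leading eigenvector, for eigenvalue 1
v1 / sum(v1)                # 0.25 0.25 0.25 0.25: the stationary distribution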

8. Let's practice

Let's explore some of these ideas!
