Inverting a Matrix
Given that $A$ is invertible, we can calculate $A^{-1}$ entrywise as follows:
$$(A^{-1})_{ij} = (-1)^{i+j} \, \frac{|A_{ji}|}{|A|}$$
Here $A_{ji}$ denotes the matrix obtained from $A$ by deleting the $j$th row and the $i$th column.
This cofactor formula is mainly of conceptual value; inverting a matrix through row reduction is more efficient in practice.
Example
$$A = \begin{pmatrix}
3 & 6 & 7 \\
0 & 2 & 1 \\
2 & 3 & 4
\end{pmatrix}$$

$$A^{-1} = \frac{1}{|A|}
\begin{pmatrix}
+\begin{vmatrix} 2 & 1 \\ 3 & 4 \end{vmatrix} & -\begin{vmatrix} 0 & 1 \\ 2 & 4 \end{vmatrix} & +\begin{vmatrix} 0 & 2 \\ 2 & 3 \end{vmatrix} \\
-\begin{vmatrix} 6 & 7 \\ 3 & 4 \end{vmatrix} & +\begin{vmatrix} 3 & 7 \\ 2 & 4 \end{vmatrix} & -\begin{vmatrix} 3 & 6 \\ 2 & 3 \end{vmatrix} \\
+\begin{vmatrix} 6 & 7 \\ 2 & 1 \end{vmatrix} & -\begin{vmatrix} 3 & 7 \\ 0 & 1 \end{vmatrix} & +\begin{vmatrix} 3 & 6 \\ 0 & 2 \end{vmatrix}
\end{pmatrix}^{T}$$

The matrix of signed minors (the cofactor matrix) must be transposed so that entry $(i, j)$ of $A^{-1}$ uses the minor $A_{ji}$, matching the formula above. Here $|A| = 3(8 - 3) - 6(0 - 2) + 7(0 - 4) = -1$.
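As a sanity check, the cofactor formula can be implemented directly and compared against a standard solver. This is a minimal NumPy sketch; the function names `minor` and `cofactor_inverse` are illustrative, not from the notes:

```python
import numpy as np

def minor(A, i, j):
    """Return the submatrix of A with row i and column j deleted."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def cofactor_inverse(A):
    """Invert A entrywise via (A^{-1})_{ij} = (-1)^{i+j} |A_{ji}| / |A|."""
    n = A.shape[0]
    det_A = np.linalg.det(A)
    inv = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Note the swapped indices: entry (i, j) uses the minor A_{ji}.
            inv[i, j] = (-1) ** (i + j) * np.linalg.det(minor(A, j, i)) / det_A
    return inv

A = np.array([[3.0, 6, 7], [0, 2, 1], [2, 3, 4]])
print(np.allclose(cofactor_inverse(A), np.linalg.inv(A)))  # True
```

The swapped indices in the inner loop are exactly the transpose step from the example above.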
Eigenvalues and Eigenvectors
Consider the matrix

$$A = \begin{pmatrix}
1 & 6 \\
5 & 2
\end{pmatrix}, \quad \overrightarrow{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

$$\begin{pmatrix}
1 & 6 \\
5 & 2
\end{pmatrix}
\begin{pmatrix} 1 \\ 1 \end{pmatrix}
= \begin{pmatrix} 7 \\ 7 \end{pmatrix} = 7 \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
In the above example, the eigenvalue is $7$ and the eigenvector is $\overrightarrow{v}$.
$$\text{If } A\overrightarrow{v} = \lambda \overrightarrow{v} \text{ for some } \lambda \in \mathbb{R} \text{ and } \overrightarrow{v} \neq \overrightarrow{0} \text{, then:}$$

1. $\lambda$ is an eigenvalue of $A$
2. $\overrightarrow{v}$ is an eigenvector of $A$
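The defining property $A\overrightarrow{v} = \lambda\overrightarrow{v}$ is easy to check numerically. A small NumPy sketch (the helper name `is_eigenpair` is illustrative):

```python
import numpy as np

def is_eigenpair(A, v, lam, tol=1e-9):
    """Check whether A v = lam * v for a nonzero vector v."""
    v = np.asarray(v, dtype=float)
    return np.linalg.norm(v) > tol and np.allclose(A @ v, lam * v, atol=tol)

A = np.array([[1.0, 6], [5, 2]])
print(is_eigenpair(A, [1, 1], 7))  # True: the example above
print(is_eigenpair(A, [1, 0], 1))  # False: [1, 0] is not an eigenvector
```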
Examples
$$A = \begin{pmatrix} 7 & -3 \\ 10 & -4 \end{pmatrix}, \quad \overrightarrow{v} = \begin{pmatrix} 3 \\ 5 \end{pmatrix}, \quad
A \overrightarrow{v} = \begin{pmatrix} 6 \\ 10 \end{pmatrix} = 2 \begin{pmatrix} 3 \\ 5 \end{pmatrix}$$
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad \overrightarrow{v_2} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \overrightarrow{v_0} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

$$A \overrightarrow{v_2} = \begin{pmatrix} 2 \\ 2 \end{pmatrix} = 2 \overrightarrow{v_2}, \qquad
A \overrightarrow{v_0} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} = 0 \cdot \overrightarrow{v_0}$$
Note that each eigenvector has exactly ONE eigenvalue, but an eigenvalue can have MULTIPLE eigenvectors. Each eigenvalue has a "family" of eigenvectors: scaling an eigenvector by any nonzero constant gives another eigenvector with the same eigenvalue.
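The "family" remark can be verified directly: because $A(c\overrightarrow{v}) = c(A\overrightarrow{v}) = c\lambda\overrightarrow{v} = \lambda(c\overrightarrow{v})$, every nonzero multiple of an eigenvector works. A quick NumPy check using the first example above:

```python
import numpy as np

A = np.array([[7.0, -3], [10, -4]])
v = np.array([3.0, 5])

# Every nonzero scalar multiple of v is an eigenvector with the SAME eigenvalue 2.
for c in (1, -2, 0.5, 100):
    w = c * v
    print(np.allclose(A @ w, 2 * w))  # True each time
```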
Finding eigenvalues
Suppose that $A\overrightarrow{v} = \lambda \overrightarrow{v}$ for some $\lambda \in \mathbb{R}$ and $\overrightarrow{v} \neq \overrightarrow{0}$. Then the following chain of equivalences holds:

$$A\overrightarrow{v} = \lambda \overrightarrow{v} \iff \lambda I_n \overrightarrow{v} - A\overrightarrow{v} = \overrightarrow{0} \iff (\lambda I_n - A) \overrightarrow{v} = \overrightarrow{0}$$
$$\iff \overrightarrow{v} \in \mathrm{Nul}(\lambda I_n - A) \text{, so } \mathrm{Nul}(\lambda I_n - A) \neq \{\overrightarrow{0}\}$$
$$\iff (\lambda I_n - A) \text{ is not invertible} \iff \det(\lambda I_n - A) = 0 \iff \lambda \text{ is a root of } \det(\lambda I_n - A)$$

Note: $\det(t I_n - A)$ is an $n$th-degree polynomial $P_A(t)$, called the characteristic polynomial of $A$.
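Following the chain above, eigenvalues are the roots of the characteristic polynomial. For a $2 \times 2$ matrix, $P_A(t) = t^2 - \mathrm{tr}(A)\,t + \det(A)$, a standard identity. A NumPy sketch using the first eigenvalue example:

```python
import numpy as np

A = np.array([[1.0, 6], [5, 2]])

# Characteristic polynomial of a 2x2 matrix: t^2 - trace(A) * t + det(A).
# For this A: t^2 - 3t - 28 = (t - 7)(t + 4).
coeffs = [1, -np.trace(A), np.linalg.det(A)]
eigenvalues = np.roots(coeffs)

print(np.allclose(sorted(eigenvalues.real), [-4, 7]))  # True

# Cross-check against NumPy's eigenvalue solver.
print(np.allclose(sorted(np.linalg.eigvals(A).real), [-4, 7]))  # True
```

This recovers the eigenvalue $7$ from the earlier example, along with the second eigenvalue $-4$.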