Applications and Limitations of:
QR decomposition
Schur decomposition
QR decomposition: This factorization is the orthonormalization of the columns of the matrix. Q is a unitary matrix whose columns are the orthonormalized columns of the original matrix, and R is a right (upper) triangular matrix that expresses how the columns of Q can be recombined to recover the original columns. This is incredibly useful: it is the core piece of the "QR algorithm," and it also shows up in finding the pseudo-inverse of a matrix and in computing the Singular Value Decomposition. A variation is RRQR (rank-revealing QR), which includes a column pivot and can be used to find the nullity, the rank, and orthonormal bases for the kernel and image of a matrix. Once again, QR factorization has many algorithms, and which one to use depends on what you are doing. (For instance, I would rather use the Gram-Schmidt process if I had to work it out with paper and pencil, but Givens rotations are much better suited to a parallel computer.) A short numerical sketch of both the plain and the rank-revealing factorization appears after the eigendecomposition paragraph below.

Eigendecomposition: This is probably the most important decomposition. The matrix is decomposed into QΛQ−1, where Q is a matrix of the "eigenvectors" (as columns), and Λ is a diagonal matrix of the "eigenvalues". The elements of Λ are the roots of the characteristic polynomial of the matrix, det(λI−M). The reason that this guy is so important is that it breaks the matrix down into acting more like a scalar: for every eigenvector q, there is some eigenvalue λ such that Mq=λq. Moreover, the eigenvectors span the space, so you can express ANY vector as a sum of eigenvectors, distribute the matrix through the sum, use the identity above to scale each component by the corresponding eigenvalue, and then recombine the components. This can give much faster iterated multiplication, and it can yield a nice closed form for certain sequences and iterated operations. (E.g., the closed form of the Fibonacci sequence, a.k.a. Binet's formula, can be found this way.) Although this is all very nice, the eigendecomposition can only be done if the matrix is square and diagonalizable. However, the "generalized eigendecomposition" (the Jordan normal form) at least solves the diagonalizability problem. It represents Λ instead as a block-diagonal matrix, where there are rules about what the blocks can be: specifically, each block is either a 1x1 matrix or a bidiagonal matrix with a single eigenvalue repeated along the diagonal and ones on the superdiagonal. (I won't go into detail about why this is, but it is interesting to look into.) In general, this is a hard thing to calculate. There are, once again, several ways to do it, and it is all about what you need and what you have. For matrices of size 5x5 and larger, you almost always have to use numerical methods, as there is in general no closed form in radicals for the eigenvalues. An example of a relatively simple numerical method is the QR algorithm mentioned in the "QR decomposition" section. Another method (generally taught as the way to go in undergrad) is to compute the characteristic polynomial, extract its roots (these roots are the eigenvalues), and then use each root to find the corresponding eigenvectors with some kind of kernel-finding algorithm.
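Here is a minimal sketch of the QR factorization discussed above, both the plain and the column-pivoted (rank-revealing) form, assuming NumPy and SciPy are available; the test matrix and the tolerance are just illustrative choices.

    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(0)

    # A 6x4 matrix whose last column is a combination of the first two, so its rank is 3.
    A = rng.standard_normal((6, 4))
    A[:, 3] = 2.0 * A[:, 0] - A[:, 1]

    # Plain QR: the columns of Q are the orthonormalized columns of A,
    # and R records how to recombine them to get A back.
    Q, R = np.linalg.qr(A)
    print(np.allclose(A, Q @ R))            # True: A = QR
    print(np.allclose(Q.T @ Q, np.eye(4)))  # True: Q has orthonormal columns

    # Column-pivoted QR (RRQR): qr(A, pivoting=True) factors A[:, piv] = Q R with the
    # diagonal of R non-increasing in magnitude, so small diagonal entries reveal the rank.
    Qp, Rp, piv = qr(A, pivoting=True)
    tol = 1e-10 * abs(Rp[0, 0])
    print(int(np.sum(np.abs(np.diag(Rp)) > tol)))  # 3: the numerical rank of A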
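And a minimal sketch (NumPy assumed) of the eigendecomposition M = QΛQ−1 and the "scale each eigen-component" trick, using the Fibonacci step matrix as the worked example; the choice of matrix and the use of round() are illustrative.

    import numpy as np

    # The Fibonacci step matrix: M**n equals [[F(n+1), F(n)], [F(n), F(n-1)]].
    M = np.array([[1.0, 1.0],
                  [1.0, 0.0]])

    # Eigendecomposition M = Q Λ Q^-1: columns of Q are eigenvectors, lam holds the
    # eigenvalues (here the golden ratio and its conjugate).
    lam, Q = np.linalg.eig(M)
    Qinv = np.linalg.inv(Q)
    print(np.allclose(M, Q @ np.diag(lam) @ Qinv))   # True: M = Q Λ Q^-1

    # Iterated multiplication becomes scalar powers: M^n = Q Λ^n Q^-1.
    # The (0, 1) entry of M^n is the n-th Fibonacci number, i.e. Binet's formula in disguise.
    def fib(n):
        Mn = Q @ np.diag(lam ** n) @ Qinv
        return int(round(Mn[0, 1]))

    print([fib(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]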
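Finally, a bare-bones sketch (NumPy assumed) of the unshifted QR algorithm mentioned above. Practical implementations add a Hessenberg reduction and shifts; the small symmetric test matrix here is an illustrative choice on which the plain iteration converges.

    import numpy as np

    # Unshifted QR algorithm: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k.
    # Each iterate is similar to A (A_{k+1} = Q_k^T A_k Q_k), and for this symmetric
    # positive-definite example the iterates approach a diagonal matrix whose entries
    # are the eigenvalues.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    Ak = A.copy()
    for _ in range(200):
        Qk, Rk = np.linalg.qr(Ak)
        Ak = Rk @ Qk

    print(np.sort(np.diag(Ak)))           # approximate eigenvalues from the iteration
    print(np.sort(np.linalg.eigvals(A)))  # reference values for comparison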
Schur decomposition
The Schur decomposition reads as follows: if A is an n × n square matrix with complex entries, then A can be expressed as A = QUQ−1, where Q is a unitary matrix (so that its inverse Q−1 is also the conjugate transpose Q* of Q), and U is an upper triangular matrix, which is called a Schur form of A. Since U is similar to A, it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of U. The Schur decomposition implies that there exists a nested sequence of A-invariant subspaces {0} = V0 ⊂ V1 ⊂ ... ⊂ Vn = Cn, and that there exists an ordered orthonormal basis (for the standard Hermitian form of Cn) such that the first i basis vectors span Vi for each i occurring in the nested sequence. Phrased somewhat differently, the first part says that a linear operator A on a complex finite-dimensional vector space stabilizes a complete flag (V1, ..., Vn).
Every invertible operator is contained in a Borel group. Every operator fixes a point of the flag manifold.
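As a small illustration (SciPy assumed, with an arbitrary random 4x4 test matrix), the following sketch computes a complex Schur form, checks A = QUQ*, reads the eigenvalues off the diagonal of U, and verifies that the leading columns of Q span the nested A-invariant subspaces described above.

    import numpy as np
    from scipy.linalg import schur

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))

    # Complex Schur form A = Q U Q*, with U genuinely upper triangular.
    # (The default real Schur form is only quasi-triangular, with 2x2 blocks for
    # complex-conjugate eigenvalue pairs.)
    U, Q = schur(A, output='complex')

    print(np.allclose(A, Q @ U @ Q.conj().T))      # True: A = Q U Q*
    print(np.allclose(Q.conj().T @ Q, np.eye(4)))  # True: Q is unitary
    print(np.diag(U))                              # eigenvalues of A on the diagonal of U
    print(np.linalg.eigvals(A))                    # same spectrum, possibly in another order

    # The leading columns of Q span the nested A-invariant subspaces V1 ⊂ V2 ⊂ ... ⊂ Vn:
    # A maps span(Q[:, :k]) into itself for every k.
    for k in range(1, 5):
        Vk = Q[:, :k]
        outside = (np.eye(4) - Vk @ Vk.conj().T) @ (A @ Vk)  # component outside span(Vk)
        print(k, np.allclose(outside, 0.0))                  # True for every k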