
A **matrix** (in mathematics) is a rectangular array of values, such as

[[Math:c|M= \begin{bmatrix} 1 \amp 2 \amp 3 \\ 4 \amp 5 \amp 6 \end{bmatrix} ]]

Matrices play a key role in linear algebra and are also important in many other areas of mathematics. An element of a matrix is usually denoted by two subscripts, the row (first) and the column (second). For the matrix above,

[[Math:c|M_{2,1} = 4 ]]

This may be counterintuitive, as in the usual rectangular coordinate system the first coordinate (x) is horizontal and in notation goes before the second coordinate (y). A matrix that only has one column (and sometimes also a matrix that only has one row) is called a vector.

**Matrix transposition** means exchanging rows with columns and is marked with the superscript T:

[[Math:c| \begin{bmatrix} 4 \amp 5 \\ 2 \amp -3 \end{bmatrix}^T = \begin{bmatrix} 4 \amp 2 \\ 5 \amp -3 \\ \end{bmatrix} ]]
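Transposition is a one-liner over nested lists. The sketch below is illustrative; the function name is an assumption, not taken from the article's reference code.

```python
# Transpose a matrix stored as a list of rows (illustrative sketch).
def transpose(m):
    """Row i of the result is column i of the input."""
    return [[m[r][c] for r in range(len(m))] for c in range(len(m[0]))]

print(transpose([[4, 5], [2, -3]]))  # [[4, 2], [5, -3]]
```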

**Matrix addition**^{[1]} is relatively simple: values that are at the same location (row and column) in the two matrices are added, producing a result matrix of the same size. Only matrices that have the same number of rows and the same number of columns can be added. Matrix subtraction is similar to addition, with the sum replaced by the difference.

**Matrix multiplication by a scalar** (a single numeric value) produces a matrix of the same dimensions, in which every element of the initial matrix is multiplied by the value of that scalar.
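Both element-wise operations can be sketched in a few lines over nested lists; the helper names here are illustrative assumptions, not the article's reference code.

```python
# Element-wise addition of two matrices of identical dimensions.
def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))]
            for i in range(len(a))]

# Multiply every element of a matrix by the scalar k.
def scale(k, m):
    return [[k * x for x in row] for row in m]

print(add([[1, 2], [3, 4]], [[10, 20], [30, 40]]))  # [[11, 22], [33, 44]]
print(scale(2, [[1, 2], [3, 4]]))                   # [[2, 4], [6, 8]]
```

Subtraction follows the same pattern with `-` in place of `+`.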

**Matrix multiplication**^{[2]} is a more complex operation and is **not** simply the result of multiplying values at the same positions. Multiplication of matrices **A** and **B** (producing the matrix **C**) is performed following the formula

- [[Math:c| C_{i,j} = A_{i,1}B_{1,j} \plus A_{i,2}B_{2,j} \plus \cdots \plus A_{i,n}B_{n,j} = \sum_{r=1}^n A_{i,r}B_{r,j},]]

From this formula it is obvious that the number of columns in the first matrix must be equal to the number of rows in the second matrix (n in the formula above); the result then has as many rows as the first matrix and as many columns as the second. A column vector can be multiplied on the left by a matrix whose number of columns equals the number of rows in the vector.

Unlike the usual multiplication of two simple (scalar) values, the order in matrix multiplication is important: in general, when A and B are matrices,

- [[Math:c|A B \neq B A]]
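The multiplication formula above translates directly into code, and a small example shows the products in the two orders disagreeing. This is a minimal sketch; `matmul` is an illustrative name, not from the article's reference code.

```python
# C[i][j] = sum over r of A[i][r] * B[r][j]
def matmul(a, b):
    n = len(b)  # columns of a must equal rows of b
    return [[sum(a[i][r] * b[r][j] for r in range(n))
             for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]  -- different, so AB != BA here
```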

**Matrix inverse** can be defined using the concept of matrix multiplication. The matrix **B** is said to be the inverse of the matrix **A** if

- [[Math:c|\mathbf{AB} = \mathbf{I} \ ]]

where **I** is the *identity matrix*. The identity matrix contains ones on the diagonal and zeros everywhere else, for instance

- [[Math:c|\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix} ]]

An inverse can only be computed for a square matrix, and even for a square matrix it does not always exist. As for scalars, it is also true for matrices that

- [[Math:c|\mathbf{AB} = \mathbf{BA} (= \mathbf{I}) \ ]]
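For a 2 × 2 matrix the inverse has a well-known closed form, (1/(ad - bc)) [[d, -b], [-c, a]], which lets us check that multiplying in either order gives the identity. The code is an illustrative sketch under that assumption, not the article's reference implementation.

```python
# Closed-form inverse of a 2x2 matrix [[a, b], [c, d]]; assumes ad - bc != 0.
def inverse2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(a, b):
    return [[sum(a[i][r] * b[r][j] for r in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[2, 1], [1, 1]]
B = inverse2(A)
print(B)             # [[1.0, -1.0], [-1.0, 2.0]]
print(matmul(A, B))  # [[1.0, 0.0], [0.0, 1.0]]
print(matmul(B, A))  # [[1.0, 0.0], [0.0, 1.0]]
```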

**Matrix determinant** is a scalar value that can be computed for a square matrix. It is relatively complex to compute and can only be written down easily for small matrices; for instance,

- [[Math:c| A = \begin{bmatrix} a \amp b\\c \amp d \end{bmatrix}\, ]]

has determinant

- [[Math:c|\det A = ad - bc.\ ]]

For bigger matrices, the determinant can be defined through recursion, computing it from a formula that contains determinants of parts of the matrix. The determinants of these smaller matrices can be computed with the same formula over parts of even smaller matrices, until in the end we only need to find determinants of 2 x 2 matrices, which is easy to do. This operation, described in ^{[3]}, looks like:

[[Math:c| \det \begin{bmatrix} a \amp b \amp c\\ d \amp e \amp f \\ g \amp h \amp i \end{bmatrix}\ = a \det \begin{bmatrix} e \amp f\\h \amp i \end{bmatrix}\ - b \det \begin{bmatrix} d \amp f\\g \amp i \end{bmatrix}\ \plus c \det \begin{bmatrix} d \amp e\\g \amp h \end{bmatrix}\ ]]

Hence the determinant can be found from the values in the top row, each multiplied by the determinant of the smaller matrix obtained by removing the top row and the column containing that value. Every second term is then multiplied by -1 and the sum is computed.

A matrix is only invertible if its determinant is not zero. The determinant is important in solving linear systems of equations. Surprisingly, the code to compute the determinant is not very complex.
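The recursive cofactor expansion along the top row, as described above, can be sketched as follows (an illustrative version; for larger matrices this approach is slow, and practical code uses elimination instead):

```python
# Determinant by cofactor expansion along the top row (recursive).
def det(m):
    if len(m) == 1:
        return m[0][0]
    total = 0
    for col, value in enumerate(m[0]):
        # Minor: drop the top row and the current column.
        minor = [row[:col] + row[col + 1:] for row in m[1:]]
        # Every second term carries a factor of -1.
        total += (-1) ** col * value * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                       # -2
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))     # -3
```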

**Matrix adjoint** is similar to the matrix inverse but, unlike the inverse, the adjoint can be computed for any square matrix and without any need to divide. It is relatively complex to define and compute; please use the source code and ^{[4]} for reference.
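One common definition of the adjoint (also called the adjugate) is the transpose of the matrix of cofactors; the sketch below assumes that definition and uses illustrative names, not the article's reference code.

```python
# Determinant by cofactor expansion (helper for the adjoint).
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c]
               * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

# Adjoint (adjugate): transpose of the cofactor matrix; no division needed.
def adjoint(m):
    n = len(m)
    cof = [[(-1) ** (i + j)
            * det([row[:j] + row[j + 1:]
                   for k, row in enumerate(m) if k != i])
            for j in range(n)]
           for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

print(adjoint([[1, 2], [3, 4]]))  # [[4, -2], [-3, 1]]
```

Multiplying a matrix by its adjoint yields the determinant times the identity, which is why the inverse, when it exists, is the adjoint divided by the determinant.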

One of the frequent uses of matrices is to abbreviate a system of linear equations

- [[Math:c|\begin{alignat}{7} a_{11} x_1 \amp\amp\; \plus \;\amp\amp a_{12} x_2 \amp\amp\; \plus \cdots \plus \;\amp\amp a_{1n} x_n \amp\amp\; = \;\amp\amp\amp b_1 \\ a_{21} x_1 \amp\amp\; \plus \;\amp\amp a_{22} x_2 \amp\amp\; \plus \cdots \plus \;\amp\amp a_{2n} x_n \amp\amp\; = \;\amp\amp\amp b_2 \\ \vdots\;\;\; \amp\amp \amp\amp \vdots\;\;\; \amp\amp \amp\amp \vdots\;\;\; \amp\amp \amp\amp\amp \;\vdots \\ a_{m1} x_1 \amp\amp\; \plus \;\amp\amp a_{m2} x_2 \amp\amp\; \plus \cdots \plus \;\amp\amp a_{mn} x_n \amp\amp\; = \;\amp\amp\amp b_m. \\ \end{alignat}]]

Here [[Math:c|x_1,\ x_2,...,x_n]] are the unknowns, [[Math:c|a_{11},\ a_{12},...,\ a_{mn}]] are the coefficients and [[Math:c|b_1,\ b_2,...,b_m]] are the constant terms.

This system can be written as

- [[Math:c|A\bold{x}=\bold{b}]]

where

- [[Math:c| A= \begin{bmatrix} a_{11} \amp a_{12} \amp \cdots \amp a_{1n} \\ a_{21} \amp a_{22} \amp \cdots \amp a_{2n} \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ a_{m1} \amp a_{m2} \amp \cdots \amp a_{mn} \end{bmatrix},\quad \bold{x}= \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},\quad \bold{b}= \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix} ]]
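When A is square with a nonzero determinant, one way to solve such a system is Cramer's rule (not described above): each unknown is the determinant of A with the corresponding column replaced by **b**, divided by det A. The sketch below assumes that method; the names are illustrative.

```python
# Determinant by cofactor expansion along the top row.
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c]
               * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

# Solve A x = b by Cramer's rule; assumes square A with det(A) != 0.
def solve(A, b):
    d = det(A)
    x = []
    for j in range(len(A)):
        # A with column j replaced by b.
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        x.append(det(Aj) / d)
    return x

# 2*x1 + x2 = 5 and x1 + 3*x2 = 5 give x1 = 2, x2 = 1.
print(solve([[2, 1], [1, 3]], [5, 5]))  # [2.0, 1.0]
```

For large systems Gaussian elimination is far more efficient, but Cramer's rule shows the determinant's role directly.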

^{1 }Rules of Matrix arithmetic
^{2 }Matrix multiplication
^{3 }How to compute determinants
^{4 }Wikipedia about matrix adjoint