4.1: Determinants: Definition

Objectives
  1. Learn the definition of the determinant.
  2. Learn some ways to eyeball a matrix with zero determinant, and how to compute determinants of upper- and lower-triangular matrices.
  3. Learn the basic properties of the determinant, and how to apply them.
  4. Recipe: compute the determinant using row and column operations.
  5. Theorems: existence theorem, invertibility property, multiplicativity property, transpose property.
  6. Vocabulary words: diagonal, upper-triangular, lower-triangular, transpose.
  7. Essential vocabulary word: determinant.

In this section, we define the determinant, and we present one way to compute it. Then we discuss some of the many wonderful properties the determinant enjoys.

The Definition of the Determinant

The determinant of a square matrix \(A\) is a real number \(\det(A)\). It is defined via its behavior with respect to row operations; this means we can use row reduction to compute it. We will give a recursive formula for the determinant in Section 4.2. We will also show in Subsection Magical Properties of the Determinant that the determinant is related to invertibility, and in Section 4.3 that it is related to volumes.

Definition \(\PageIndex\): Determinant

The determinant is a function

\[ \det\colon\{\text{square matrices}\}\longrightarrow\mathbb{R} \nonumber \]

satisfying the following properties:

  1. Doing a row replacement on \(A\) does not change \(\det(A)\).
  2. Scaling a row of \(A\) by a scalar \(c\) multiplies the determinant by \(c\).
  3. Swapping two rows of a matrix multiplies the determinant by \(-1\).
  4. The determinant of the identity matrix \(I_n\) is equal to \(1\).

In other words, to every square matrix \(A\) we assign a number \(\det(A)\) in a way that satisfies the above properties.

In each of the first three cases, doing a row operation on a matrix scales the determinant by a nonzero number. (Multiplying a row by zero is not a row operation.) Therefore, doing row operations on a square matrix \(A\) does not change whether or not the determinant is zero.

The main motivation behind using these particular defining properties is geometric: see Section 4.3. Another motivation for this definition is that it tells us how to compute the determinant: we row reduce and keep track of the changes.
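These properties are easy to experiment with numerically. Here is a minimal sketch, assuming NumPy and an arbitrarily chosen \(2\times 2\) matrix, that checks all four defining properties using `np.linalg.det`:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 4.0]])   # arbitrary example matrix

# Property 1: a row replacement (R2 = R2 - 3*R1) does not change det.
B = A.copy()
B[1] -= 3 * B[0]
assert np.isclose(np.linalg.det(B), np.linalg.det(A))

# Property 2: scaling a row by c multiplies det by c.
C = A.copy()
C[0] *= 5
assert np.isclose(np.linalg.det(C), 5 * np.linalg.det(A))

# Property 3: swapping two rows multiplies det by -1.
D = A[[1, 0]]
assert np.isclose(np.linalg.det(D), -np.linalg.det(A))

# Property 4: det(I_n) = 1.
assert np.isclose(np.linalg.det(np.eye(3)), 1.0)
```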

Example \(\PageIndex\)

Let us compute \(\det\left(\begin{array}{cc}2&1\\1&4\end{array}\right).\) First we row reduce, then we compute the determinant in the opposite order:

The reduced row echelon form of the matrix is the identity matrix \(I_2\), so its determinant is \(1\). The last step in the row reduction was a row replacement, so the second-to-last matrix also has determinant \(1\). The previous step in the row reduction was a row scaling by \(-1/7\); since the determinant of the second matrix times \(-1/7\) is \(1\), the determinant of the second matrix must be \(-7\). The first step in the row reduction was a row swap, so the determinant of the first matrix is negative the determinant of the second. Thus, the determinant of the original matrix is \(7\).

Note that our answer agrees with Definition 3.5.2 of the determinant in Section 3.5.

Example \(\PageIndex\)
Solution

Let \(A=\left(\begin{array}{cc}1&0\\0&3\end{array}\right)\). Since \(A\) is obtained from \(I_2\) by multiplying the second row by the constant \(3\), we have

\[ \det(A)=3\det(I_2)=3\cdot 1=3. \nonumber \]

Note that our answer agrees with Definition 3.5.2 of the determinant in Section 3.5.

Example \(\PageIndex\)
Solution

First we row reduce, then we compute the determinant in the opposite order:

The reduced row echelon form is \(I_3\), which has determinant \(1\). Working backwards from \(I_3\) and using the four defining properties, Definition \(\PageIndex\), we see that the second matrix also has determinant \(1\) (it differs from \(I_3\) by a row replacement), and the first matrix has determinant \(-1\) (it differs from the second by a row swap).

Here is the general method for computing determinants using row reduction.

Recipe: Computing Determinants by Row Reducing

Let \(A\) be a square matrix. Suppose that you do some number of row operations on \(A\) to obtain a matrix \(B\) in row echelon form. Then

\[ \det(A) = (-1)^r\cdot\frac{(\text{product of the diagonal entries of }B)}{(\text{product of the scaling factors used})}, \nonumber \]

where \(r\) is the number of row swaps performed.

In other words, the determinant of \(A\) is the product of the diagonal entries of the row echelon form \(B\), times a factor of \(\pm1\) coming from the number of row swaps you made, divided by the product of the scaling factors used in the row reduction.
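The recipe translates directly into an algorithm. The following is a minimal sketch in Python, assuming NumPy and a hypothetical example matrix: it reduces to row echelon form using only row swaps and row replacements (so no scaling factors appear in the denominator), then takes \((-1)^r\) times the product of the diagonal entries.

```python
import numpy as np

def det_by_row_reduction(A):
    """Reduce A to row echelon form using only row swaps and row
    replacements, then return (-1)**r times the product of the
    diagonal entries.  No rows are scaled, so the denominator in
    the recipe is 1."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    swaps = 0
    for j in range(n):
        # Find a pivot in column j, at or below the diagonal.
        nonzero = np.flatnonzero(U[j:, j])
        if nonzero.size == 0:
            return 0.0                     # no pivot in this column: det = 0
        p = j + nonzero[0]
        if p != j:
            U[[j, p]] = U[[p, j]]          # row swap: multiplies det by -1
            swaps += 1
        for i in range(j + 1, n):          # row replacements: det unchanged
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    return (-1) ** swaps * np.prod(np.diag(U))

A = [[0, -7, -4], [2, 4, 6], [3, 7, -1]]   # hypothetical example matrix
print(det_by_row_reduction(A))             # agrees with np.linalg.det(A)
print(np.linalg.det(A))
```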

Remark

This is an efficient way of computing the determinant of a large matrix, either by hand or by computer. The computational complexity of row reduction is \(O(n^3)\); by contrast, the cofactor expansion algorithm we will learn in Section 4.2 has complexity \(O(n!)\approx O(n^ne^{-n}\sqrt{n})\), which is much larger. (Cofactor expansion has other uses.)

Example \(\PageIndex\)

We row reduce the matrix, keeping track of the number of row swaps and of the scaling factors used.

We made two row swaps and scaled once by a factor of \(1/2\), so the Recipe: Computing Determinants by Row Reducing says that

Example \(\PageIndex\)
Solution

We row reduce the matrix, keeping track of the number of row swaps and of the scaling factors used.

We did not make any row swaps, and we scaled once by a factor of \(-1/5\), so the Recipe: Computing Determinants by Row Reducing says that

Example \(\PageIndex\): The Determinant of a \(2\times 2\) Matrix

Let us use the Recipe: Computing Determinants by Row Reducing to compute the determinant of a general \(2\times 2\) matrix \(A = \left(\begin{array}{cc}a&b\\c&d\end{array}\right)\).

If \(a = 0\), then swapping the two rows gives

\[ \left(\begin{array}{cc}0&b\\c&d\end{array}\right)\xrightarrow{R_1\longleftrightarrow R_2}\left(\begin{array}{cc}c&d\\0&b\end{array}\right), \nonumber \]

which is in row echelon form; one row swap was performed, so \(\det(A) = -cb = ad-bc\) (as \(a=0\)). If \(a\neq 0\), then the row replacement \(R_2 = R_2 - \frac caR_1\) gives

\[ \left(\begin{array}{cc}a&b\\c&d\end{array}\right)\xrightarrow{R_2=R_2-\frac caR_1}\left(\begin{array}{cc}a&b\\0&d-\frac{bc}a\end{array}\right), \nonumber \]

with no swaps or scalings, so \(\det(A) = a\left(d-\frac{bc}a\right) = ad-bc\). In either case, we recover Definition 3.5.2 in Section 3.5:

\[ \det\left(\begin{array}{cc}a&b\\c&d\end{array}\right) = ad-bc. \nonumber \]

If a matrix is already in row echelon form, then you can simply read off the determinant as the product of the diagonal entries. It turns out this is true for a slightly larger class of matrices called triangular.

Definition \(\PageIndex\): Diagonal

The diagonal entries of a square matrix \(A\) are the entries \(a_{11},a_{22},\ldots,a_{nn}\): they start in the upper-left corner and proceed down and to the right. A square matrix is called upper-triangular if all of its entries below the diagonal are zero, and lower-triangular if all of its entries above the diagonal are zero.

Proposition \(\PageIndex\)

Let \(A\) be an \(n\times n\) matrix.

  1. If \(A\) has a zero row or column, then \(\det(A) = 0.\)
  2. If \(A\) is upper-triangular or lower-triangular, then \(\det(A)\) is the product of its diagonal entries.
  1. Suppose that \(A\) has a zero row. Let \(B\) be the matrix obtained by negating the zero row. Then \(\det(A) = -\det(B)\) by the second defining property, Definition \(\PageIndex\). But \(A = B\), so \(\det(A) = \det(B)\):
    \[\left(\begin{array}{ccc}1&2&3\\0&0&0\\7&8&9\end{array}\right)\quad\xrightarrow{R_2=-R_2}\quad\left(\begin{array}{ccc}1&2&3\\0&0&0\\7&8&9\end{array}\right).\nonumber\]
    Putting these together yields \(\det(A) = -\det(A)\), so \(\det(A)=0\).
    Now suppose that \(A\) has a zero column. Then \(A\) is not invertible by Theorem 3.6.1 in Section 3.6, so its reduced row echelon form has a zero row. Since row operations do not change whether the determinant is zero, we conclude \(\det(A)=0\).
  2. First suppose that \(A\) is upper-triangular, and that one of the diagonal entries is zero, say \(a_{ii}=0\). We can perform row operations to clear the entries above the nonzero diagonal entries:
    \[\left(\begin{array}{cccc}a_{11}&\star&\star&\star \\ 0&a_{22}&\star&\star \\ 0&0&0&\star \\ 0&0&0&a_{44}\end{array}\right)\xrightarrow{\qquad}\left(\begin{array}{cccc}a_{11}&0&\star &0\\0&a_{22}&\star&0\\0&0&0&0\\0&0&0&a_{44}\end{array}\right)\nonumber\]
    In the resulting matrix, the \(i\)th row is zero, so \(\det(A) = 0\) by the first part.
    Still assuming that \(A\) is upper-triangular, now suppose that all of the diagonal entries of \(A\) are nonzero. Then \(A\) can be transformed to the identity matrix by scaling the diagonal entries and then doing row replacements:
    \[\left(\begin{array}{ccc}a&\star&\star \\ 0&b&\star \\ 0&0&c\end{array}\right)\xrightarrow{\text{scale by }a^{-1},\,b^{-1},\,c^{-1}}\left(\begin{array}{ccc}1&\star&\star \\ 0&1&\star \\ 0&0&1\end{array}\right)\xrightarrow{\text{row replacements}}\left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right)\nonumber\]
    Since \(\det(I_n) = 1\) and we scaled by the reciprocals of the diagonal entries, this implies \(\det(A)\) is the product of the diagonal entries.
    The same argument works for lower-triangular matrices, except that the row replacements go down instead of up.
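As a quick numerical illustration of the proposition, assuming NumPy and an arbitrarily chosen upper-triangular matrix, the determinant agrees with the product of the diagonal entries:

```python
import numpy as np

# For a triangular matrix, det is the product of the diagonal entries.
U = np.array([[2.0, 5.0, -1.0],
              [0.0, 3.0,  4.0],
              [0.0, 0.0, -7.0]])
print(np.prod(np.diag(U)))     # 2 * 3 * (-7) = -42.0
print(np.linalg.det(U))        # -42.0 (up to rounding)
```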
Example \(\PageIndex\)

Compute the determinants of these matrices:

Solution

The first matrix is upper-triangular, the second is lower-triangular, and the third has a zero row:

A matrix can always be transformed into row echelon form by a series of row operations, and a matrix in row echelon form is upper-triangular. Therefore, we have completely justified Recipe: Computing Determinants by Row Reducing for computing the determinant.

The determinant is characterized by its defining properties, Definition \(\PageIndex\), since we can compute the determinant of any matrix using row reduction, as in the above Recipe: Computing Determinants by Row Reducing. However, we have not yet proved the existence of a function satisfying the defining properties! Row reducing will compute the determinant if it exists, but we cannot use row reduction to prove existence, because we do not yet know that you compute the same number by row reducing in two different ways.

Theorem \(\PageIndex\): Existence of the Determinant

There exists one and only one function from the set of square matrices to the real numbers, that satisfies the four defining properties, Definition \(\PageIndex\).

We will prove the existence theorem in Section 4.2, by exhibiting a recursive formula for the determinant. Again, the real content of the existence theorem is:

Note \(\PageIndex\)

No matter which row operations you do, you will always compute the same value for the determinant.

Magical Properties of the Determinant

In this subsection, we will discuss a number of the amazing properties enjoyed by the determinant: the invertibility property, Proposition \(\PageIndex\), the multiplicativity property, Proposition \(\PageIndex\), and the transpose property, Proposition \(\PageIndex\).

Proposition \(\PageIndex\): Invertibility Property

A square matrix \(A\) is invertible if and only if \(\det(A)\neq 0\).

Proof

If \(A\) is invertible, then it has a pivot in every row and column by Theorem 3.6.1 in Section 3.6, so its reduced row echelon form is the identity matrix. Since row operations do not change whether the determinant is zero, and since \(\det(I_n) = 1\), this implies \(\det(A)\neq 0.\) Conversely, if \(A\) is not invertible, then it is row equivalent to a matrix with a zero row. Again, row operations do not change whether the determinant is zero, so in this case \(\det(A) = 0.\)

By the invertibility property, a matrix that does not satisfy any of the properties of Theorem 3.6.1 in Section 3.6 has zero determinant.

Corollary \(\PageIndex\)

Let \(A\) be a square matrix. If the rows or columns of \(A\) are linearly dependent, then \(\det(A)=0\).

Proof

If the columns of \(A\) are linearly dependent, then \(A\) is not invertible by condition 4 of Theorem 3.6.1 in Section 3.6. Suppose now that the rows of \(A\) are linearly dependent. If \(r_1,r_2,\ldots,r_n\) are the rows of \(A\), then one of the rows is in the span of the others, so we have an equation like

\[ r_2 = 3r_1 - r_3 + 2r_4. \nonumber \]

If we perform the following row operations on \(A\):

\[ R_2 = R_2 - 3R_1;\quad R_2 = R_2 + R_3;\quad R_2 = R_2 - 2R_4 \nonumber \]

then the second row of the resulting matrix is zero. Hence \(A\) is not invertible in this case either, so \(\det(A)=0\) by the invertibility property.

Alternatively, if the rows of \(A\) are linearly dependent, then one can combine condition 4 of Theorem 3.6.1 in Section 3.6 and the transpose property, Proposition \(\PageIndex\) below, to conclude that \(\det(A)=0\).

In particular, if two rows/columns of \(A\) are multiples of each other, then \(\det(A)=0.\) We also recover the fact that a matrix with a row or column of zeros has determinant zero.
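As a numerical spot check, assuming NumPy, a matrix whose second row is a multiple of its first has determinant zero (the matrix below is an arbitrary choice):

```python
import numpy as np

# The second row is twice the first, so the rows are linearly dependent.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 5.0]])
print(np.linalg.det(A))   # 0.0 (up to floating-point roundoff)
```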

Example \(\PageIndex\)

The following matrices all have zero determinant:

The proofs of the multiplicativity property, Proposition \(\PageIndex\), and the transpose property, Proposition \(\PageIndex\), below, as well as the cofactor expansion theorem, Theorem 4.2.1 in Section 4.2, and the determinants and volumes theorem, Theorem 4.3.2 in Section 4.3, use the following strategy: define another function \(d\colon\{\text{square matrices}\}\to\mathbb{R}\), and prove that \(d\) satisfies the same four defining properties as the determinant. By the existence theorem, Theorem \(\PageIndex\), the function \(d\) is equal to the determinant. This is an advantage of defining a function via its properties: in order to prove it is equal to another function, one only has to check the defining properties.

Proposition \(\PageIndex\): Multiplicativity Property

If \(A\) and \(B\) are \(n\times n\) matrices, then

\[ \det(AB) = \det(A)\det(B). \nonumber \]

Proof

In this proof, we need to use the notion of an elementary matrix. This is a matrix obtained by doing one row operation to the identity matrix. There are three kinds of elementary matrices: those arising from row replacements, row scalings, and row swaps.

The important property of elementary matrices is the following claim.

Claim: If \(E\) is the elementary matrix for a row operation, then \(EA\) is the matrix obtained by performing the same row operation on \(A\).

In other words, left-multiplication by an elementary matrix applies a row operation. For example,

\[ \left(\begin{array}{cc}0&1\\1&0\end{array}\right)\left(\begin{array}{cc}a&b\\c&d\end{array}\right)=\left(\begin{array}{cc}c&d\\a&b\end{array}\right)\qquad \left(\begin{array}{cc}1&0\\-2&1\end{array}\right)\left(\begin{array}{cc}a&b\\c&d\end{array}\right)=\left(\begin{array}{cc}a&b\\c-2a&d-2b\end{array}\right). \nonumber \]
The proof of the Claim is by direct calculation; we leave it to the reader to generalize the above equalities to \(n\times n\) matrices.
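The Claim can also be tested numerically. A minimal sketch, assuming NumPy, for the row replacement \(R_3 = R_3 + 2R_1\):

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # arbitrary 3x3 matrix

# Elementary matrix for the row replacement R3 = R3 + 2*R1:
E = np.eye(3)
E[2, 0] = 2.0

B = A.copy()
B[2] += 2 * B[0]               # do the row operation directly on A

assert np.allclose(E @ A, B)   # left-multiplying by E does the same thing
```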

As a consequence of the Claim and the four defining properties, Definition \(\PageIndex\), we have the following observation. Let \(C\) be any square matrix.

  1. If \(E\) is the elementary matrix for a row replacement, then \(\det(EC) = \det(C).\) In other words, left-multiplication by \(E\) does not change the determinant.
  2. If \(E\) is the elementary matrix for a row scale by a factor of \(c\text\) then \(\det(EC) = c\det(C).\) In other words, left-multiplication by \(E\) scales the determinant by a factor of \(c\).
  3. If \(E\) is the elementary matrix for a row swap, then \(\det(EC) = -\det(C).\) In other words, left-multiplication by \(E\) negates the determinant.

Now we turn to the proof of the multiplicativity property. Suppose to begin that \(B\) is not invertible. Then \(AB\) is also not invertible: otherwise, \((AB)^{-1}AB = I_n\) implies \(B^{-1} = (AB)^{-1}A.\) By the invertibility property, Proposition \(\PageIndex\), both sides of the equation \(\det(AB) = \det(A)\det(B)\) are zero.

Now assume that \(B\) is invertible, so \(\det(B)\neq 0\). Define a function

\[ d(C) = \frac{\det(CB)}{\det(B)}. \nonumber \]

We claim that \(d\) satisfies the four defining properties of the determinant.

  1. Let \(C'\) be the matrix obtained by doing a row replacement on \(C\), and let \(E\) be the elementary matrix for this row replacement, so \(C' = EC\). Since left-multiplication by \(E\) does not change the determinant, we have \(\det(ECB) = \det(CB)\), so
    \[ d(C') = \frac{\det(C'B)}{\det(B)} = \frac{\det(ECB)}{\det(B)} = \frac{\det(CB)}{\det(B)} = d(C). \nonumber \]
  2. Let \(C'\) be the matrix obtained by scaling a row of \(C\) by a factor of \(c\), and let \(E\) be the elementary matrix for this row scaling, so \(C' = EC\). Since left-multiplication by \(E\) scales the determinant by a factor of \(c\), we have \(\det(ECB) = c\det(CB)\), so
    \[ d(C') = \frac{\det(C'B)}{\det(B)} = \frac{\det(ECB)}{\det(B)} = \frac{c\det(CB)}{\det(B)} = c\cdot d(C). \nonumber \]
  3. Let \(C'\) be the matrix obtained by swapping two rows of \(C\), and let \(E\) be the elementary matrix for this row swap, so \(C' = EC\). Since left-multiplication by \(E\) negates the determinant, we have \(\det(ECB) = -\det(CB)\), so
    \[ d(C') = \frac{\det(C'B)}{\det(B)} = \frac{\det(ECB)}{\det(B)} = \frac{-\det(CB)}{\det(B)} = -d(C). \nonumber \]
  4. We have
    \[ d(I_n) = \frac{\det(I_nB)}{\det(B)} = \frac{\det(B)}{\det(B)} = 1. \nonumber \]

Since \(d\) satisfies the four defining properties of the determinant, it is equal to the determinant by the existence theorem, Theorem \(\PageIndex\). In other words, for all matrices \(A\), we have

\[ d(A) = \frac{\det(AB)}{\det(B)} = \det(A). \nonumber \]

Multiplying through by \(\det(B)\) gives \(\det(A)\det(B)=\det(AB).\)
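As a numerical spot check of the multiplicativity property, assuming NumPy (the random matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) * det(B), up to floating-point roundoff.
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```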

Recall that taking a power of a square matrix \(A\) means taking products of \(A\) with itself:

\[ A^2 = AA, \qquad A^3 = AAA, \qquad \text{etc.} \nonumber \]

If \(A\) is invertible, then we define

\[ A^{-n} = \underbrace{A^{-1}A^{-1}\cdots A^{-1}}_{n\text{ times}}. \nonumber \]

For completeness, we set \(A^0 = I_n\) if \(A\neq 0\).

Corollary \(\PageIndex\)

If \(A\) is a square matrix, then

\[ \det(A^n) = \det(A)^n \nonumber \]

for all \(n\geq 1\). If \(A\) is invertible, then the equation holds for all \(n\leq 0\) as well; in particular,

\[ \det(A^{-1}) = \frac{1}{\det(A)}. \nonumber \]
Proof

Using the multiplicativity property, Proposition \(\PageIndex\), we compute

\[ \det(A^2) = \det(AA) = \det(A)\det(A) = \det(A)^2 \nonumber \]

\[ \det(A^3) = \det(AAA) = \det(A)\det(AA) = \det(A)\det(A)\det(A) = \det(A)^3; \nonumber \]

the pattern is clear.

\[ 1 = \det(I_n) = \det(AA^{-1}) = \det(A)\det(A^{-1}) \nonumber \]

by the multiplicativity property, Proposition \(\PageIndex\), and the fourth defining property, Definition \(\PageIndex\), which shows that \(\det(A^{-1}) = \det(A)^{-1}\). Thus \(\det(A^{-n}) = \det\bigl((A^{-1})^n\bigr) = \det(A^{-1})^n = \det(A)^{-n}\).

Example \(\PageIndex\)
Solution

We have \(\det(A) = 4 - 2 = 2\), so

\[ \det(A^{100}) = \det(A)^{100} = 2^{100}. \nonumber \]

Nowhere did we have to compute the \(100\)th power of \(A\). (We will learn an efficient way to do that in Section 5.4.)
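Here is a sketch of this kind of computation, assuming NumPy and a hypothetical stand-in matrix with determinant \(2\) (a smaller exponent keeps the floating-point check reliable):

```python
import numpy as np

A = np.array([[4.0, 2.0], [1.0, 1.0]])   # hypothetical matrix: det = 4 - 2 = 2
d = np.linalg.det(A)

A10 = np.linalg.matrix_power(A, 10)
assert np.isclose(np.linalg.det(A10), d ** 10)   # 2**10 = 1024
```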

Here is another application of the multiplicativity property, Proposition \(\PageIndex\).

Corollary \(\PageIndex\)

Let \(A_1,A_2,\ldots,A_k\) be \(n\times n\) matrices. Then the product \(A_1A_2\cdots A_k\) is invertible if and only if each \(A_i\) is invertible.

Proof

The determinant of the product is the product of the determinants by the multiplicativity property, Proposition \(\PageIndex\):

\[ \det(A_1A_2\cdots A_k) = \det(A_1)\det(A_2)\cdots\det(A_k). \nonumber \]

By the invertibility property, Proposition \(\PageIndex\), this is nonzero if and only if \(A_1A_2\cdots A_k\) is invertible. On the other hand, \(\det(A_1)\det(A_2)\cdots\det(A_k)\) is nonzero if and only if each \(\det(A_i)\neq0\), which means each \(A_i\) is invertible.

Example \(\PageIndex\)

For any number \(n\) we define

Show that the product

\[ A_1 A_2 A_3 A_4 A_5 \nonumber \]

is not invertible.

Solution

When \(n = 2\), the matrix \(A_2\) is not invertible, because its rows are identical:

Hence any product involving \(A_2\) is not invertible.

In order to state the transpose property, we need to define the transpose of a matrix.

Definition \(\PageIndex\): Transpose

The transpose of an \(m\times n\) matrix \(A\) is the \(n\times m\) matrix \(A^T\) whose rows are the columns of \(A\). In other words, the \((i,j)\) entry of \(A^T\) is \(a_{ji}\).


Like inversion, transposition reverses the order of matrix multiplication.

Fact \(\PageIndex\)

Let \(A\) be an \(m\times n\) matrix, and let \(B\) be an \(n\times p\) matrix. Then

\[ (AB)^T = B^TA^T. \nonumber \]

Proof

First suppose that \(A\) is a row vector and \(B\) is a column vector, i.e., \(m = p = 1\). Then

\[\begin{aligned}AB&=\left(\begin{array}{cccc}a_1 &a_2&\cdots &a_n\end{array}\right)\left(\begin{array}{c}b_1\\b_2\\ \vdots\\b_n\end{array}\right)=a_1b_1+a_2b_2+\cdots +a_nb_n \\ &=\left(\begin{array}{cccc}b_1&b_2&\cdots &b_n\end{array}\right)\left(\begin{array}{c}a_1\\a_2\\ \vdots\\a_n\end{array}\right)=B^TA^T.\end{aligned}\]

Now we use the row-column rule for matrix multiplication. Let \(r_1,r_2,\ldots,r_m\) be the rows of \(A\), and let \(c_1,c_2,\ldots,c_p\) be the columns of \(B\), so

\[AB=\left(\begin{array}{c}—r_1— \\ —r_2— \\ \vdots \\ —r_m—\end{array}\right)\left(\begin{array}{cccc}|&|&\quad &| \\ c_1&c_2&\cdots &c_p \\ |&|&\quad &|\end{array}\right)=\left(\begin{array}{cccc}r_1c_1&r_1c_2&\cdots &r_1c_p \\ r_2c_1&r_2c_2&\cdots &r_2c_p \\ \vdots &\vdots &{}&\vdots \\ r_mc_1&r_mc_2&\cdots &r_mc_p\end{array}\right).\nonumber\]

By the case we handled above, we have \(r_ic_j = c_j^Tr_i^T\). Then

\[\begin{aligned}(AB)^T&=\left(\begin{array}{cccc}r_1c_1&r_2c_1&\cdots &r_mc_1 \\ r_1c_2&r_2c_2&\cdots &r_mc_2 \\ \vdots &\vdots &{}&\vdots \\ r_1c_p&r_2c_p&\cdots &r_mc_p\end{array}\right) \\ &=\left(\begin{array}{cccc}c_1^Tr_1^T &c_1^Tr_2^T&\cdots &c_1^Tr_m^T \\ c_2^Tr_1^T&c_2^Tr_2^T&\cdots &c_2^Tr_m^T \\ \vdots&\vdots&{}&\vdots \\ c_p^Tr_1^T&c_p^Tr_2^T&\cdots&c_p^Tr_m^T\end{array}\right) \\ &=\left(\begin{array}{c}—c_1^T— \\ —c_2^T— \\ \vdots \\ —c_p^T—\end{array}\right)\left(\begin{array}{cccc}|&|&\quad&| \\ r_1^T&r_2^T&\cdots&r_m^T \\ |&|&\quad&|\end{array}\right)=B^TA^T.\end{aligned}\]
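As a numerical spot check of this fact, assuming NumPy (the shapes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))

# (AB)^T = B^T A^T
assert np.allclose((A @ B).T, B.T @ A.T)
```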

Proposition \(\PageIndex\): Transpose Property

For any square matrix \(A\), we have

\[ \det(A) = \det(A^T). \nonumber \]

Proof

We follow the same strategy as in the proof of the multiplicativity property, Proposition \(\PageIndex\): namely, we define

\[ d(A) = \det(A^T), \nonumber \]

and we show that \(d\) satisfies the four defining properties of the determinant. Again we use elementary matrices, also introduced in the proof of the multiplicativity property, Proposition \(\PageIndex\).

  1. Let \(C'\) be the matrix obtained by doing a row replacement on \(C\), and let \(E\) be the elementary matrix for this row replacement, so \(C' = EC\). The elementary matrix for a row replacement is either upper-triangular or lower-triangular, with ones on the diagonal: \[R_1=R_1+3R_3:\:\left(\begin{array}{ccc}1&0&3\\0&1&0\\0&0&1\end{array}\right)\qquad R_3=R_3+3R_1:\:\left(\begin{array}{ccc}1&0&0\\0&1&0\\3&0&1\end{array}\right).\nonumber\]
    It follows that \(E^T\) is also either upper-triangular or lower-triangular, with ones on the diagonal, so \(\det(E^T) = 1\) by Proposition \(\PageIndex\). By Fact \(\PageIndex\) and the multiplicativity property, Proposition \(\PageIndex\), \[ \begin{aligned} d(C') &= \det((C')^T) = \det((EC)^T) = \det(C^TE^T) \\ &= \det(C^T)\det(E^T) = \det(C^T) = d(C). \end{aligned} \nonumber \]
  2. Let \(C'\) be the matrix obtained by scaling a row of \(C\) by a factor of \(c\), and let \(E\) be the elementary matrix for this row scaling, so \(C' = EC\). Then \(E\) is a diagonal matrix: \[R_2=cR_2:\: \left(\begin{array}{ccc}1&0&0\\0&c&0\\0&0&1\end{array}\right).\nonumber\] Thus \(\det(E^T) = c\). By Fact \(\PageIndex\) and the multiplicativity property, Proposition \(\PageIndex\), \[ \begin{aligned} d(C') &= \det((C')^T) = \det((EC)^T) = \det(C^TE^T) \\ &= \det(C^T)\det(E^T) = c\det(C^T) = c\cdot d(C). \end{aligned} \nonumber \]
  3. Let \(C'\) be the matrix obtained by swapping two rows of \(C\), and let \(E\) be the elementary matrix for this row swap, so \(C' = EC\). Then \(E\) is equal to its own transpose: \[R_1\longleftrightarrow R_2:\:\left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&1\end{array}\right)=\left(\begin{array}{ccc}0&1&0\\1&0&0\\0&0&1\end{array}\right)^T.\nonumber\] Since \(E\) (hence \(E^T\)) is obtained by performing one row swap on the identity matrix, we have \(\det(E^T) = -1\). By Fact \(\PageIndex\) and the multiplicativity property, Proposition \(\PageIndex\), \[ \begin{aligned} d(C') &= \det((C')^T) = \det((EC)^T) = \det(C^TE^T) \\ &= \det(C^T)\det(E^T) = -\det(C^T) = -d(C). \end{aligned} \nonumber \]
  4. Since \(I_n^T = I_n\), we have \[ d(I_n) = \det(I_n^T) = \det(I_n) = 1. \nonumber \] Since \(d\) satisfies the four defining properties of the determinant, it is equal to the determinant by the existence theorem, Theorem \(\PageIndex\). In other words, for all matrices \(A\), we have \[ \det(A) = d(A) = \det(A^T). \nonumber \]

The transpose property, Proposition \(\PageIndex\), is very useful. For concreteness, we note that \(\det(A)=\det(A^T)\) means, for instance, that

\[ \det\left(\begin{array}{ccc}1&2&3\\4&5&6\\7&8&9\end{array}\right)=\det\left(\begin{array}{ccc}1&4&7\\2&5&8\\3&6&9\end{array}\right). \nonumber \]

This implies that the determinant has the curious feature that it also behaves well with respect to column operations. Indeed, a column operation on \(A\) is the same as a row operation on \(A^T\), and \(\det(A) = \det(A^T)\).

Corollary \(\PageIndex\)

The determinant satisfies the following properties with respect to column operations:

  1. Doing a column replacement on \(A\) does not change \(\det(A)\).
  2. Scaling a column of \(A\) by a scalar \(c\) multiplies the determinant by \(c\).
  3. Swapping two columns of a matrix multiplies the determinant by \(-1\).

The previous corollary makes it easier to compute the determinant: one is allowed to do row and column operations when simplifying the matrix. (Of course, one still has to keep track of how the row and column operations change the determinant.)
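Column operations are as easy to experiment with as row operations. A minimal sketch, assuming NumPy and an arbitrarily chosen matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 4.0],
              [0.0, 3.0, 5.0],
              [2.0, 1.0, 6.0]])

# A column replacement (C3 = C3 - 4*C1) leaves the determinant unchanged.
B = A.copy()
B[:, 2] -= 4 * B[:, 0]
assert np.isclose(np.linalg.det(B), np.linalg.det(A))

# Swapping two columns negates the determinant.
C = A[:, [1, 0, 2]]
assert np.isclose(np.linalg.det(C), -np.linalg.det(A))
```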

Example \(\PageIndex\)
Solution

It takes fewer column operations than row operations to make this matrix upper-triangular:

We performed two column replacements, which do not change the determinant; therefore,

Multilinearity

The following observation is useful for theoretical purposes.

We can think of \(\det\) as a function of the rows of a matrix:

\[ \det(v_1,v_2,\ldots,v_n) = \det\left(\begin{array}{c}—v_1— \\ —v_2— \\ \vdots \\ —v_n—\end{array}\right). \nonumber \]

Proposition \(\PageIndex\): Multilinearity Property

Let \(i\) be a whole number between \(1\) and \(n\), and fix \(n-1\) vectors \(v_1,v_2,\ldots,v_{i-1},v_{i+1},\ldots,v_n\) in \(\mathbb{R}^n\). Then the transformation \(T\colon\mathbb{R}^n \to\mathbb{R}\) defined by

\[ T(x) = \det(v_1,v_2,\ldots,v_{i-1},x,v_{i+1},\ldots,v_n) \nonumber \]

is linear.

Proof

First assume that \(i=1\), so

\[ T(x) = \det(x,v_2,\ldots,v_n). \nonumber \]

We have to show that \(T\) satisfies the defining properties of linearity, Definition 3.3.1 in Section 3.3.

For \(i\neq 1\), we note that

\[ \begin{aligned} T(x) &= \det(v_1,v_2,\ldots,v_{i-1},x,v_{i+1},\ldots,v_n) \\ &= -\det(x,v_2,\ldots,v_{i-1},v_1,v_{i+1},\ldots,v_n). \end{aligned} \nonumber \]

By the previously handled case, we know that \(-T\) is linear:

\[ -T(cx) = -cT(x) \qquad -T(v+w) = -T(v) - T(w). \nonumber \]

Multiplying both sides by \(-1\), we see that \(T\) is linear.
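The linearity of \(T\) can be illustrated numerically. A minimal sketch, assuming NumPy, with hypothetical fixed rows \(v_2\) and \(v_3\):

```python
import numpy as np

V2 = np.array([0.0, 1.0, 0.0])   # hypothetical fixed rows
V3 = np.array([2.0, 0.0, 1.0])

def T(x):
    """det as a function of the first row, with the other rows fixed."""
    return np.linalg.det(np.vstack([x, V2, V3]))

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
assert np.isclose(T(v + w), T(v) + T(w))   # additive in the first row
assert np.isclose(T(7 * v), 7 * T(v))      # homogeneous in the first row
```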

For example, we have

\[ \det\left(\begin{array}{cc}a+a'&b+b'\\c&d\end{array}\right)=\det\left(\begin{array}{cc}a&b\\c&d\end{array}\right)+\det\left(\begin{array}{cc}a'&b'\\c&d\end{array}\right). \nonumber \]

By the transpose property, Proposition \(\PageIndex\), the determinant is also multilinear in the columns of a matrix.

Remark: Alternative defining properties

In more theoretical treatments of the topic, where row reduction plays a secondary role, the defining properties of the determinant are often taken to be:

  1. The determinant \(\det(A)\) is multilinear in the rows of \(A\).
  2. If \(A\) has two identical rows, then \(\det(A) = 0\).
  3. The determinant of the identity matrix is equal to one.

We have already shown that our four defining properties, Definition \(\PageIndex\), imply these three. Conversely, we will prove that these three alternative properties imply our four, so that both sets of properties are equivalent.

Our second defining property, scaling a row, is part of multilinearity: it is the second defining property of a linear transformation, Definition 3.3.1 in Section 3.3, applied in each row. Suppose that the rows of \(A\) are \(v_1,v_2,\ldots,v_n\). If we perform the row replacement \(R_i = R_i + cR_j\) on \(A\), then the rows of our new matrix are \(v_1,v_2,\ldots,v_{i-1},v_i+cv_j,v_{i+1},\ldots,v_n\), so by linearity in the \(i\)th row,

\[ \begin{aligned} \det(&v_1,v_2,\ldots,v_{i-1},v_i+cv_j,v_{i+1},\ldots,v_n) \\ &= \det(v_1,v_2,\ldots,v_{i-1},v_i,v_{i+1},\ldots,v_n) + c\det(v_1,v_2,\ldots,v_{i-1},v_j,v_{i+1},\ldots,v_n) \\ &= \det(v_1,v_2,\ldots,v_{i-1},v_i,v_{i+1},\ldots,v_n) = \det(A), \end{aligned} \nonumber \]

where \(\det(v_1,v_2,\ldots,v_{i-1},v_j,v_{i+1},\ldots,v_n)=0\) because \(v_j\) is repeated. Thus, the alternative defining properties imply our first two defining properties. For the third, suppose that we want to swap row \(i\) with row \(j\). Using the second alternative defining property and multilinearity in the \(i\)th and \(j\)th rows, we have

\[ \begin{aligned} 0 &= \det(v_1,\ldots,v_i+v_j,\ldots,v_i+v_j,\ldots,v_n) \\ &= \det(v_1,\ldots,v_i,\ldots,v_i+v_j,\ldots,v_n) + \det(v_1,\ldots,v_j,\ldots,v_i+v_j,\ldots,v_n) \\ &= \det(v_1,\ldots,v_i,\ldots,v_i,\ldots,v_n) + \det(v_1,\ldots,v_i,\ldots,v_j,\ldots,v_n) \\ &\qquad+\det(v_1,\ldots,v_j,\ldots,v_i,\ldots,v_n) + \det(v_1,\ldots,v_j,\ldots,v_j,\ldots,v_n) \\ &= \det(v_1,\ldots,v_i,\ldots,v_j,\ldots,v_n) + \det(v_1,\ldots,v_j,\ldots,v_i,\ldots,v_n), \end{aligned} \nonumber \]

so swapping the \(i\)th and \(j\)th rows multiplies the determinant by \(-1\). Finally, the third alternative property is literally our fourth defining property, so the two sets of properties are equivalent.

Example \(\PageIndex\)

This is the basic idea behind cofactor expansions in Section 4.2.

Note \(\PageIndex\): Summary: Magical Properties of the Determinant
  1. There is one and only one function \(\det\colon\{\text{square matrices}\}\to\mathbb{R}\) satisfying the four defining properties, Definition \(\PageIndex\).
  2. The determinant of an upper-triangular or lower-triangular matrix is the product of the diagonal entries.
  3. A square matrix \(A\) is invertible if and only if \(\det(A)\neq 0\); in this case, \[ \det(A^{-1}) = \frac{1}{\det(A)}. \nonumber \]
  4. If \(A\) and \(B\) are \(n\times n\) matrices, then \[ \det(AB) = \det(A)\det(B). \nonumber \]
  5. For any square matrix \(A\), we have \[ \det(A^T) = \det(A). \nonumber \]
  6. The determinant can be computed by performing row and/or column operations.

This page titled 4.1: Determinants: Definition is shared under a GNU Free Documentation License 1.3 license and was authored, remixed, and/or curated by Dan Margalit & Joseph Rabinoff via source content that was edited to the style and standards of the LibreTexts platform.
