Mathematics 270 Linear Algebra I
Study Guide :: Unit 2
Inverse of a Matrix, Linear Systems and Special Forms of Matrices
In the previous unit we saw that matrix arithmetic has properties analogous to the arithmetic of numbers. The same occurs with the “inverse of a matrix.” As we know, if a real number $a$ is different from zero, then there is an “inverse of $a$,” denoted as $a^{-1}$, which is calculated as $a^{-1} = 1/a$ and has the property that $a\,a^{-1} = a^{-1}a = 1$. The first part of this unit is devoted to finding the analogous case for matrices. In other words, if we are given a matrix $A$, we want to find the conditions on the matrix $A$ and a methodology that allow us to find its inverse, $A^{-1}$. Thus, after reading the first two sections of this unit you will be able, when possible, to calculate the inverse of a matrix and understand that its definition implies that $AA^{-1} = A^{-1}A = I$, where $I$ is the identity matrix.
You may be asking yourself, why do we want to define or study the “inverse,” $A^{-1}$, of a matrix $A$?
The answer to this question is provided in the second part of this unit, where the focus is to explain how the inverse of a matrix is used for the purpose of obtaining information about—or even solving—a system of linear equations.
The focus of this unit, therefore, is the study of the inverse of a matrix, its properties and its relation to systems of linear equations.
Objectives
After completing Unit 2, you should be able to:
- determine whether or not a matrix is invertible;
- know and use the properties of transposes and inverses of matrices;
- prove basic properties involving invertible matrices;
- understand the concept of elementary matrices;
- use row operations to reduce a square matrix to the identity matrix;
- understand the relation between the inverse of a matrix and linear systems;
- understand the Equivalent Statements Theorem;
- find the inverse of a matrix using elementary matrices and row operations;
- solve linear systems using the inverse of the coefficient matrix;
- determine whether a linear system is consistent, and whether it has one solution, infinitely many solutions or no solutions;
- understand what diagonal, triangular and symmetric matrices are; and
- construct and analyze a Leontief Input-Output model given the information about an economy.
2.1 Inverse of a Matrix and Its Properties
This section introduces the definition of the inverse of a matrix and some of its algebraic properties. In particular, given a square matrix $A$ of size $n \times n$, if there exists a matrix $B$ of the same size such that $AB = BA = I$, where $I$ denotes the $n \times n$ identity matrix, then we say that $A$ is “invertible” and its “inverse” is the matrix $B$, which is denoted as $A^{-1}$.
From this definition there are three aspects worth noticing. The first one is that the definition of an inverse only refers to square matrices. The second aspect is that even if a matrix is square, there is no guarantee that the inverse of that matrix exists. Finally, the identity matrix plays the role analogous to that of the number $1$ in the real numbers (since $a \cdot 1 = 1 \cdot a = a$). This role was already pointed out in the last section of Unit 1, where it was noted that the identity matrix, $I$, can be seen as analogous to the number $1$ in the sense that multiplying any matrix $A$ by the identity results in the matrix $A$ itself ($AI = IA = A$).
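For instance (using a small pair of matrices chosen here purely for illustration, not taken from the textbook), you can check the definition directly:
\[
A = \begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix}, \qquad
B = \begin{pmatrix} 3 & -2 \\ -1 & 1 \end{pmatrix}, \qquad
AB = BA = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I,
\]
so this particular $A$ is invertible and $A^{-1} = B$.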
Reading Assignment
Read, and study, pages A1–A4 (“Appendix A” after p. 713), and pages 43–49 of the textbook (to “Exercise Set 1.4”).
Since the properties of the inverse of a matrix are stated as theorems and properly justified by proofs, the first part of the reading in “Appendix A” of the textbook aims to help you understand, and become familiar with, the basic structure of theorems. Keep in mind that the information given in this appendix will be useful throughout the whole course.
The reading also introduces you to the conditions under which a matrix of size $2 \times 2$ is invertible and offers you a simple formula for finding its inverse. Specifically, Anton and Rorres present the following theorem (p. 45):
Theorem 1. The matrix
\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]
is invertible if and only if $ad - bc \neq 0$, in which case the inverse is given by
\[
A^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. \tag{2.1}
\]
Notice that the only condition for the inverse of a $2 \times 2$ matrix to exist is that $ad - bc \neq 0$. The quantity $ad - bc$ is called the determinant of A and is denoted as
\[
\det(A) = ad - bc. \tag{2.2}
\]
Thus, Theorem 1 can be restated as
Theorem 2. A $2 \times 2$ matrix $A$ is invertible if and only if $\det(A) \neq 0$, in which case the inverse is given by
\[
A^{-1} = \frac{1}{\det(A)}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. \tag{2.3}
\]
Historically, Equation (2.3) for the inverse of a $2 \times 2$ matrix was introduced in the second half of the nineteenth century by the British mathematician Arthur Cayley, and even though Theorem 2 and Equation (2.3) are presented without proof in this section, you will use them to calculate the inverse of $2 \times 2$ matrices and to solve systems of two linear equations with two unknowns.
Specifically, you will see in “Example 8” of the textbook (p. 45) that if the determinant of the coefficient matrix $A$ of a system of two linear equations with two unknowns is not equal to zero (i.e., $A$ is invertible according to Theorem 2), then the system of equations has a unique solution. This is the reason for the name “determinant” of $A$: it is this quantity, $\det(A)$, that “determines” whether or not the system has a unique solution.
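To see how Theorem 2 is used in practice, here is a short worked example with a matrix chosen for illustration (it is not one of the textbook’s examples):
\[
A = \begin{pmatrix} 3 & 1 \\ 5 & 2 \end{pmatrix}, \qquad
\det(A) = (3)(2) - (1)(5) = 1 \neq 0, \qquad
A^{-1} = \frac{1}{1}\begin{pmatrix} 2 & -1 \\ -5 & 3 \end{pmatrix}
= \begin{pmatrix} 2 & -1 \\ -5 & 3 \end{pmatrix}.
\]
Since $\det(A) \neq 0$, the corresponding system $3x + y = 1$, $5x + 2y = 0$ has a unique solution (namely $x = 2$, $y = -5$, which you can verify by substitution).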
Exercises
Work on the following textbook exercises from “Exercise Set 1.4” (pp. 49–51):
- 3, 5, 7, 11, 13, 15, 16, 17, 19, 21, 25, 27, 31, 33(a); and
- “True-False Exercises” (a, b, d, e, f).
2.2 A Method for Finding the Inverse
The main objective of this section is to provide a step-by-step method for finding the inverse of a given square matrix of any size. This methodology can be summarized in the following theorem:
Theorem 3. If a square matrix $A$ of size $n \times n$ can be reduced to the identity matrix $I_n$ using elementary row operations (i.e., the reduced row echelon form of $A$ is $I_n$), then the inverse $A^{-1}$ can be found by applying the same sequence of row operations to the identity matrix $I_n$.
To illustrate this methodology, let’s suppose we want to find the inverse of the matrix
\[
A = \begin{pmatrix} 2 & 0 \\ -2 & 1 \end{pmatrix}. \tag{2.4}
\]
We know that if we multiply the first row of $A$ by $\tfrac{1}{2}$ (row operation $r_1$) and then multiply the first row of the resulting matrix by 2 and add it to the second row (row operation $r_2$), we obtain the identity matrix $I_2$ as the reduced row echelon form of $A$. In other words,
\[
\begin{pmatrix} 2 & 0 \\ -2 & 1 \end{pmatrix}
\xrightarrow{\;r_1\;}
\begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}
\xrightarrow{\;r_2\;}
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2. \tag{2.5}
\]
According to Theorem 3, we can find the inverse $A^{-1}$ by applying these same row operations ($r_1$ and $r_2$) to the identity matrix $I_2$. Thus,
\[
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\xrightarrow{\;r_1\;}
\begin{pmatrix} \tfrac{1}{2} & 0 \\ 0 & 1 \end{pmatrix}
\xrightarrow{\;r_2\;}
\begin{pmatrix} \tfrac{1}{2} & 0 \\ 1 & 1 \end{pmatrix} = A^{-1}. \tag{2.6}
\]
You can easily verify in this example that $AA^{-1} = A^{-1}A = I_2$.
Notice that we apply the same elementary row operations to both the matrix $A$ and the identity matrix $I$. Thus, in order to apply these operations to both matrices at the same time, the results of the procedure for finding the inverse are typically presented in the following form:
\[
\bigl[\,A \mid I\,\bigr]
\;\xrightarrow{\text{row operations}}\;
\bigl[\,I \mid A^{-1}\,\bigr].
\]
Elementary Matrices and Equivalent Statements Theorem
In order to provide a formal proof of the step-by-step methodology for finding the inverse of a matrix presented in Theorem 3, it is necessary to introduce the concept of an “elementary matrix,” which is defined as a matrix obtained by applying a single elementary row operation to an identity matrix. For example, the matrix
\[
E_1 = \begin{pmatrix} \tfrac{1}{2} & 0 \\ 0 & 1 \end{pmatrix} \tag{2.7}
\]
appearing in Equation (2.6) is an elementary matrix, obtained by multiplying the first row of the identity matrix $I_2$ by $\tfrac{1}{2}$.
Now, with the definition of an elementary matrix, we are able to establish an equivalence between applying a row operation to a given matrix $A$ and multiplying $A$ on the left by the corresponding elementary matrix (the one associated with that row operation). To illustrate this, consider, for example, the matrix $A$ given by Equation (2.4). By multiplying its first row by $\tfrac{1}{2}$ (operation $r_1$), we obtain a matrix $B$, i.e.,
\[
A = \begin{pmatrix} 2 & 0 \\ -2 & 1 \end{pmatrix}
\xrightarrow{\;r_1\;}
B = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}. \tag{2.8}
\]
Notice now that this matrix $B$ can also be obtained by multiplying the matrix $A$ on the left by the elementary matrix $E_1$ given by Equation (2.7) (the matrix obtained by multiplying the first row of the identity matrix by $\tfrac{1}{2}$), i.e.,
\[
E_1 A = \begin{pmatrix} \tfrac{1}{2} & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 2 & 0 \\ -2 & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix} = B. \tag{2.9}
\]
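Continuing with this same illustrative matrix (a brief sketch only; the complete argument is developed in the assigned reading), the second row operation $r_2$ also has an elementary matrix, and multiplying the elementary matrices together shows both why the method of Theorem 3 works and what it means to write an invertible matrix as a product of elementary matrices:
\[
E_2 = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}, \qquad
E_2 E_1 A = I_2
\quad\Longrightarrow\quad
A^{-1} = E_2 E_1 = \begin{pmatrix} \tfrac{1}{2} & 0 \\ 1 & 1 \end{pmatrix}
\quad\text{and}\quad
A = E_1^{-1} E_2^{-1}.
\]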
With this equivalent relationship in mind, you will be able to understand the proof of the following theorem presented in the textbook (p. 53):
Theorem 4. Equivalent Statements
If $A$ is an $n \times n$ matrix, then the following statements are equivalent (they are all true or all false):
- $A$ is invertible.
- $A\mathbf{x} = \mathbf{0}$ has only the trivial solution.
- The reduced row echelon form of $A$ is $I_n$.
- $A$ can be expressed as a product of elementary matrices.
After completing the assigned reading you will see that the proof of this theorem actually contains the methodology for finding the inverse of a matrix explained at the beginning of this section. In other words, it contains the proof of Theorem 3.
The importance of Theorem 4 stems from the fact that it groups together many of the linear algebra topics studied in this course. So far, it connects results about the invertibility of matrices, homogeneous linear systems, reduced row echelon forms, and elementary matrices. We will be able to keep adding statements to it as we advance through the course. We will, for example, be able to add results related to nonhomogeneous linear systems and determinants of matrices. Thus, it is very important to keep this theorem in mind throughout the course. It will serve as an excellent skeleton on which to put the muscle of our linear algebra knowledge.
Reading Assignment
Read, and study, pages 52–58 of the textbook (to “Exercise Set 1.5”). This reading provides all the necessary details to understand and prove the Equivalent Statements Theorem 4 and, therefore, the methodology, given by Theorem 3, for finding the inverse of a matrix. You will have the opportunity to see, with examples, how the results are applied.
Exercises
Work on the following textbook exercises from “Exercise Set 1.5” (pp. 58–60):
- 1(a, b, d), 3, 5(a, b), 7(a, d), 9, 11, 15, 17, 19, 20, 21, 23, 25, 27, 33, 35, 37; and
- “True-False Exercises” (a, b, c, d, f).
2.3 Linear Systems and the Inverse of a Matrix
Now that we have some knowledge of the inverse of a matrix and how to calculate it, we are able to come back to the question posed in the introduction of this unit: why do we want to define or study the “inverse,” $A^{-1}$, of a matrix $A$?
The main reason is that its definition, its existence (or non-existence) and even its properties provide us with useful information about the solution of a system of linear equations.
To illustrate briefly the importance of the inverse of a matrix, imagine that you want to solve the linear system given in its matrix form by Equation (1.8), where the coefficient matrix $A$ is square. In other words, imagine that you want to solve the following linear system with the same number of equations as unknowns:
\[
A\mathbf{x} = \mathbf{b}, \tag{2.10}
\]
where $A$ is $n \times n$, $\mathbf{x}$ is the $n \times 1$ matrix of unknowns and $\mathbf{b}$ is an $n \times 1$ matrix of constants. If we could find the inverse $A^{-1}$ of the coefficient matrix $A$, we could then multiply both sides of Equation (2.10) by $A^{-1}$ and obtain
\[
A^{-1}A\mathbf{x} = A^{-1}\mathbf{b}. \tag{2.11}
\]
Now, noticing that $A^{-1}A = I$, Equation (2.11) becomes
\[
I\mathbf{x} = A^{-1}\mathbf{b}, \tag{2.12}
\]
or, equivalently,
\[
\mathbf{x} = A^{-1}\mathbf{b}. \tag{2.13}
\]
Thus, if the inverse $A^{-1}$ of the coefficient matrix exists and can be found, then the linear system given by Equation (2.10) can be solved, and its solution is given explicitly by Equation (2.13). This offers us another statement to add to the Equivalent Statements Theorem 4. In fact, after completing the corresponding reading, you will be able to prove the next extended version of the Equivalent Statements Theorem.
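As a small concrete illustration of Equation (2.13) (with a system chosen here purely for illustration), consider
\[
\begin{aligned}
2x_1 + x_2 &= 3 \\
x_1 + x_2 &= 2
\end{aligned}
\qquad\Longleftrightarrow\qquad
\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix} 3 \\ 2 \end{pmatrix},
\qquad
\mathbf{x} = A^{-1}\mathbf{b}
= \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}
\begin{pmatrix} 3 \\ 2 \end{pmatrix}
= \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
\]
Substituting $x_1 = 1$ and $x_2 = 1$ into both equations confirms the solution.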
Theorem 5. Equivalent Statements
If $A$ is an $n \times n$ matrix, then the following statements are equivalent (they are all true or all false):
- $A$ is invertible.
- $A\mathbf{x} = \mathbf{0}$ has only the trivial solution.
- The reduced row echelon form of $A$ is $I_n$.
- $A$ can be expressed as a product of elementary matrices.
- $A\mathbf{x} = \mathbf{b}$ is consistent for every $n \times 1$ matrix $\mathbf{b}$.
- $A\mathbf{x} = \mathbf{b}$ has exactly one solution for every $n \times 1$ matrix $\mathbf{b}$.
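By contrast, when $A$ is not invertible, $A\mathbf{x} = \mathbf{b}$ is consistent only for certain vectors $\mathbf{b}$. A small example (with a matrix chosen here for illustration, in the spirit of the textbook’s “Examples 3 and 4”) shows how such consistency conditions arise:
\[
A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}, \qquad
\left(\begin{array}{cc|c} 1 & 2 & b_1 \\ 2 & 4 & b_2 \end{array}\right)
\;\longrightarrow\;
\left(\begin{array}{cc|c} 1 & 2 & b_1 \\ 0 & 0 & b_2 - 2b_1 \end{array}\right),
\]
so the system is consistent if and only if $b_2 = 2b_1$ (in which case it has infinitely many solutions); accordingly, this $A$ is not invertible, in agreement with Theorem 5.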
Reading Assignment
Read, and study, pages 61–66 of the textbook (to “Exercise Set 1.6”). This textbook reading guides you through the fascinating relation between the inverse of a matrix and a system of linear equations. Pay close attention to the examples for determining consistency conditions in the textbook (“Examples 3 and 4”, pp. 65–66).
Exercises
Work on the following textbook exercises from “Exercise Set 1.6” (pp. 66–67):
- 1, 3, 5, 7, 9, 13, 15, 18, 19, 21, 23; and
- “True-False Exercises” (a, b, f).
2.4 Special Forms of Matrices
This section introduces three special forms of matrices that appear commonly in linear algebra applications (a small example of each form follows the list below):
- Diagonal matrices.
- Triangular matrices.
- Symmetric matrices.
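For quick reference, here is one small matrix of each form (the entries are chosen arbitrarily for illustration; the precise definitions are given in the reading):
\[
D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 3 \end{pmatrix}
\;(\text{diagonal}),
\qquad
U = \begin{pmatrix} 1 & 4 & 5 \\ 0 & 2 & 6 \\ 0 & 0 & 3 \end{pmatrix}
\;(\text{upper triangular}),
\qquad
S = \begin{pmatrix} 1 & 7 & 3 \\ 7 & 4 & -5 \\ 3 & -5 & 6 \end{pmatrix}
\;(\text{symmetric: } S^{T} = S).
\]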
Reading Assignment
Read, and study, pages 67–72 of the textbook (to “Exercise Set 1.7”).
These pages introduce the definitions and properties of the diagonal, triangular and symmetric matrices.
Exercises
Work on the following textbook exercises from “Exercise Set 1.7” (pp. 72–75):
- 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 26, 27, 29, 31, 35, 41, 47; and
- “True-False Exercises” (a, b, c, d, h, i, j, m).
2.5 Applications of Linear Algebra to Economic Systems: Leontief Economic Models
This section introduces a fascinating example of how linear algebra can be applied to the field of economics. In particular, the section discusses the Leontief Input-Output Models, developed by the Russian-American economist Wassily Leontief, linear algebra models that earned him a Nobel prize in 1973. Leontief’s analysis is based on the division of an economy into sectors, which, on the one hand, produce “outputs” and, on the other hand, require “inputs” from other sectors. This information, organized in matrices, allows us to study the influence that the different sectors of an economy have on one another.
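To preview the kind of computation involved (using hypothetical numbers for a two-sector open economy, not taken from the textbook): if $C$ is the consumption (input-output) matrix and $\mathbf{d}$ is the outside demand, the production vector $\mathbf{x}$ must satisfy $\mathbf{x} = C\mathbf{x} + \mathbf{d}$, that is, $(I - C)\mathbf{x} = \mathbf{d}$, so that $\mathbf{x} = (I - C)^{-1}\mathbf{d}$ whenever $I - C$ is invertible. For example,
\[
C = \begin{pmatrix} 0.5 & 0.2 \\ 0.3 & 0.4 \end{pmatrix}, \quad
\mathbf{d} = \begin{pmatrix} 50 \\ 30 \end{pmatrix}, \quad
I - C = \begin{pmatrix} 0.5 & -0.2 \\ -0.3 & 0.6 \end{pmatrix}, \quad
\mathbf{x} = (I - C)^{-1}\mathbf{d}
= \frac{1}{0.24}\begin{pmatrix} 0.6 & 0.2 \\ 0.3 & 0.5 \end{pmatrix}
\begin{pmatrix} 50 \\ 30 \end{pmatrix}
= \begin{pmatrix} 150 \\ 125 \end{pmatrix}.
\]
In other words, under these assumed consumption rates, the two sectors must produce 150 and 125 units, respectively, in order to satisfy both the internal demands of the sectors and the outside demand.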
Reading Assignment
Read, and study, pages 96–100 of the textbook (to “Exercise Set 1.10”). This reading provides an accessible introduction to the use of linear algebra in analyzing an economy that has been divided into sectors. The linear algebra models arising from this analysis are called, after their developer, Leontief Input-Output Models. The reading illustrates, for example, how an economy can meet its production demands, or under which conditions an economy is productive. Thus, close attention should be paid to the examples provided in the reading.
Exercises
Work on the following textbook exercises from “Exercise Set 1.10” (pp. 100–101):
- 1, 3, 5, 7, 11; and
- “True-False Exercises” (a, b, c, d, e).