Mathematics 270 Linear Algebra I
Study Guide :: Unit 5
General Vector Spaces
In the previous units we learned about the basic properties of vectors in $\mathbb{R}^n$ and defined them as $n$-tuples of real numbers. We called $\mathbb{R}^n$ a vector space, since its elements are precisely vectors that satisfy certain properties. In this unit we generalize this concept and define general vector spaces: sets that are not necessarily equal to $\mathbb{R}^n$, comprising elements called vectors that are not necessarily $n$-tuples, that satisfy certain properties.
One can think of a vector space as a mathematical structure formed by a collection of objects called vectors, which may be added together and multiplied (“scaled”) by numbers called scalars in this context. Scalars are often taken to be real numbers. The operations of vector addition and scalar multiplication must satisfy certain requirements called axioms. These axioms, presented in the textbook on page 184, allow for the definition of a vector space. We will see that, under this general definition, the Euclidean space $\mathbb{R}^n$ is just a particular case of a vector space and that there can be many more vector spaces with vectors that are not $n$-tuples.
We also introduce the concept of a subspace in this unit. Some vector spaces of interest may be contained (as a subset) within a larger vector space whose properties are known. Under some conditions (to be studied in this unit) some subsets of a known vector space may form a subspace.
Some other concepts introduced in this unit include linear independence, basis and dimension.
The unit ends with an introduction to linear transformations and their application to computer graphics.
Objectives
After completing Unit 5, you should be able to:
- determine whether a given set with two operations is a vector space;
- show that a set with two operations is not a vector space by demonstrating that at least one of the vector space axioms fails;
- determine whether a subset of a vector space is a subspace and show that a subset of a vector space is a subspace;
- show that a nonempty subset of a vector space is not a subspace by demonstrating that the set is either not closed under addition or not closed under scalar multiplication;
- given a set $S$ of vectors in $\mathbb{R}^n$ and a vector $\mathbf{v}$ in $\mathbb{R}^n$, determine whether $\mathbf{v}$ is a linear combination of the vectors in $S$;
- given a set $S$ of vectors in $\mathbb{R}^n$, determine whether the vectors in $S$ span $\mathbb{R}^n$;
- determine whether two nonempty sets of vectors in a vector space $V$ span the same subspace of $V$;
- determine whether a set of vectors is linearly independent or linearly dependent;
- express one vector in a linearly dependent set as a linear combination of the other vectors in the set; and
- use the Wronskian to show that a set of functions is linearly independent.
5.1 Real Vector Spaces
This section introduces you to the study of abstract vector spaces, which, as mentioned in the introduction of this unit, are sets not necessarily equal to $\mathbb{R}^n$, comprised of elements called vectors, not necessarily $n$-tuples, that satisfy certain properties.
The theory of linear vector spaces is used in many branches of mathematics, physics and statistics. In chemistry, biology, psychology and even sociology, linear vector spaces play an important role.
To illustrate how some ideas of a general vector space can differ from those of a Euclidean vector space studied in Unit 4, we first recall that in $\mathbb{R}^n$, addition and multiplication by a scalar were defined as
$\mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2, \ldots, u_n + v_n)$, and
$k\mathbf{u} = (ku_1, ku_2, \ldots, ku_n)$, respectively.
For example, in $\mathbb{R}^2$, this means that
$(1, 2) + (3, 4) = (4, 6)$, and
$2(1, 2) = (2, 4)$.
Note that, under this addition and scalar multiplication in $\mathbb{R}^n$, the zero vector is determined by $\mathbf{0} = (0, 0, \ldots, 0)$. Therefore, if we add zero to any vector $\mathbf{u}$ in $\mathbb{R}^n$ then the vector remains the same, i.e.,
$\mathbf{u} + \mathbf{0} = \mathbf{u}$.
Now, we can illustrate, for example, how the idea of addition, scalar multiplication and that of a zero vector can be generalized when studying general vector spaces. From the above example it is clear that a zero vector is defined as a vector $\mathbf{0}$ such that
$\mathbf{u} + \mathbf{0} = \mathbf{u}$ for every vector $\mathbf{u}$.
Under this definition, we consider $\mathbb{R}^2$, but now endowed with a different operation of addition. In particular, let’s define
$(u_1, u_2) + (v_1, v_2) = (u_1 + v_1 + 1, u_2 + v_2 + 1)$.
This means, for example, that under this addition operation
$(1, 2) + (3, 4) = (5, 7)$.
Notice that under this operation of addition the vector $(0, 0)$ does not satisfy the definition of a zero vector, since
$(u_1, u_2) + (0, 0) = (u_1 + 1, u_2 + 1) \neq (u_1, u_2)$.
Therefore $(0, 0)$ is not the zero vector for $\mathbb{R}^2$ endowed with this new addition operation. In fact, you can see that
$(u_1, u_2) + (-1, -1) = (u_1 - 1 + 1, u_2 - 1 + 1) = (u_1, u_2)$,
and, therefore, $(-1, -1)$ is actually the zero vector, i.e.,
$\mathbf{0} = (-1, -1)$.
This statement sounds strange. However, you will become familiar with these ideas in this section. In particular, you will learn how to determine whether a given set forms a vector space using the axioms in the definition of a vector space and the step-by-step procedure given on page 184 of the textbook.
Reading Assignment
Read, and study, pages 183-189 of the textbook.
After reading carefully the Vector Space Axioms in Definition 1 on page 184 and studying the steps to show that a given set with two operations is a vector space, review the examples on pages 185-188. Some statements of these examples may seem unusual, but they will become clear as you work with them. For instance, in Example 8 (page 188) you will find that the zero vector of the given space is $\mathbf{0} = (-1, -1)$. When going through this example, keep in mind that this seemingly strange result is caused by the specific definition of the vector addition and scalar multiplication.
After going through the assigned reading in depth, start working on the following exercises.
Exercises
Work on the following textbook exercises from “Exercise Set 4.1” (pp. 190-191):
- 1, 3, 5, 7, 9, 11, 13, 15 and 27.
5.2 Subspaces
A subspace is a vector space that is contained within another vector space. Therefore, every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space. Moreover, the addition and scalar multiplication operations of a subspace are those of the larger vector space that contains it.
We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections. In particular, consider the subset $W$ of $\mathbb{R}^3$ comprising vectors of the form $(a, a, b)$, where $a$ and $b$ are real numbers, i.e.,
$W = \{(a, a, b) : a, b \in \mathbb{R}\}$.
Thus, $W$ comprises all the elements whose first two components are the same. For example, $(1, 1, 2)$ belongs to $W$, while $(1, 2, 3)$ does not. Does this set have its own zero and inverse elements? Does this set form a vector space on its own? You will be able to answer these questions after completing this section.
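Before proving anything formally, it can help to test the closure properties numerically. The sketch below checks, for a pair of sample vectors, that the set of vectors whose first two components agree is closed under addition and scalar multiplication (an illustration only; the formal argument is in the textbook):

```python
# W = {(a, a, b) : a, b real} -- vectors whose first two components agree.
def in_W(v, tol=1e-12):
    return abs(v[0] - v[1]) < tol

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def scale(k, v):
    return tuple(k * x for x in v)

u, v = (1.0, 1.0, 2.0), (4.0, 4.0, -3.0)
assert in_W(u) and in_W(v)
assert in_W(add(u, v))       # closed under addition
assert in_W(scale(-2.5, u))  # closed under scalar multiplication
```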
Let us consider another example. Recall Example 6 on pages 295-296 of the textbook. In that example it was determined that the vectors
$\mathbf{x} = s\begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} + t\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$,
where $s$ and $t$ are any real numbers, form the eigenspace corresponding to the eigenvalue $\lambda = 2$ of the matrix
$A = \begin{bmatrix} 0 & 0 & -2 \\ 1 & 2 & 1 \\ 1 & 0 & 3 \end{bmatrix}$.
One can show, for example, that this eigenspace is a subspace of $\mathbb{R}^3$. After the assigned reading, you will be asked in the suggested exercises to prove that certain subsets of particular vector spaces are subspaces.
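The closure property behind this claim can be checked numerically. The sketch below uses a simple diagonal matrix chosen for illustration (not the textbook’s example) whose eigenvalue $\lambda = 2$ has a two-dimensional eigenspace, and verifies that a linear combination of two eigenvectors is again an eigenvector:

```python
import numpy as np

# Illustrative matrix: lambda = 2 has a two-dimensional eigenspace
# spanned by v1 and v2.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = 2.0
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

# Any combination s*v1 + t*v2 is again an eigenvector for lambda = 2:
# this closure property is what makes the eigenspace a subspace of R^3.
s, t = 3.0, -1.5
w = s * v1 + t * v2
assert np.allclose(A @ w, lam * w)
```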
Reading Assignment
Read, and study, pages 191-200 of the textbook (to “Exercise Set 4.2”).
Once you have read and understood the definition of a subspace on page 191, review the geometrical interpretation of this concept on pages 192-193 through the examples therein.
Examples 7, 8, 9 and 10 on page 194 will illustrate how the concept of subspace can be applied to a variety of sets, such as polynomial, function and matrix sets.
Read Definition 2 and study Theorem 4.2.3 on page 195 together with its proof. Familiarize yourself with the concept of spanning given in Definition 3 on page 196, and review the concept of standard unit vectors in Example 11.
While Example 12 on pages 196-197 provides a geometrical interpretation of spanning within the Euclidean spaces and , Example 13 shows that this concept can also be applied to sets of elements different from -tuples, such as polynomials. Example 15 on page 198 illustrates how to test a given set of vectors for spanning.
Theorem 4.2.4 and Example 16 on pages 198-199 will show how the solution set of a homogeneous system of $m$ equations in $n$ unknowns is a subspace of $\mathbb{R}^n$. However, this is not true for a nonhomogeneous system of $m$ equations in $n$ unknowns (read the Remark at the end of page 199).
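The subspace property of homogeneous solution sets can be illustrated directly: if $A\mathbf{x}_1 = \mathbf{0}$ and $A\mathbf{x}_2 = \mathbf{0}$, then any linear combination of $\mathbf{x}_1$ and $\mathbf{x}_2$ also solves the system. A small numerical sketch (with an arbitrarily chosen rank-1 matrix, not one from the textbook):

```python
import numpy as np

# A homogeneous system A x = 0 with a 2-dimensional solution space.
A = np.array([[1.0, 1.0, -2.0],
              [2.0, 2.0, -4.0]])   # rank 1

x1 = np.array([2.0, 0.0, 1.0])     # A x1 = 0
x2 = np.array([-1.0, 1.0, 0.0])    # A x2 = 0
assert np.allclose(A @ x1, 0) and np.allclose(A @ x2, 0)

# Closure: any linear combination of solutions is again a solution.
x = 4.0 * x1 - 3.0 * x2
assert np.allclose(A @ x, 0)
```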
Be thorough with this assigned reading before starting the following exercises.
Exercises
Work on the following textbook exercises from “Exercise Set 4.2” (pp. 200-202):
- 1, 3, 5, 7, 9, 11, 13, and 19.
5.3 Linear Independence, Basis and Dimension
A set of vectors is said to be linearly dependent if one of the vectors in the set can be expressed as a linear combination of the other vectors. To illustrate this definition, consider, for example, the vector $\mathbf{v} = (-4, 7)$ in $\mathbb{R}^2$, i.e., the vector whose initial and terminal points are $(0, 0)$ and $(-4, 7)$, respectively. To reach the terminal point starting from the origin, one has to take four steps in the negative direction of the $x$-axis and seven steps in the positive direction of the $y$-axis. Thus, we can express the vector as
$\mathbf{v} = -4\mathbf{i} + 7\mathbf{j} = -4(1, 0) + 7(0, 1)$.
In other words, the vector $\mathbf{v}$ can be expressed as a linear combination of the standard vectors $\mathbf{i} = (1, 0)$, $\mathbf{j} = (0, 1)$ of the Euclidean plane $\mathbb{R}^2$. Thus, we say that the vectors $\mathbf{v}$, $\mathbf{i}$ and $\mathbf{j}$ are linearly dependent or, in other words, that the set
$S = \{(-4, 7), (1, 0), (0, 1)\}$
is linearly dependent.
If, on the other hand, no vector in a given set can be written as a linear combination of the others, then the vectors are said to be linearly independent. Thus, for example, the set
$S_1 = \{(1, 0), (0, 1)\}$
is linearly independent, since we cannot express $(1, 0)$ as a linear combination (in this case, a multiple) of $(0, 1)$, or vice versa. With the same argument, we can claim that the set
$S_2 = \{(1, 1), (1, -1)\}$
is linearly independent, too.
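A practical independence test: stack the vectors as columns of a matrix; they are linearly independent exactly when the rank of that matrix equals the number of vectors. A sketch using, for illustration, the standard basis $\{(1, 0), (0, 1)\}$, a second pair $\{(1, 1), (1, -1)\}$, and the dependent set $\{(-4, 7), (1, 0), (0, 1)\}$:

```python
import numpy as np

# Vectors are linearly independent iff the matrix having them as columns
# has rank equal to the number of vectors.
def independent(*vectors):
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

# Three vectors in R^2 are always dependent:
print(independent((-4.0, 7.0), (1.0, 0.0), (0.0, 1.0)))  # False

# Two non-collinear vectors in R^2 are independent:
print(independent((1.0, 0.0), (0.0, 1.0)))   # True
print(independent((1.0, 1.0), (1.0, -1.0)))  # True
```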
In the theory of vector spaces the concept of linear dependence and linear independence allows for the definition of basis, which, in turn, allows for the definition of dimension. Let’s illustrate these concepts briefly using the plane $\mathbb{R}^2$ and the linearly independent sets $S_1$ and $S_2$ above.
Note that any vector in $\mathbb{R}^2$ can be expressed as a linear combination of the vectors in either $S_1$ or $S_2$. For example, the vector $\mathbf{w} = (3, 1)$ in $\mathbb{R}^2$ can be expressed as a linear combination of the vectors in $S_1$ as
$\mathbf{w} = 3(1, 0) + 1(0, 1)$,
and it can also be expressed as a linear combination of the vectors in $S_2$ as
$\mathbf{w} = 2(1, 1) + 1(1, -1)$.
Since we can express any vector in $\mathbb{R}^2$ as a linear combination of the vectors in the linearly independent set $S_1$ or $S_2$, we can then say that $S_1$ is a basis for $\mathbb{R}^2$ and $S_2$ is also a basis for $\mathbb{R}^2$.
Moreover, one can say that the coordinate vector of $\mathbf{w}$ relative to $S_1$ is $(\mathbf{w})_{S_1} = (3, 1)$, while the coordinate vector of $\mathbf{w}$ relative to $S_2$ is $(\mathbf{w})_{S_2} = (2, 1)$. In other words, one could say that the vector seen from the frame of reference of $S_1$ is $(3, 1)$, but seen from the frame of reference of $S_2$ it is $(2, 1)$. This is, therefore, an example of how the representation of a vector is relative to the chosen frame of reference.
Finally, we can briefly introduce the concept of dimension. Specifically, the number of linearly independent vectors in a basis determines the dimension of the vector space they are the basis of. Thus, in our example, it is clear that the dimension of the plane $\mathbb{R}^2$ is $2$. Follow the formal definition of this concept in the textbook reading assignment.
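Computing a coordinate vector relative to a basis $B$ amounts to solving the linear system $B\mathbf{c} = \mathbf{w}$, where $B$ has the basis vectors as columns. A sketch using, for illustration, the sample vector $(3, 1)$, the standard basis, and the basis $\{(1, 1), (1, -1)\}$:

```python
import numpy as np

# Coordinates of w relative to a basis: solve B c = w, with the basis
# vectors as the columns of B.
w  = np.array([3.0, 1.0])
B1 = np.column_stack([(1.0, 0.0), (0.0, 1.0)])   # standard basis of R^2
B2 = np.column_stack([(1.0, 1.0), (1.0, -1.0)])  # another basis of R^2

c1 = np.linalg.solve(B1, w)  # coordinate vector relative to B1
c2 = np.linalg.solve(B2, w)  # coordinate vector relative to B2
print(c1)  # [3. 1.]
print(c2)  # [2. 1.]
```

The same vector thus has different coordinates in different bases, which is the “frame of reference” idea described above.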
Reading Assignment
Read, and study, pages 202-208 (to “Calculus Required”), pages 212-219 (to “Exercise Set 4.4”), and pages 221-225 of the textbook. As an optional reading, we also recommend reading pages 226-227. Remember to read each assigned block until you understand its content.
Through Theorem 4.3.1 and the following examples (Examples 1-3) you will learn how to check for linear independence of vectors belonging to spaces with different dimensions. Note that, in Examples 4 and 5, the concept of linear independence can be applied to sets including geometrical vectors as well as functions (polynomials, for example).
Theorems 4.3.2 and 4.3.3 provide alternative methods for checking the linear independence of given sets. Review Example 6 and the geometrical interpretation of linear independence given on page 207. Note that this course does not require reading the materials on pages 208-209 (“Calculus Required”).
In Unit 4 we learned how to determine vector coordinates using a rectangular coordinate system in two- and three-dimensional spaces. After reading pages 212-213, we will see that vectors can be determined in nonrectangular coordinate systems, too. In other words, the same vector can be determined in different coordinate systems. Thus, there can be different bases for the same space. These different bases, however, all have to span the given space (Definition 1 on page 214). Examples 1, 2, 3 and 4 on pages 214-215 illustrate standard bases of different spaces.
Even though there can be different bases for the same space, the representation of a given vector is unique for each basis. This uniqueness is proven in Theorem 4.4.1. Definition 2 and Examples 7, 8, and 9 provide a method for constructing a coordinate vector relative to the given basis.
After completing the assigned reading of Section 4.5 of the textbook, you will have a full understanding of the concept of dimension.
Exercises
Work on the following textbook exercises from “Exercise Set 4.3” (pp. 210-212):
- 1, 3, 4, 5, 7, 9, 13, and 25;
from “Exercise Set 4.4” (pp. 219-221):
- 1, 3, 5, 7, 11, 13, 15, 19, and 21; and
from “Exercise Set 4.5” (p. 228):
- 3, 5, 7, 9, 13, 15, and 17.
5.4 Introduction to Linear Transformations: Basic Matrix Transformations in $\mathbb{R}^2$ and $\mathbb{R}^3$
To introduce the idea of a linear transformation, consider, for example, an object located at point $(x, y)$ of the $xy$-coordinate system. Assume that, under a transformation, this object is moved to the point $(-x, y)$, i.e., it is reflected about the $y$-axis. Now, observe that
$\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -x \\ y \end{bmatrix}$.
As you can see, the displacement (transformation) is described by the matrix
$A = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$.
We will call $A$ a matrix transformation or matrix operator.
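A matrix operator of this kind is just a matrix-vector product. The sketch below takes, for illustration, the reflection about the $y$-axis and applies it to a sample point:

```python
import numpy as np

# Reflection about the y-axis as a matrix operator on R^2 (illustrative).
A = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])

p = np.array([2.0, 5.0])  # original point (x, y)
q = A @ p                 # transformed point (-x, y)
print(q)  # [-2.  5.]
```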
In this section we will study some basic types of matrix transformations, which can be interpreted geometrically in two- and three-dimensional spaces. These transformations are important in computer graphics, physics and other areas of applied mathematics. In the next section we analyze one of these applications in more detail.
Reading Assignment
Read, and study, pages 259-266 of the textbook (to “Orthogonal Projections onto Lines Through the Origin”). Remember the importance of gaining a solid understanding of the assigned reading before working on the following exercises.
In this reading you will be exposed to different basic matrix transformations in the vector spaces $\mathbb{R}^2$ and $\mathbb{R}^3$. Some of these transformations are shown in Tables 1 and 2 (reflections), Tables 3 and 4 (projections), Tables 5 and 6 (rotations), Tables 7 and 8 (contractions and dilations), and Tables 9 and 10 (expansions and compressions) on pages 259-266. Review all the examples therein.
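As one concrete instance of these tabulated operators, the standard rotation matrix in $\mathbb{R}^2$ can be built and applied as follows (the $90^\circ$ angle is an arbitrary choice for illustration):

```python
import numpy as np

# Rotation through angle theta about the origin in R^2:
#   R(theta) = [[cos t, -sin t],
#               [sin t,  cos t]]
theta = np.pi / 2  # 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

e1 = np.array([1.0, 0.0])
print(R @ e1)  # rotating (1, 0) by 90 degrees gives (0, 1), up to rounding
```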
Exercises
Work on the following textbook exercises from “Exercise Set 4.9” (pp. 268-269):
- 1, 3, 5, 7, 9, 11, 15, 21.
5.5 Applications of Linear Algebra to Computer Graphics: Transforming Images with Matrix Operators
The area of computer graphics is a vast and ever-expanding field. In the simplest sense, computer graphics are images viewable on a computer screen. Their applications extend into such processes as engineering design programs and almost any type of media. Images are generated using computers and are manipulated by computers. Underlying the representation of images on a computer screen is the mathematics of linear algebra. We will explore the fundamentals of how computers use linear algebra to create these images, and then branch off into the basic manipulation of these images.
In applications such as computer graphics it is important to understand not only how linear operators on $\mathbb{R}^2$ and $\mathbb{R}^3$ affect individual vectors, but also how they affect two-dimensional or three-dimensional regions. Thus, this section focuses on the effect of matrix operators on certain regions.
Reading Assignment
Read, and study, pages 280-287 of the textbook (to “Exercise Set 4.11”). Repeat the reading as many times as needed to gain a solid understanding of its content.
In the previous section of this Study Guide, we studied some basic transformations in $\mathbb{R}^2$ and $\mathbb{R}^3$. The current section provides examples of the effect of these matrix operators in $\mathbb{R}^2$ on basic geometrical objects such as lines and squares. The figure on page 280 of the textbook illustrates this effect.
Theorem 4.11.1 on page 281 provides a relationship between some basic geometrical objects and their images. Examples 1, 2 and 3 illustrate the theorem by applying matrix operators to lines and unit squares. Sometimes more than one operator can be applied to the given geometrical object. In Example 3 (page 282 of the textbook) you can observe the application of two consecutive transformations on a unit square. In Table 1 (pages 283-284), some typical matrix operators are shown together with their effect on the unit square.
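The effect of an operator on the unit square can be computed by applying the matrix to the four corner vertices at once. A sketch using a shear in the $x$-direction (an illustrative choice, not necessarily the operator in any particular textbook example):

```python
import numpy as np

# Shear in the x-direction: (x, y) -> (x + 2y, y).
S = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# Corners of the unit square as columns: (0,0), (1,0), (1,1), (0,1).
square = np.array([[0.0, 1.0, 1.0, 0.0],   # x-coordinates
                   [0.0, 0.0, 1.0, 1.0]])  # y-coordinates

image = S @ square
print(image)
# The corners map to (0,0), (1,0), (3,1), (2,1):
# the unit square becomes a parallelogram.
```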
Theorem 4.11.2 and its proof on pages 284-285, together with Theorem 4.11.3 (page 285) and your knowledge of how an invertible matrix can be expressed as a product of elementary matrices, will allow you to understand the geometrical effect of multiplication by a matrix (see Example 4 on page 285).
Review Examples 5, 6 and 7 (pages 286-287) to see further effects of applying basic matrix operators to the unit square.
Exercises
Work on the following textbook exercises from “Exercise Set 4.11” (pp. 287-289):
- 1, 3, 5, 7, 9, 11, 15, 17, 21, 25.