6 | Vector spaces
This chapter of Linear Algebra by Dr JH Klopper is licensed under an Attribution-NonCommercial-NoDerivatives 4.0 International Licence available at http://creativecommons.org/licenses/by-nc-nd/4.0/?ref=chooser-v1 .
6.1 Introduction
A vector space is a collection of objects known as vectors, which can be added together and multiplied by scalars (numbers) to produce new vectors. Vector spaces are not limited to the familiar three-dimensional space but extend to spaces of any dimension, encompassing a wide array of mathematical entities such as functions, polynomials, and matrices. The beauty of vector spaces lies in their generality and abstraction, allowing for a unified approach to solving diverse problems across mathematics.
This chapter explores vector spaces by considering the axioms of vector spaces, theorems that follow from the axioms, and examples of vector spaces.
6.2 Vector spaces
6.2.1 Axioms of a vector space
Before we explore the axioms of a vector space, we have to define a Cartesian product.
Definition 6.2.1.1 A Cartesian product is the ordered combination of two elements, a and b, from a set V, and we denote it as in (1).

(a, b) | a, b ∈ V (1)

In (2), we list the binary operation and then indicate that the combination (Cartesian product) of two elements of the set V results in another element of the set V.

⊕(a, b): V × V → V, where a, b ∈ V and ⊕(a, b) ∈ V (2)

We usually say maps to for the symbol →. In this case we say that the binary operation ⊕ on elements a and b maps to V. While we used V for a vector space, let's consider the real numbers ℝ and the binary operation of addition. We have both 1 and 3 as elements of the real numbers and we write 1, 3 ∈ ℝ. The Cartesian product of these two elements can be written as (1, 3). Under the binary operation of addition of real numbers, +, the Cartesian product (1, 3) is mapped to 4 ∈ ℝ as in +(1, 3) = 4.
It may all sound a bit frivolous, but when we study modern algebra, this is very important.
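The mapping of the Cartesian product (1, 3) to 4 under addition can be sketched in code. This is a minimal illustration only; the function name `add` is our own, standing in for the ⊕(a, b) notation above.

```python
# A binary operation maps a Cartesian product (a, b) back into the set.
# `add` stands in for the prefix notation +(a, b) used in the text.
def add(pair):
    """Apply the binary operation of addition to a Cartesian product (a, b)."""
    a, b = pair
    return a + b

# The Cartesian product (1, 3) of real numbers maps to 4 under addition.
result = add((1, 3))
print(result)  # 4
```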
Vector spaces are defined over a field, which we shall denote as 𝔽. While we could formally define a field, we will instead only consider the common fields of real and complex numbers, ℝ and ℂ, without definition.
We will make mention of the fact that fields have a multiplicative identity. For both 𝔽 = ℝ and 𝔽 = ℂ, we have that the multiplicative identity, usually denoted as e, is 1 or then 1 + 0i, where i² = −1. Since ℝ ⊂ ℂ, we will refer to the field ℂ by default.
For the axioms of a vector space, we require two operations. These will be termed vector addition and scalar-vector multiplication (scalar multiplication for short). We will denote most vector spaces as V. Addition is then ⊕: V × V → V and scalar multiplication is ⊙: 𝔽 × V → V. These two operations indicate closure.
Definition 6.2.1.2 If a, b are two elements of a vector space V and a binary operation ∘ is applied to the two elements of V, then if a ∘ b ∈ V we have closure under the binary operation.
Definition 6.2.1.3 Consider a set of vectors V. For all u, v, w ∈ V, e ∈ V, scalars k₁, k₂ ∈ 𝔽, and 1 ∈ 𝔽, we have the following axioms for a vector space.
ID | Name | Axiom |
A1 | Associativity of addition | u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ w |
A2 | Commutativity of addition | u ⊕ v = v ⊕ u |
A3 | Additive identity | ∃ e ∈ V | e ⊕ u = u |
A4 | Additive inverse | ∀ u ∈ V, ∃ −u ∈ V | u ⊕ (−u) = e |
M1 | Associativity of scalar multiplication | k₁ ⊙ (k₂ ⊙ u) = (k₁k₂) ⊙ u |
M2 | Multiplicative identity | ∃ 1 ∈ 𝔽 | 1 ⊙ u = u |
D1 | Scalar distribution over addition | k ⊙ (u ⊕ v) = (k ⊙ u) ⊕ (k ⊙ v) |
D2 | Distribution of scalar addition | (k₁ + k₂) ⊙ u = (k₁ ⊙ u) ⊕ (k₂ ⊙ u) |
Alternative axiom names: D1 is distributivity of scalar multiplication with respect to vector addition and D2 is distributivity of scalar multiplication with respect to field addition.
Note the two different addition operations, + and ⊕. We write + for k₁ + k₂ as this defines the addition of two real or complex numbers.
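The eight axioms in the table can be spot-checked numerically for a small example. This is a sketch, not a proof: the helper names `v_add` and `s_mul` are our own, standing in for ⊕ and ⊙, and the sample vectors in ℂ² are chosen so that the arithmetic is exact.

```python
# Numeric spot-check of the vector-space axioms for vectors in C^2,
# using Python's built-in complex numbers.
def v_add(u, v):          # vector addition (the text's ⊕): component-wise
    return [ui + vi for ui, vi in zip(u, v)]

def s_mul(k, u):          # scalar-vector multiplication (the text's ⊙)
    return [k * ui for ui in u]

u = [1 + 2j, 3 - 1j]
v = [2 - 1j, 0 + 4j]
w = [5 + 0j, 1 + 1j]
k1, k2 = 2 + 1j, 1 - 3j

assert v_add(u, v_add(v, w)) == v_add(v_add(u, v), w)               # A1
assert v_add(u, v) == v_add(v, u)                                   # A2
assert v_add([0j, 0j], u) == u                                      # A3
assert v_add(u, s_mul(-1, u)) == [0j, 0j]                           # A4
assert s_mul(k1, s_mul(k2, u)) == s_mul(k1 * k2, u)                 # M1
assert s_mul(1, u) == u                                             # M2
assert s_mul(k1, v_add(u, v)) == v_add(s_mul(k1, u), s_mul(k1, v))  # D1
assert s_mul(k1 + k2, u) == v_add(s_mul(k1, u), s_mul(k2, u))       # D2
print("all eight axioms hold for this sample")
```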
Now we need to see if the set of vectors in ℂⁿ is a member of the mathematical structure called a vector space. We also investigate other examples of vector spaces. We will omit the ⊙ symbol for convenience and write ku for a scalar k and u ∈ V instead of k ⊙ u, where V is a vector space and ⊙ denotes the binary operation of scalar-vector multiplication.
6.2.2 The vector space of vectors in ℂⁿ
Definition 6.2.2.1 Vectors in the set ℂⁿ are tuples of complex numbers, denoted as column vectors, and z is defined in (3).

z = (z₁ z₂ ⋯ zₙ)ᵀ, zᵢ ∈ ℂ, i = {1, 2, …, n} (3)

This vector space is denoted as ℂⁿ(ℂ), the vector space of complex vectors over the field of complex numbers.
Let V = ℂⁿ(ℂ) and let z, w ∈ V be defined as in (4).

z = (z₁ z₂ ⋯ zₙ)ᵀ, w = (w₁ w₂ ⋯ wₙ)ᵀ, zᵢ, wᵢ ∈ ℂ, i = {1, 2, …, n} (4)
Definition 6.2.2.2 The addition of two vectors over the field of complex numbers, z and w, is defined in (5).

z + w = (z₁ + w₁ z₂ + w₂ ⋯ zₙ + wₙ)ᵀ (5)
Definition 6.2.2.3 The scalar-vector multiplication of a scalar α ∈ ℂ and the vector z ∈ ℂⁿ is defined in (6).

αz = (αz₁ αz₂ ⋯ αzₙ)ᵀ (6)

Since we make use of the zero vector in definitions and proofs, we define the zero vector in Definition 6.2.2.4.
Definition 6.2.2.4 We define the zero vector in ℂⁿ in (7).

o = (0 + 0i 0 + 0i ⋯ 0 + 0i)ᵀ (7)

Since we define the binary operations of this vector space over the field of complex numbers, we inherit the properties of a field. This inheritance is used in all the proofs that ℂⁿ(ℂ) obeys the axioms of a vector space.
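Definitions (5), (6), and (7) translate directly into component-wise code. A minimal sketch for n = 3, with sample vectors of our own choosing:

```python
# Definitions (5)-(7) in code: component-wise addition, scalar-vector
# multiplication, and the zero vector for C^n (here n = 3).
n = 3
z = [1 + 1j, 2 - 3j, 0 + 2j]
w = [4 + 0j, -1 + 1j, 3 - 2j]
alpha = 2 - 1j

z_plus_w = [zi + wi for zi, wi in zip(z, w)]   # definition (5)
alpha_z = [alpha * zi for zi in z]             # definition (6)
o = [0 + 0j] * n                               # definition (7), the zero vector

print(z_plus_w)                                # [(5+1j), (1-2j), (3+0j)]
print([zi + oi for zi, oi in zip(z, o)] == z)  # True: o is the additive identity
```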
◼
Proof of the commutative axiom under addition
Let u, v ∈ V, where V = ℂⁿ, as shown in (8). (This last statement means that all the components are elements of ℂ and there are n of them in each vector u and v.)

u = (u₁ u₂ ⋯ uₙ)ᵀ, v = (v₁ v₂ ⋯ vₙ)ᵀ (8)

Note that both vectors are in the same n-space. We have defined vector addition as component-wise addition. So, we have (9), where we use the usual symbol + for vector addition in ℂⁿ. Note that uᵢ and vᵢ are elements of the field ℂ, with i = {1, 2, …, n}.

u + v = (u₁ + v₁ u₂ + v₂ ⋯ uₙ + vₙ)ᵀ = (v₁ + u₁ v₂ + u₂ ⋯ vₙ + uₙ)ᵀ = v + u (9)

Since all components uᵢ, vᵢ ∈ ℂ, we inherit additive commutativity from this field.
◼
Proof of the closure axiom under addition
We do this proof second as we have defined vector addition above. The resultant object of u + v = (u₁ + v₁ u₂ + v₂ ⋯ uₙ + vₙ)ᵀ is clearly another vector in ℂⁿ as per the definition of a vector in n-space.
◼
Proof of the associative axiom under addition
This proof follows a similar argument to that of the commutative axiom proof above, where we write out all the component additions and inherit the additive associativity of the field ℂ.
◼
Proof of the additive identity and that it is unique in ℂⁿ
Note that uniqueness is not part of the axioms of a vector space. We prove it here as it is a theorem in its own right.
First we show that o + u = u, ∀ u ∈ V, in (10).

u + o = (u₁ u₂ ⋯ uₙ)ᵀ + (0 0 ⋯ 0)ᵀ = (u₁ + 0 u₂ + 0 ⋯ uₙ + 0)ᵀ = (u₁ u₂ ⋯ uₙ)ᵀ = u (10)

The proof also inherits from the properties of addition with the additive identity, 0, in ℂ. Next, we have to show that o is unique. This can be done through proof by contradiction.
Assume to the contrary that there are two distinct additive identities, o, o* ∈ V. Then we can state (11) by the axiom of the additive identity.

∀ u ∈ ℂⁿ: u + o = u, u + o* = u (11)

Since the equations have u in common we state (12).

u + o = u + o* (12)

Now we employ our definitions of vector addition and additive associativity and add the additive inverse of u to both sides, shown in (13).

(−u + u) + o = (−u + u) + o* (13)

By the definition of vector addition we have (14).

o = o* (14)

This contradicts the original assumption that the two identities are distinct, so our assumption is false. Hence the additive identity of ℂⁿ is unique.
◼
Proof of the additive inverse and that it is unique in ℂⁿ
Uniqueness is once again not part of the axioms of a vector space, but we prove it here as well, since it is also a theorem in its own right.
The additive inverse of any v ∈ ℂⁿ is −v (determined by the definition of vector addition and inheritance from the field of complex numbers). Since all vᵢ ∈ ℂ we have the following, shown in (15).

v + (−v) = (v₁ v₂ ⋯ vₙ)ᵀ + (−v₁ −v₂ ⋯ −vₙ)ᵀ = (v₁ − v₁ v₂ − v₂ ⋯ vₙ − vₙ)ᵀ = (0 0 ⋯ 0)ᵀ = o ∈ ℂⁿ (15)
The proof of uniqueness is by contradiction and assumes two distinct additive inverses, (−v) and (−v)*, shown in (16).

v + (−v) = o
v + (−v)* = o
v + (−v) = v + (−v)*
(−v)* + v + (−v) = (−v)* + v + (−v)*
((−v)* + v) + (−v) = ((−v)* + v) + (−v)*
o + (−v) = o + (−v)* (16)

Since (−v) = (−v)* contradicts the assumption that the two inverses are distinct, our assumption is false and the additive inverse is unique.
Vectors in ℂⁿ are indeed members of the mathematical construct of a vector space. There are many other members of this structure.
6.2.3 All 2×2 matrices
The set of all 2×2 square matrices is a vector space. In (17), we list such matrices A, B, C, where aᵢⱼ, bᵢⱼ, cᵢⱼ ∈ ℂ, and O (the zero matrix). We write [r₁; r₂] for the matrix with first row r₁ and second row r₂.

A = [a₁₁ a₁₂; a₂₁ a₂₂], B = [b₁₁ b₁₂; b₂₁ b₂₂], C = [c₁₁ c₁₂; c₂₁ c₂₂], O = [0 0; 0 0] (17)
The additive inverse of A is given in (18).

−A = [−a₁₁ −a₁₂; −a₂₁ −a₂₂] (18)
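The matrix axioms below can also be spot-checked numerically. This is a hedged sketch with sample matrices of our own choosing; `m_add` and `m_scale` are hypothetical helper names implementing entry-wise addition and scalar multiplication.

```python
# Spot-check of vector-space behaviour for 2x2 matrices (entry-wise operations).
def m_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def m_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, -1], [0, 2]]
O = [[0, 0], [0, 0]]
negA = m_scale(-1, A)                      # the additive inverse as in (18)

assert m_add(A, B) == m_add(B, A)          # commutativity of addition
assert m_add(A, O) == A                    # additive identity
assert m_add(A, negA) == O                 # additive inverse
assert m_scale(2 + 3, A) == m_add(m_scale(2, A), m_scale(3, A))  # axiom D2
print("2x2 matrices behave as a vector space on this sample")
```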
If we consider c, c₁, c₂ ∈ ℂ, then we can show that all the axioms of a vector space are satisfied, shown in (19), where we inherit the properties of fields and follow the definition of a 2×2 square matrix.

A + B = [a₁₁ a₁₂; a₂₁ a₂₂] + [b₁₁ b₁₂; b₂₁ b₂₂] = [a₁₁ + b₁₁ a₁₂ + b₁₂; a₂₁ + b₂₁ a₂₂ + b₂₂]

A + B = [a₁₁ + b₁₁ a₁₂ + b₁₂; a₂₁ + b₂₁ a₂₂ + b₂₂] = [b₁₁ + a₁₁ b₁₂ + a₁₂; b₂₁ + a₂₁ b₂₂ + a₂₂] = B + A

(A + B) + C = [a₁₁ + b₁₁ a₁₂ + b₁₂; a₂₁ + b₂₁ a₂₂ + b₂₂] + [c₁₁ c₁₂; c₂₁ c₂₂] = [a₁₁ + b₁₁ + c₁₁ a₁₂ + b₁₂ + c₁₂; a₂₁ + b₂₁ + c₂₁ a₂₂ + b₂₂ + c₂₂]

A + (B + C) = [a₁₁ a₁₂; a₂₁ a₂₂] + [b₁₁ + c₁₁ b₁₂ + c₁₂; b₂₁ + c₂₁ b₂₂ + c₂₂] = [a₁₁ + b₁₁ + c₁₁ a₁₂ + b₁₂ + c₁₂; a₂₁ + b₂₁ + c₂₁ a₂₂ + b₂₂ + c₂₂]

A + O = [a₁₁ a₁₂; a₂₁ a₂₂] + [0 0; 0 0] = [a₁₁ a₁₂; a₂₁ a₂₂] = A

A + (−A) = [a₁₁ a₁₂; a₂₁ a₂₂] + [−a₁₁ −a₁₂; −a₂₁ −a₂₂] = [a₁₁ − a₁₁ a₁₂ − a₁₂; a₂₁ − a₂₁ a₂₂ − a₂₂] = [0 0; 0 0] = O

cA = c[a₁₁ a₁₂; a₂₁ a₂₂] = [ca₁₁ ca₁₂; ca₂₁ ca₂₂]

(c₁ + c₂)A = [c₁a₁₁ c₁a₁₂; c₁a₂₁ c₁a₂₂] + [c₂a₁₁ c₂a₁₂; c₂a₂₁ c₂a₂₂] = c₁A + c₂A

c₁(c₂A) = (c₁c₂)A

1A = [1 × a₁₁ 1 × a₁₂; 1 × a₂₁ 1 × a₂₂] = [a₁₁ a₁₂; a₂₁ a₂₂] = A (19)
6.2.5 A set that is not a vector space
We note that the last axiom of vector spaces is not satisfied, as shown in (22).
We note that the definition of the elements of the set and the specific binary operations of addition and multiplication determine whether the set, together with the two binary operations, is indeed a vector space.
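The specific failing set is not reproduced in this extract, so as a hypothetical illustration of how a nonstandard operation can break an axiom: take ℝ² with ordinary addition but the scalar multiplication k ⊙ (x, y) = (kx, 0).

```python
# Hypothetical example (not the one from the text): R^2 with ordinary
# addition but the nonstandard scalar multiplication k ⊙ (x, y) = (kx, 0).
# The multiplicative-identity axiom M2 (1 ⊙ u = u) fails, so this set with
# these operations is not a vector space.
def odd_scale(k, u):
    x, y = u
    return (k * x, 0)   # the second component is always discarded

u = (3, 5)
print(odd_scale(1, u))        # (3, 0), not (3, 5)
print(odd_scale(1, u) == u)   # False: axiom M2 is violated
```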
6.2.6 The vector space of real-valued polynomials
Addition is defined in (24).
If we take the set of polynomials over the field of real numbers, then all the axioms of a vector space are satisfied, as shown in (26-30).
6.2.7 The vector space of real-valued functions over the reals on an open interval
All the axioms of a vector space apply to the real-valued functions and the proofs once again follow similar arguments as before.
The zero vector space
The set that consists only of the zero vector (with the binary operations of vector addition and scalar-vector multiplication) is also a vector space.
6.3 Vector subspaces
A vector space can be contained within another vector space. The former vector space is then called a subspace. A subspace contains a subset of the elements of the original vector space and with the same binary operations.
This means that we only have to show closure under addition, the existence of a unique additive identity, additive inverses, and closure under scalar-vector multiplication; the remaining axioms are inherited from the parent vector space.
Examples of subspaces follow below.
6.3.1 A plane in 3-space through the origin
The same arguments hold for a line in 3-space through the origin.
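The closure checks for a plane through the origin can be sketched numerically. The plane 2x − y + 3z = 0 below is an assumed example, not the one worked in the text.

```python
# Closure check for the plane 2x - y + 3z = 0 through the origin in 3-space
# (an assumed example plane).
def on_plane(v):
    x, y, z = v
    return 2 * x - y + 3 * z == 0

u = (1, 2, 0)                              # 2 - 2 + 0 = 0, so u lies on the plane
v = (0, 3, 1)                              # 0 - 3 + 3 = 0, so v lies on the plane
s = tuple(a + b for a, b in zip(u, v))     # vector addition
t = tuple(4 * a for a in u)                # scalar-vector multiplication

print(on_plane(u), on_plane(v))  # True True
print(on_plane(s), on_plane(t))  # True True: closed under both operations
```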
6.3.2 Subspaces of square matrices
6.3.4 Solution space of homogeneous systems
Since the zero vector is in the solution space, (35) holds and we have a vector subspace.
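The subspace properties of a homogeneous solution space can be checked on a small example. The single-equation system x − 2y + z = 0 below is an assumed illustration, not the system from the text.

```python
# The solution set of a homogeneous system Ax = 0 contains the zero vector
# and is closed under addition and scalar multiplication.
# Assumed example system: x - 2y + z = 0.
A = [[1, -2, 1]]

def is_solution(x):
    return all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

u = (2, 1, 0)
v = (-1, 0, 1)
zero = (0, 0, 0)

print(is_solution(zero))                                # True: zero vector
print(is_solution(tuple(a + b for a, b in zip(u, v))))  # True: closed under +
print(is_solution(tuple(3 * a for a in u)))             # True: closed under scaling
```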
6.4 Unit vectors
The standard unit vectors in (37) are orthonormal vectors.
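Orthonormality of the standard unit vectors is easy to confirm with dot products: each vector has length 1 and any two distinct ones are orthogonal. A sketch in ℝ³:

```python
# The standard unit vectors in R^3 are orthonormal: dot(e_i, e_j) is 1 when
# i = j (unit length) and 0 otherwise (orthogonality).
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

for i in range(3):
    for j in range(3):
        expected = 1 if i == j else 0   # the Kronecker-delta pattern
        assert dot(e[i], e[j]) == expected
print("standard unit vectors are orthonormal")
```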
6.5 Linear combinations
Any vector can be expressed as a linear combination of the standard unit vectors. An example is shown in (35).
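The claim above can be sketched for an arbitrary sample vector: (v₁, v₂, v₃) = v₁e₁ + v₂e₂ + v₃e₃. The vector (3, −2, 7) is our own choice, not the example from the text.

```python
# Any vector in R^3 is a linear combination of the standard unit vectors.
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
v = (3, -2, 7)

# v1*e1 + v2*e2 + v3*e3, computed component by component
combo = tuple(v[0] * a + v[1] * b + v[2] * c for a, b, c in zip(e1, e2, e3))
print(combo == v)  # True
```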
6.6 Spanning
We investigate examples of spanning.
6.7 Linear independence
Definition 6.7.1 Vectors are linearly independent if none of them can be written as a linear combination of the others.
We determine whether the three vectors below in (42) are linearly independent as an example problem.
The variable augmentedMatrix is created below to hold the augmented matrix of the linear system in (43).
The RowReduce function returns the reduced row-echelon form of the augmented matrix.
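The text's worked examples use Mathematica's RowReduce. As a hedged stand-in, here is a minimal pure-Python reduced row-echelon routine using exact rational arithmetic; the augmented matrix shown is an assumed example, not the system in (43).

```python
from fractions import Fraction

# Minimal reduced row-echelon form (a sketch of what RowReduce computes),
# using exact arithmetic via Fraction to avoid floating-point error.
def rref(M):
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # find a row with a nonzero entry in this column at or below pivot_row
        pr = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        pivot = M[pivot_row][col]
        M[pivot_row] = [x / pivot for x in M[pivot_row]]   # scale pivot row to 1
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# Assumed augmented matrix of a homogeneous 3x3 system with only the
# trivial solution (invertible coefficient matrix).
aug = [[1, 2, 1, 0], [0, 1, 1, 0], [1, 0, 2, 0]]
print(rref(aug))
```

The output is the identity matrix augmented with a zero column, which signals that the only solution is the trivial one and hence that the three column vectors are linearly independent.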
A good example of a linearly independent set is the standard unit vectors, (44).
The only linear combination of these vectors that gives the zero vector is the trivial combination with all coefficients zero, (41).
Consider now the dependent vectors in (46).
We look at the reduced row-echelon form of the matrix of coefficients.
For the vector of coefficients, we have a solution shown in (48).
In Figure 6.7.1 we see that a linear combination of the vectors ends back at the origin.
We conclude that the vectors are not linearly independent.
This system has non-trivial solutions and the set of vectors is therefore linearly dependent.
6.8 Basis and dimensions
Definition 6.8.1 Any set of vectors that are linearly independent and span a space is a basis for that space. These vectors are known as basis vectors.
6.8.1 Basis vectors
Now we see if we can linearly combine them to produce the vector in (53).
It has an inverse and therefore a non-zero determinant.
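The inverse-determinant connection gives a quick basis test: a square matrix whose columns are the candidate vectors is invertible exactly when its determinant is non-zero. A sketch with assumed example vectors (not the ones in (53)):

```python
# A 3x3 matrix of candidate basis vectors (as columns) is invertible exactly
# when its determinant is nonzero; nonzero determinant means the columns are
# linearly independent and form a basis of R^3.
def det3(M):
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Assumed candidate basis: columns (1, 0, 1), (2, 1, 0), (1, 1, 2)
M = [[1, 2, 1], [0, 1, 1], [1, 0, 2]]
print(det3(M))   # 3, nonzero: the three columns form a basis
```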
6.8.2 Basis matrices
Linear independence is shown in (57).
6.8.3 Basis of subspaces
6.8.4 Dimension of a vector space
As an example, we find the basis and the dimension of the solution space of the homogeneous linear system in (58).
We rewrite this to calculate the solutions, shown in (59).
The reduced row-echelon form of the augmented matrix is calculated.
The result is expanded in (60).
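The basis-and-dimension computation can be sketched on a small assumed system (not the one in (58)): a rank-1 homogeneous system in three unknowns has a solution space of dimension 3 − 1 = 2, and setting each free variable to 1 in turn yields the basis vectors.

```python
# Assumed example: the homogeneous system
#    x + 2y -  z = 0
#   2x + 4y - 2z = 0
# has rank 1, so its solution space has dimension 3 - 1 = 2. With free
# variables y = s and z = t, we get x = -2s + t, giving the basis
# b1 = (-2, 1, 0) (s = 1, t = 0) and b2 = (1, 0, 1) (s = 0, t = 1).
A = [[1, 2, -1], [2, 4, -2]]
b1, b2 = (-2, 1, 0), (1, 0, 1)

def solves(x):
    return all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

print(solves(b1), solves(b2))   # True True: both basis vectors are solutions
# independence: the basis vectors are not scalar multiples of one another
print(b1[0] * b2[1] - b1[1] * b2[0] != 0 or b1[1] * b2[2] - b1[2] * b2[1] != 0)
```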
6.8.5 Fundamental theorems in vector spaces