A nonempty subset M of a vector space V is itself a vector space if and only if rx + sy is in M for all r, s in R and all x, y in M. |
Part 1: if M is a vector space then, by the postulates, rx and sy are in M and therefore rx + sy is in M.
Part 2: we prove that
If rx + sy is in M for all r, s in R and all x, y in M,
then M is a vector space.
Taking r = s = 1 gives closure under addition; taking s = 0 gives closure under scalar multiplication; taking r = s = 0 shows that 0 is in M. The remaining postulates hold in M because they hold in V, so M is a vector space.
Example 1
V is the vector space of all polynomials in x. M is the set of all the polynomials in x
of second degree or lower.
We investigate whether or not M is a subspace of V. To this end we choose arbitrary r, s
in R and arbitrary elements ax^{2}+bx+c and dx^{2}+ex+f in M.
M is a subspace of V
<=> r(ax^{2}+bx+c) + s(dx^{2}+ex+f) is in M
<=> (ra+sd)x^{2} + (rb+se)x + (rc+sf) is in M
Since the last claim is true, M is a subspace of V.
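This closure check translates directly into a few lines of code. Below is a minimal Python sketch; representing a polynomial of M by its coefficient tuple (a, b, c) is our own convention, not part of the text.

```python
# Represent ax^2 + bx + c in M by the coefficient tuple (a, b, c).
def combine(r, p, s, q):
    """Coefficients of r*p + s*q for two polynomials p, q given as tuples."""
    return tuple(r*pc + s*qc for pc, qc in zip(p, q))

# r(ax^2+bx+c) + s(dx^2+ex+f) = (ra+sd)x^2 + (rb+se)x + (rc+sf):
# the result is again a coefficient tuple, i.e. an element of M.
p, q = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(combine(2.0, p, -1.0, q))  # (-2.0, -1.0, 0.0)
```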
Example 2
V is the vector space of all polynomials in x. M is the set of all the polynomials in x
divisible by (x-2).
We investigate whether or not M is a subspace of V. To this end we choose arbitrary r, s
in R and arbitrary elements (x-2)p(x) and (x-2)q(x) in M.
M is a subspace of V
<=> r(x-2)p(x) + s(x-2)q(x) is in M
<=> (x-2)( r.p(x) + s.q(x) ) is in M
Since the last claim is true, M is a subspace of V.
Example 3
V is the vector space of all ordered pairs of real numbers.
M = { (x,y) | x,y in R and x + y = 1}
One can now follow the method of the previous examples, but it can be done more quickly.
Since (0,0) is not in M, M is not a vector space. So, M is not a subspace of V.
Example 4
V is the vector space of all 2x2 matrices. M is the set of all the regular (invertible) 2x2 matrices.
We investigate whether or not M is a subspace of V. To this end we choose arbitrary r, s
in R and arbitrary regular 2x2 matrices A and B.
M is a subspace of V <=> r A + s B is in M
The last claim is false: for r = s = 0 we have 0.A + 0.B = 0, and the zero matrix is not in M because it is not regular. So, M is not a subspace of V.
The intersection of two subspaces M and N of V, is itself a subspace of V. |
Example
V is the vector space of all polynomials in x.
M is the subspace of all the polynomials in x divisible by (x-2).
N is the subspace of all the polynomials in x divisible by (x-1).
The intersection I of M and N is the set of all the polynomials in x divisible by
(x-1)(x-2). I is a subspace of V.
Example
R^{3} is the vector space of ordered triples of real numbers.
From this space, we take the vectors (1,2,0) and (0,0,1).
The following three vectors are linear combinations of (1,2,0) and (0,0,1).
3.(1,2,0) + 4.(0,0,1) = (3,6,4)
-1.(1,2,0) + 0.(0,0,1) = (-1,-2,0)
0.(1,2,0) + 0.(0,0,1) = (0,0,0)
But the vector (1,1,0) is not a linear combination of (1,2,0) and (0,0,1), because there are no real numbers r and s such that r.(1,2,0) + s.(0,0,1) = (1,1,0): the first component forces r = 1, but the second forces r = 1/2.
Conclusion:
Each subspace M of V contains all linear combinations of
an arbitrary set of its vectors.
Since each vector space containing the vectors a,b,c,d ... ,l must contain each linear combination of these vectors, M is the 'smallest' vector space containing these vectors.
Conclusions and definitions:
All linear combinations of vectors of D = { a,b,c,d ... ,l }
generate a vector space M. The elements of D are called generators of M. M is called the vector space spanned by D, denoted span(D). It is the smallest vector space containing the set D. |
Example 2
V is the vector space of all row matrices [a,b,c] with a,b,c in R.
D = { [1,1,1] }.
M = span(D) = { [r,r,r] | r in R}
Example 3
V is the vector space of all row matrices [a,b,c,d] with a,b,c,d in R.
D = { [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] } and M = span(D).
The span does not change when we add a linear combination of the generators to D:
3 [1,0,0,0] + 4 [0,1,0,0] = [3,4,0,0] is in M. So,
M = span( [1,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] , [3,4,0,0] )
The span does not change when we remove a generator that is a linear combination of the other vectors in D:
[2,4,1,0] = 0 [1,0,0,0] + [0,1,0,0] + [2,3,1,0]. So,
M = span( [1,0,0,0] , [0,1,0,0] , [2,3,1,0] )
The span also does not change when we replace a generator by a nonzero multiple of itself, or subtract one generator from another:
M = span( [17,0,0,0] , [0,1,0,0] , [2,3,1,0] , [2,4,1,0] )
M = span( [1,0,0,0] , [0,1,0,0] - [2,3,1,0] , [2,3,1,0] , [2,4,1,0] )
A set D of vectors is called dependent if there is at least one vector in D that can be written as a linear combination of the other vectors of D. A set consisting of one vector is called dependent if and only if that vector is the vector 0. |
A set D of vectors is called independent if and only if it is not a dependent set. Such a set is also called a free set of vectors. |
Take a set D = {a,b,c,..,l} of two or more vectors from a vector space V.
That set D is linearly dependent if and only if there is a suitable set of real numbers r,s,t, ... ,z, not all zero, such that ra + sb + tc + ... + zl = 0 |
Proof, part 1: if D is dependent then some vector, say b, is a linear combination of the others:
b = ra + tc + ... + zl <=> ra + (-1)b + tc + ... + zl = 0
So, there is a suitable set of real numbers r, s = -1, t, ... , z, not all zero, such that ra + sb + tc + ... + zl = 0.
Part 2: conversely, suppose ra + sb + tc + ... + zl = 0 with not all coefficients zero, say s is not 0. Then b = (-r/s)a + (-t/s)c + ... + (-z/s)l, so b is a linear combination of the other vectors and D is dependent.
Example:
The vectors [-12, 17, 14], [10, -7, 8], [-11, 3, 24] are linearly dependent
<=> There are real numbers r, s, t, not all zero, such that
r [-12, 17, 14] + s [10, -7, 8] + t [-11, 3, 24] = [0,0,0]
<=> There are real numbers r, s, t, not all zero, such that
-12 r + 10 s - 11 t = 0
17 r - 7 s + 3 t = 0
14 r + 8 s + 24 t = 0
<=> (relying on the theory of systems)
| -12 10 -11 |
| 17  -7   3 | = 0
| 14   8  24 |
<=> -3930 = 0
Since the last statement is false, the three vectors are not linearly dependent.
Take a set D = {a,b,c,..,l} of two or more vectors from a vector space V. That set D is linearly independent if and only if
ra + sb + tc + ... + zl = 0 => r = s = t = ... = z = 0 |
The vectors [-12, 17, 14], [10, -7, 8], [-11, 3, 24] are linearly independent
<=> If r [-12, 17, 14] + s [10, -7, 8] + t [-11, 3, 24] = [0,0,0], then we must have r = s = t = 0
<=> The system
-12 r + 10 s - 11 t = 0
17 r - 7 s + 3 t = 0
14 r + 8 s + 24 t = 0
has only the trivial solution r = s = t = 0
<=> (relying on the theory of systems)
| -12 10 -11 |
| 17  -7   3 | is different from 0
| 14   8  24 |
<=> -3930 is different from 0
Since the last statement is true, the three vectors [-12, 17, 14], [10, -7, 8], [-11, 3, 24] are linearly independent.
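The determinant test above is easy to check by machine. A small Python sketch (the `det3` helper is our own, not part of the text):

```python
def det3(mat):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = mat
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# rows of the coefficient matrix come from the three components of
# r[-12,17,14] + s[10,-7,8] + t[-11,3,24] = [0,0,0]
m = [[-12, 10, -11],
     [17, -7, 3],
     [14, 8, 24]]
print(det3(m))  # -3930: nonzero, so the vectors are linearly independent
```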
Take an ordered set D = {a,b,c,..,l} of two or more vectors from a vector space V. That set D is linearly dependent if and only if there is at least one vector in D that is a linear combination of the PREVIOUS vectors in D. |
Example:
We investigate whether the vectors [1, 0, -13] , [2, 17, 0] , [12, 7, 0] are independent.
The second vector is not a linear combination of the previous one.
So, the first two vectors are independent.
The third vector is not a linear combination of the previous vectors, because
r [1, 0, -13] + s [2, 17, 0] = [12, 7, 0]
<=>
r + 2s = 12
17s = 7
-13r = 0
The last two equations force r = 0 and s = 7/17, but then r + 2s = 14/17, not 12. So the system has no solution. The three vectors are linearly independent. It is a free set.
Example:
D = { (1,3) , (2,1) , (4,7) } is a generating set of the vector space R^{2}.
(4,7) is a linear combination of the other vectors because (4,7) = 2.(1,3) + (2,1).
We remove (4,7).
(2,1) is not a multiple of (1,3).
D' = { (1,3) , (2,1) } is a free and generating set. It is a basis of R^{2}.
We write co(v) = (r,s,t, ... ,z) or v(r,s,t, ... ,z).
Mind the difference:
v(2,4,-3) is the vector v with coordinates (2,4,-3).
But v = (2,4,-3) means that the vector v is equal to the vector (2,4,-3).
Example
Let V = the vector space R^{3}. An obvious basis is ((1,0,0) , (0,1,0) , (0,0,1)).
Dim(V) = 3. Each basis consists of three vectors but three random vectors
do not always constitute a basis.
Take the three vectors (2+m,m,m), (n,2,n), (2,1,-4).
We search for the necessary and sufficient condition for m and n such that
these three vectors are not a basis of R^{3}.
(2+m,m,m), (n,2,n), (2,1,-4) are not a basis
<=> (2+m,m,m), (n,2,n), (2,1,-4) are linearly dependent
<=> There are r, s and t, not all zero, such that r(2+m,m,m) + s(n,2,n) + t(2,1,-4) = 0
<=> The following system has a solution different from (0,0,0)
(2+m)r + n s + 2t = 0
m r + 2 s + t = 0
m r + n s - 4t = 0
<=>
| 2+m  n   2 |
| m    2   1 | = 0
| m    n  -4 |
<=> 6mn - 12m - 2n - 16 = 0
The three vectors are not a basis of V if and only if the latter condition is fulfilled.
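The expansion of the determinant can be verified numerically by comparing it with the closed form 6mn - 12m - 2n - 16 on a grid of sample values. A Python sketch (the helpers `det3` and `not_a_basis_det` are our own names):

```python
def det3(mat):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = mat
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def not_a_basis_det(m, n):
    """Determinant whose vanishing means (2+m,m,m), (n,2,n), (2,1,-4) are not a basis."""
    return det3([[2 + m, n, 2],
                 [m, 2, 1],
                 [m, n, -4]])

# check the expanded condition 6mn - 12m - 2n - 16 on integer sample points
ok = all(not_a_basis_det(m, n) == 6*m*n - 12*m - 2*n - 16
         for m in range(-5, 6) for n in range(-5, 6))
print(ok)  # True
```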
Example
(1,2,5) and (-1,1,3) are two vectors of R^{3}. Choose another vector from R^{3} such that the three vectors form a basis of R^{3}.
We try with the simple vector (1,0,0). As in the previous example we have:
(1,0,0), (1,2,5) and (-1,1,3) constitute a basis
<=> (1,0,0), (1,2,5) and (-1,1,3) are linearly independent
<=> ...
<=>
|  1 0 0 |
|  1 2 5 | is not zero
| -1 1 3 |
When we expand the determinant along the first row, we see immediately that the determinant is 2.3 - 5.1 = 1. So, (1,0,0), (1,2,5) and (-1,1,3) constitute a basis.
Example: We'll find the row space of a matrix A and the unique basis for that row space.
    [1 0 2 3]
A = [1 2 0 1]
    [1 0 1 0]
The rank of A is 3. There are 3 linearly independent rows.
Now, we simplify the matrix A by means of row transformations until we reach the canonical (reduced row echelon) form.
[1 0 2 3]
[1 2 0 1]
[1 0 1 0]

R2 - R1:
[1 0 2 3]
[0 2 -2 -2]
[1 0 1 0]

(1/2)R2:
[1 0 2 3]
[0 1 -1 -1]
[1 0 1 0]

R3 - R1:
[1 0 2 3]
[0 1 -1 -1]
[0 0 -1 -3]

(-1)R3:
[1 0 2 3]
[0 1 -1 -1]
[0 0 1 3]

R1 - 2.R3:
[1 0 0 -3]
[0 1 -1 -1]
[0 0 1 3]

R2 + R3:
[1 0 0 -3]
[0 1 0 2]
[0 0 1 3]

Now, we have the unique basis of the row space: ((1 0 0 -3), (0 1 0 2), (0 0 1 3)).
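The row reduction above can be reproduced mechanically. Below is a small Gauss-Jordan sketch in Python using exact fractions (the `rref` helper is our own implementation, not part of the text):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination to reduced row echelon form (exact arithmetic)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # scale the pivot row, then clear the column in all other rows
        m[pivot_row] = [x / m[pivot_row][col] for x in m[pivot_row]]
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

A = [[1, 0, 2, 3],
     [1, 2, 0, 1],
     [1, 0, 1, 0]]
for row in rref(A):
    print([int(x) for x in row])
# [1, 0, 0, -3]
# [0, 1, 0, 2]
# [0, 0, 1, 3]
```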
Exercise:
Take the matrix A from previous example and find the unique basis of the column space.
Example:
Find the m-values such that: (the dimension of the column space of A) = 3.
    [ m  1 2 ]
A = [ 3  1 0 ]
    [ 1 -2 1 ]
The dimension of the column space of A = rank A.
The rank of A is 3 if and only if the determinant of A is not zero.
The determinant of A is m-17.
Conclusion: (the dimension of the column space of A) = 3 if and only if m is different from 17.
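The determinant computation can be double-checked numerically; a short Python sketch (the `det3` and `detA` helpers are our own names):

```python
def det3(mat):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = mat
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def detA(m):
    return det3([[m, 1, 2],
                 [3, 1, 0],
                 [1, -2, 1]])

# det A = m - 17: the column space has dimension 3 exactly when m != 17
print(all(detA(m) == m - 17 for m in range(-20, 21)))  # True
print(detA(17))  # 0
```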
s = x' (au + bv + cw) + y' (du + ev + fw) + z' (gu + hv + iw)
  = (ax' + dy' + gz')u + (bx' + ey' + hz')v + (cx' + fy' + iz')w
But from above we also have s = xu + yv + zw. Therefore, the relation between the coordinates is
x = ax' + dy' + gz'
y = bx' + ey' + hz'
z = cx' + fy' + iz'
These relations can be written in matrix notation.
[x]   [a d g] [x']
[y] = [b e h].[y']
[z]   [c f i] [z']

[a d g]
[b e h] is called the transformation matrix.
[c f i]

The columns of the transformation matrix are the coordinates of the new basis vectors relative to the old basis.
Example 1:
Say V is the vector space of the ordinary three dimensional space. In that space we
take a standard basis e_{1}, e_{2}, e_{3}.
They are the unit vectors along x-axis, y-axis and z-axis.
We rotate the three basis vectors, around the z-axis, by an angle of 90 degrees.
Thus we get a new basis u_{1}, u_{2}, u_{3}.
The link between old and new basis is
u_{1} = e_{2}     co(u_{1}) = (0,1,0)
u_{2} = - e_{1}   co(u_{2}) = (-1,0,0)
u_{3} = e_{3}     co(u_{3}) = (0,0,1)
The transformation matrix is
[0 -1 0]
[1 0 0]
[0 0 1]
(x,y,z) are the coordinates of a vector v relative to the old basis. (x',y',z') are the coordinates of the vector v relative to the new basis. The connection is
[x]   [0 -1 0] [x']
[y] = [1 0 0].[y']
[z]   [0 0 1] [z']
Example 2:
V is the vector space of all polynomials of degree not greater than 3, with basis (1, x, x^{2}, x^{3}). Now we pass to a new basis (1, x, 3x^{2}-1, 5x^{3}-3x).
The coordinates of the new basis vectors, relative to the old basis, are
(1,0,0,0) (0,1,0,0) (-1,0,3,0) (0,-3,0,5)
The transformation matrix M is
[1 0 -1 0]
[0 1 0 -3]
[0 0 3 0]
[0 0 0 5]
Now, we calculate the coordinates (x',y',z',t') of the vector 6+2x-x^{2}+4x^{3} relative to the new basis.
[6]       [x']
[2]   = M [y']
[-1]      [z']
[4]       [t']
<=>
[x']          [6]
[y'] = M^{-1} [2]
[z']          [-1]
[t']          [4]
<=>
[x']   [ 1 0 1/3 0   ]   [6]
[y'] = [ 0 1 0   3/5 ] . [2]
[z']   [ 0 0 1/3 0   ]   [-1]
[t']   [ 0 0 0   1/5 ]   [4]
<=>
[x']   [ 17/3 ]
[y'] = [ 22/5 ]
[z']   [ -1/3 ]
[t']   [ 4/5 ]
(17/3, 22/5, -1/3, 4/5) are the coordinates of the vector 6+2x-x^{2}+4x^{3} relative to the new basis.
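The new coordinates can be checked numerically. Instead of inverting M, the sketch below solves the linear system M.(x',y',z',t') = (6,2,-1,4) directly, with exact fractions; the `solve` helper is our own, not part of the text.

```python
from fractions import Fraction

def solve(mat, rhs):
    """Solve mat * x = rhs by Gauss-Jordan elimination with exact fractions."""
    n = len(mat)
    aug = [[Fraction(v) for v in row] + [Fraction(b)] for row, b in zip(mat, rhs)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n] for row in aug]

# transformation matrix: columns are the new-basis coordinates w.r.t. (1, x, x^2, x^3)
M = [[1, 0, -1, 0],
     [0, 1, 0, -3],
     [0, 0, 3, 0],
     [0, 0, 0, 5]]
old = [6, 2, -1, 4]   # 6 + 2x - x^2 + 4x^3 in the old basis
new = solve(M, old)   # coordinates in the new basis
print(new)  # [Fraction(17, 3), Fraction(22, 5), Fraction(-1, 3), Fraction(4, 5)]
```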
/ 2x + 3y - z + t = 0
\ x - y + 2z - t = 0
This is a system of the second kind.
x = -z + (2/5)t
y = z - (3/5)t
The set of solutions can be written as
(-z + (2/5)t , z - (3/5)t , z , t) with z and t in R
<=> z(-1,1,1,0) + t(2/5,-3/5,0,1) with z and t in R
Hence, all solutions are linear combinations of the linearly independent vectors (-1,1,1,0) and (2/5,-3/5,0,1).
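The two basis solutions found above can be substituted back into the system; a quick Python check (the `residuals` helper is our own name):

```python
from fractions import Fraction

def residuals(x, y, z, t):
    """Left-hand sides of the two homogeneous equations."""
    return (2*x + 3*y - z + t, x - y + 2*z - t)

v1 = (-1, 1, 1, 0)                            # basis solution for z = 1, t = 0
v2 = (Fraction(2, 5), Fraction(-3, 5), 0, 1)  # basis solution for z = 0, t = 1
print(residuals(*v1) == (0, 0))  # True
print(residuals(*v2) == (0, 0))  # True

# any combination z*v1 + t*v2 is again a solution
z, t = 3, 7
combo = tuple(z*a + t*b for a, b in zip(v1, v2))
print(residuals(*combo) == (0, 0))  # True
```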
If X' is a solution of AX = B and X" is a solution of the homogeneous system AX = 0, then
AX' + AX" = B + 0 = B <=> A(X' + X") = B <=> X' + X" is a solution of AX = B.
Conclusion: the sum of a solution of AX = B and a solution of AX = 0 is again a solution of AX = B.
Furthermore:
If X' is a fixed solution of AX = B then AX' = B.
If X" is an arbitrary solution of AX = B then AX" = B.
Then,
AX" - AX' = 0 <=> A(X" - X') = 0 <=> X" - X' is a solution of AX = 0 <=> X" = X' + (a solution of AX = 0)
Conclusion:
If we have a fixed solution of AX = B and we add to this solution all the solutions of the corresponding homogeneous system one after another, then we get all solutions of AX = B. |
Example:
/ 2x + 3y - z + t = 0
\ x - y + 2z - t = 0
Above we have seen that the solutions are z(-1,1,1,0) + t(2/5,-3/5,0,1) with z and t in R.
/ 2x + 3y - z + t = 5
\ x - y + 2z - t = 0
has a solution (1,1,0,0).
All solutions of the last system are (1,1,0,0) + z(-1,1,1,0) + t(2/5,-3/5,0,1) with z and t in R.
If A and B are subspaces of a vector space V, we define the sum of A and B as the set
{ a + b with a in A and b in B }
We write this sum as A + B. |
The sum A + B of two subspaces of V is itself a subspace of V. |
Proof:
For all a_{1} and a_{2} in A and all b_{1} and b_{2} in B and all r, s in R we have
r(a_{1} + b_{1}) + s(a_{2} + b_{2}) = (r a_{1} + s a_{2}) + (r b_{1} + s b_{2}) is in A + B.
Example :
In the space R^{3}:
A = span{ (3,2,1) }
B = span{ (2,1,4) , (0,1,3) }
Investigate whether A+B is a direct sum. |
Say r, s, t are real numbers. Each vector of A is of the form r.(3,2,1) and each vector of B is of the form s.(2,1,4) + t.(0,1,3). For each common vector, there are suitable r, s, t such that
r.(3,2,1) = s.(2,1,4) + t.(0,1,3)
<=> r.(3,2,1) - s.(2,1,4) - t.(0,1,3) = (0,0,0)
<=>
/ 3r - 2s = 0
| 2r - s - t = 0
\ r - 4s - 3t = 0
Since
| 3 -2  0 |
| 2 -1 -1 | is not 0,
| 1 -4 -3 |
the previous system has only the solution r = s = t = 0. The vector (0,0,0) is the only common vector of A and B. Thus, A+B is a direct sum.
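The determinant in this argument is quick to verify in code (the `det3` helper is our own name):

```python
def det3(mat):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = mat
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# coefficient matrix of the system r.(3,2,1) - s.(2,1,4) - t.(0,1,3) = (0,0,0)
coeffs = [[3, -2, 0],
          [2, -1, -1],
          [1, -4, -3]]
print(det3(coeffs))       # -13
print(det3(coeffs) != 0)  # True: only r = s = t = 0, so A + B is a direct sum
```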
If A + B is a direct sum, then each vector v in A+B can be written, in just one way, as the sum of an element of A and an element of B. |
Proof:
Suppose v = a_{1} + b_{1} = a_{2} + b_{2} with a_{i} in A and b_{i} in B.
Then a_{1} - a_{2} = b_{2} - b_{1}, and a_{1} - a_{2} is in A and b_{2} - b_{1} is in B.
Therefore a_{1} - a_{2} = b_{2} - b_{1} is a common element of A and B. But the only common element is 0.
So, a_{1} - a_{2} = 0 and b_{2} - b_{1} = 0, hence a_{1} = a_{2} and b_{1} = b_{2}.
Example :
In the space R^{3} we define
A = span{ (3,2,1) }
B = span{ (2,1,4) , (0,1,3) }
We know from the previous example that A+B is a direct sum. |
We write the vector (7,3,6) as r.(3,2,1) + s.(2,1,4) + t.(0,1,3). This gives the system
3r + 2s + 0t = 7
2r + 1s + 1t = 3
1r + 4s + 3t = 6
This is a Cramer system. It has exactly one solution: r = 1, s = 2, t = -1.
(7,3,6) = (3,2,1) + (4,1,5), with (3,2,1) in A and (4,1,5) in B.
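Solving the Cramer system above gives r = 1, s = 2, t = -1 (our computation); the decomposition can then be checked in a few lines of Python:

```python
# candidate coefficients for (7,3,6) = r.(3,2,1) + s.(2,1,4) + t.(0,1,3)
r, s, t = 1, 2, -1
a_part = tuple(r * x for x in (3, 2, 1))                             # element of A
b_part = tuple(s * x + t * y for x, y in zip((2, 1, 4), (0, 1, 3)))  # element of B
total = tuple(a + b for a, b in zip(a_part, b_part))
print(a_part, b_part, total)  # (3, 2, 1) (4, 1, 5) (7, 3, 6)
```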
Say that vector space V is the direct sum of A and B, then
A is the supplementary vector space of B relative to V.
B is the supplementary vector space of A relative to V.
A and B are supplementary vector spaces relative to V. |
Say V is the direct sum of the spaces M and N.
If {a,b,c,..,l} is a basis of M and {a',b',c',..,l'} is a basis of N,
then {a,b,c,..,l,a',b',c',..,l'} is a basis of V. |
Each vector v of V is the sum of a vector of M and a vector of N.
Thus each vector v = ra + sb + tc + ... + zl + r'a' + s'b' + t'c' + ... + z'l'
Therefore the set {a,b,c,..,l,a',b',c',..,l'} generates V.
If ra + sb + tc + ... + zl + r'a' + s'b' + t'c' + ... + z'l' = 0 ,
then ra + sb + tc + ... + zl is a vector m of M and
r'a' + s'b' + t'c' + ... + z'l' is a vector n of N.
From a previous theorem we know that we can write the vector 0 in just one way, as the sum of an element of M and an element of N. That way is 0 = 0 + 0 with 0 in M and 0 in N.
From this we see that necessarily m = 0 and n = 0 and thus
ra + sb + tc + ... + zl = 0 and
r'a' + s'b' + t'c' + ... + z'l' = 0
Since all vectors in these expressions are linearly independent, necessarily all coefficients are 0, and from this we know that the generating vectors {a,b,c,..,l,a',b',c',..,l'} are linearly independent.
If {a,b,c,..,l} is a basis of M and {a',b',c',..,l'} is a basis of N,
and {a,b,c,..,l,a',b',c',..,l'} are linearly independent, then M+N is a direct sum. |
Proof: take a common vector of M and N. Then
ra + sb + tc + ... + zl = r'a' + s'b' + t'c' + ... + z'l'
<=> ra + sb + tc + ... + zl - r'a' - s'b' - t'c' - ... - z'l' = 0
Since all these vectors are linearly independent, all coefficients must be 0. The only common vector is 0.
If {a,b,c,..,l} is a basis of M and {a',b',c',..,l'} is a basis of N, then
M+N is a direct sum <=> {a,b,c,..,l,a',b',c',..,l'} are linearly independent |
Example :
V is the vector space of all polynomials in x with real coefficients.
M is the subspace of V with basis (12, 7x + 2x^{2}).
N is the subspace of V with basis (3x, 4x + 5x^{2}).
Examine whether M+N is a direct sum. |
M+N is a direct sum
<=> The vectors 12, 7x + 2x^{2}, 3x, 4x + 5x^{2} are linearly independent.
All linear combinations of the four vectors lie in the space of all polynomials of second degree or lower, which has dimension 3. Four vectors in a three-dimensional space are always linearly dependent, so M+N is not a direct sum.
Say V is the direct sum of the supplementary subspaces M and N, and take a vector v in V.
Then v = m + n with m in M and n in N, in exactly one way.
Now we can define the transformation
p : V --> V : v --> m
We call this transformation the projection of V on M relative to N.
Example :
V is the space of all polynomials with a degree not greater than 3.
We define two supplementary subspaces
M = span { 1, x }
N = span { x^{2}, x^{3} }
Each vector of V is the sum of exactly one vector of M and one vector of N.
e.g. 2x^{3} - x^{2} + 4x - 7 = (2x^{3} - x^{2}) + (4x - 7)
Say p is the projection of V on M relative to N, then
p(2x^{3} - x^{2} + 4x - 7) = 4x - 7
Say q is the projection of V on N relative to M, then
q(2x^{3} - x^{2} + 4x - 7) = 2x^{3} - x^{2}
To create the matrix of a projection, see the chapter on linear transformations.
h : V --> V : v --> r.v
We say that h is a similarity transformation of V with factor r.
Important special values of r are 0, 1 and -1.
Now we define the transformation
s : V --> V : v --> m - n
We say that s is the reflection of V in M relative to N.
This definition is a generalization of the ordinary reflection in a plane. Indeed, if you take the ordinary vectors in a plane and if M and N are one dimensional supplementary subspaces, then you'll see that with the previous definition, s becomes the ordinary reflection in M relative to the direction given by N.
Take V = R^{4}.
M = span{ (0,1,3,1) , (1,0,-1,0) }
N = span{ (0,0,0,1) , (3,2,1,0) }
It is easy to show that M and N have only the vector 0 in common. (This is left as an exercise.) So, M and N are supplementary subspaces.
Now we'll calculate the image of the reflection of vector v = (4,3,3,1) in M relative to N.
First we write v as the sum of exactly one vector m of M and n of N.
(4,3,3,1) = x.(0,1,3,1) + y.(1,0,-1,0) + z.(0,0,0,1) + t.(3,2,1,0)
The solution of this system gives x = 1; y = 1; z = 0; t = 1. The unique representation of v is
(4,3,3,1) = (1,1,2,1) + (3,2,1,0)
The image of the reflection of vector v = (4,3,3,1) in M relative to N is the vector v' =
(1,1,2,1) - (3,2,1,0) = (-2,-1,1,1)
To create the matrix of a reflection, see the chapter on linear transformations.
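The whole reflection computation fits in a few lines of Python. The sketch below substitutes the coefficients found in the text (x = 1, y = 1, z = 0, t = 1) back into the decomposition and forms m - n:

```python
# coefficients found in the text: x = 1, y = 1, z = 0, t = 1
x, y, z, t = 1, 1, 0, 1
gens = [(0, 1, 3, 1), (1, 0, -1, 0), (0, 0, 0, 1), (3, 2, 1, 0)]

# check that these coefficients really reproduce v = (4,3,3,1)
v = tuple(x*a + y*b + z*c + t*d for a, b, c, d in zip(*gens))
print(v)  # (4, 3, 3, 1)

m_part = tuple(x*a + y*b for a, b in zip(gens[0], gens[1]))  # component in M
n_part = tuple(z*c + t*d for c, d in zip(gens[2], gens[3]))  # component in N
reflection = tuple(a - b for a, b in zip(m_part, n_part))    # s(v) = m - n
print(m_part, n_part)  # (1, 1, 2, 1) (3, 2, 1, 0)
print(reflection)      # (-2, -1, 1, 1)
```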