Linear dependence and linear independence of a system of vectors

Let L be a linear space over the field R, and let a1, a2, …, an (*) be a finite system of vectors from L. The vector b = α1a1 + α2a2 + … + αnan is called a linear combination of the vectors (*); one also says that the vector b is linearly expressed through the system of vectors (*).

Definition 14. The system of vectors (*) is called linearly dependent if there exists a set of coefficients α1, α2, …, αn, not all zero, such that α1a1 + α2a2 + … + αnan = 0. If α1a1 + α2a2 + … + αnan = 0 only when α1 = α2 = … = αn = 0, then the system (*) is called linearly independent.
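For coordinate vectors, Definition 14 can be tested numerically: the system is linearly independent exactly when the matrix formed from the vectors has rank equal to the number of vectors. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def is_linearly_independent(vectors):
    """A system of coordinate vectors is linearly independent iff the
    only solution of x1*v1 + ... + xn*vn = 0 is the trivial one, which
    is equivalent to the matrix of the vectors having full row rank."""
    a = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(a) == len(vectors)

print(is_linearly_independent([[1, 0], [0, 1]]))   # True
print(is_linearly_independent([[1, 2], [2, 4]]))   # False: proportional vectors
```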

Properties of linear dependence and independence.

1°. If a system of vectors contains the zero vector, then it is linearly dependent.

Indeed, if in the system (*) the vector a1 = 0, then 1·a1 + 0·a2 + … + 0·an = 0 is a non-trivial combination equal to the zero vector.

2°. If a system of vectors contains two proportional vectors, then it is linearly dependent.

Indeed, let a1 = λa2. Then 1·a1 − λ·a2 + 0·a3 + … + 0·an = 0.

3°. A finite system of vectors (*) with n ≥ 2 is linearly dependent if and only if at least one of its vectors is a linear combination of the remaining vectors of the system.

(⇒) Let (*) be linearly dependent. Then there is a set of coefficients α1, α2, …, αn, not all zero, for which α1a1 + α2a2 + … + αnan = 0. Without loss of generality we may assume α1 ≠ 0. Then a1 = (−α2/α1)a2 + … + (−αn/α1)an, so the vector a1 is a linear combination of the remaining vectors.

(⇐) Let one of the vectors of (*) be a linear combination of the others. We may assume it is the first vector, i.e. a1 = β2a2 + … + βnan. Hence (−1)a1 + β2a2 + … + βnan = 0, and the coefficient of a1 is non-zero, so (*) is linearly dependent.

Remark. Using the last property, one can define linear dependence and independence for an infinite system of vectors.

Definition 15. The system of vectors a1, a2, …, an, … (**) is called linearly dependent if at least one of its vectors is a linear combination of some finite number of other vectors of the system. Otherwise the system (**) is called linearly independent.

40. A finite system of vectors is linearly independent if and only if none of its vectors can be linearly expressed in terms of its remaining vectors.

50. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

60. If some subsystem of a given system of vectors is linearly dependent, then the entire system is also linearly dependent.

Let two systems of vectors be given: a1, a2, …, an, … (16) and b1, b2, …, bs, … (17). If each vector of system (16) can be represented as a linear combination of a finite number of vectors of system (17), then system (16) is said to be linearly expressed through system (17).

Definition 16. The two vector systems are called Equivalent , if each of them is linearly expressed through the other.

Theorem 9 (basic linear dependence theorem).

Let a1, a2, …, an and b1, b2, …, bs be two finite systems of vectors from L. If the first system is linearly independent and is linearly expressed through the second, then n ≤ s.

Proof. Suppose that n > s. By the hypothesis of the theorem, every vector of the first system is linearly expressed through the second:

a1 = α11b1 + α12b2 + … + α1sbs,
a2 = α21b1 + α22b2 + … + α2sbs,
. . .
an = αn1b1 + αn2b2 + … + αnsbs.

Consider the equality

x1a1 + x2a2 + … + xnan = 0. (18)

Since the first system is linearly independent, equality (18) holds only for x1 = x2 = … = xn = 0. Substituting the expressions for the vectors ai into (18), we obtain

x1(α11b1 + … + α1sbs) + … + xn(αn1b1 + … + αnsbs) = 0, (19)

whence, collecting the coefficient of each bj,

(α11x1 + α21x2 + … + αn1xn)b1 + … + (α1sx1 + α2sx2 + … + αnsxn)bs = 0. (20)

Conditions (18), (19) and (20) are obviously equivalent. Equality (20) certainly holds if all its coefficients are zero; equating them to zero, we obtain the homogeneous system

α11x1 + α21x2 + … + αn1xn = 0,
. . .
α1sx1 + α2sx2 + … + αnsxn = 0. (21)

Since system (21) has the zero solution, it is consistent. The number of its equations (s) is smaller than the number of unknowns (n), so it has infinitely many solutions; in particular, it has a non-zero solution x1⁰, x2⁰, …, xn⁰. For these values equality (18) holds with coefficients not all zero, which contradicts the linear independence of the system a1, …, an. So our assumption is wrong, and hence n ≤ s.
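The content of Theorem 9 is easy to observe numerically: if n vectors are built as linear combinations of s < n vectors, their rank cannot exceed s, so they must be linearly dependent. A small numpy illustration (the particular vectors and coefficients are arbitrary choices of ours):

```python
import numpy as np

# s = 2 spanning vectors b1, b2 (rows)
b = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])

# n = 3 sets of coefficients expressing a1, a2, a3 through b1, b2
coeffs = np.array([[1.0, 2.0],
                   [3.0, -1.0],
                   [0.5, 0.5]])

a = coeffs @ b                       # three vectors expressed through two
rank = np.linalg.matrix_rank(a)
print(rank)                          # 2: rank cannot exceed s, so a1, a2, a3 are dependent
```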

Corollary. If two equivalent systems of vectors are finite and linearly independent, then they contain the same number of vectors.

Definition 17. A system of vectors of a linear space L is called a maximal linearly independent system of vectors if it is linearly independent, but adding to it any vector of L not belonging to the system makes it linearly dependent.

Theorem 10. Any two finite maximal linearly independent systems of vectors from L contain the same number of vectors.

The proof follows from the fact that any two maximal linearly independent systems of vectors are equivalent.

It is easy to prove that any linearly independent system of vectors of the space L can be extended to a maximal linearly independent system of vectors of this space.

Examples:

1. In the set of all collinear geometric vectors, any system consisting of one nonzero vector is maximally linearly independent.

2. In the set of all coplanar geometric vectors, any two non-collinear vectors constitute a maximal linearly independent system.

3. In the set of all possible geometric vectors of three-dimensional Euclidean space, any system of three non-coplanar vectors is maximally linearly independent.

4. In the set of all polynomials of degree at most n with real (complex) coefficients, the system of polynomials 1, x, x2, …, xn is maximal linearly independent.

5. In the set of all polynomials with real (complex) coefficients, examples of a maximal linearly independent system are

a) 1, x, x2, …, xn, …;

b) 1, (1 − x), (1 − x)2, …, (1 − x)n, …

6. The set of m×n matrices is a linear space (check this). An example of a maximal linearly independent system in this space is the system of matrix units E11, E12, …, Emn, where Eij is the matrix with 1 in row i, column j and zeros elsewhere.
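For the 2×2 case this is easy to verify numerically: flattening the matrix units into vectors of length 4, their rank equals 4, so they are linearly independent. A quick numpy check:

```python
import numpy as np

# Build the 2x2 matrix units E11, E12, E21, E22 and flatten each into
# a vector; independence equals full rank of the resulting 4x4 matrix.
units = []
for i in range(2):
    for j in range(2):
        e = np.zeros((2, 2))
        e[i, j] = 1.0
        units.append(e.ravel())

rank = np.linalg.matrix_rank(np.array(units))
print(rank)   # 4: the four matrix units are linearly independent
```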

Let a system of vectors c1, c2, …, cs (*) be given. A subsystem of vectors from (*) is called a maximal linearly independent subsystem of the system (*) if it is linearly independent, but adding to it any other vector of the system makes it linearly dependent. If the system (*) is finite, then any two of its maximal linearly independent subsystems contain the same number of vectors. (Prove this yourself.) The number of vectors in a maximal linearly independent subsystem of the system (*) is called the rank of the system. Obviously, equivalent systems of vectors have the same rank.
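For coordinate vectors, the rank of a system equals the rank of the matrix whose rows are the vectors; a numpy sketch (the example system is our own):

```python
import numpy as np

# The rank of the system = size of any maximal linearly independent
# subsystem = rank of the matrix of the vectors.
system = np.array([[1.0, 2.0, 3.0],
                   [2.0, 4.0, 6.0],    # proportional to the first vector
                   [0.0, 1.0, 1.0]])

r = np.linalg.matrix_rank(system)
print(r)   # 2: a maximal independent subsystem has two vectors
```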

Definition. A linear combination of the vectors a1, …, an with coefficients x1, …, xn is the vector

x1a1 + … + xnan.

A linear combination is called trivial if all of its coefficients x1, …, xn are equal to zero.

Definition. The linear combination x1a1 + … + xnan is called non-trivial if at least one of the coefficients x1, …, xn is not equal to zero.

Definition. The vectors a1, …, an are called linearly independent if there is no non-trivial combination of these vectors equal to the zero vector.

That is, the vectors a1, …, an are linearly independent if x1a1 + … + xnan = 0 holds if and only if x1 = 0, …, xn = 0.

Definition. The vectors a1, …, an are called linearly dependent if there is a non-trivial combination of these vectors equal to the zero vector.

Properties of linearly dependent vectors:

  • For 2- and 3-dimensional vectors: two linearly dependent vectors are collinear (and, conversely, two collinear vectors are linearly dependent).

  • For 3-dimensional vectors: three linearly dependent vectors are coplanar (and three coplanar vectors are linearly dependent).

  • For n-dimensional vectors: any n + 1 vectors are always linearly dependent.

Examples of problems on linear dependence and linear independence of vectors:

Example 1. Check whether the vectors a = (3; 4; 5), b = (-3; 0; 5), c = (4; 4; 4), d = (3; 4; 0) are linearly independent.

Solution:

The vectors are linearly dependent, since the number of vectors (four) is greater than their dimension (three).
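A numerical confirmation via the rank of the matrix of vectors:

```python
import numpy as np

# Example 1: four vectors in R^3. The rank is at most 3 < 4,
# so the system must be linearly dependent.
a = np.array([[3, 4, 5],
              [-3, 0, 5],
              [4, 4, 4],
              [3, 4, 0]], dtype=float)

rank = np.linalg.matrix_rank(a)
print(rank)   # 3, which is less than the number of vectors: dependent
```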

Example 2. Check whether the vectors a = (1; 1; 1), b = (1; 2; 0), c = (0; -1; 1) are linearly independent.

Solution:

x1 + x2 = 0
x1 + 2x2 − x3 = 0
x1 + x3 = 0

The augmented matrix of the system is

1  1  0 | 0
1  2 −1 | 0
1  0  1 | 0

subtract the first row from the second and from the third:

1  1  0 | 0
0  1 −1 | 0
0 −1  1 | 0

subtract the second row from the first; add the second row to the third:

1  0  1 | 0
0  1 −1 | 0
0  0  0 | 0

This shows that the system has infinitely many solutions, i.e. there is a non-zero set of values x1, x2, x3 for which the linear combination of the vectors a, b, c equals the zero vector; for example:

−a + b + c = 0,

and this means the vectors a, b, c are linearly dependent.

Answer: vectors a, b, c are linearly dependent.
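The non-trivial combination found by elimination can be verified directly:

```python
import numpy as np

# Example 2: check that -a + b + c gives the zero vector.
a = np.array([1, 1, 1])
b = np.array([1, 2, 0])
c = np.array([0, -1, 1])

combo = -a + b + c
print(combo)   # [0 0 0]
```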

Example 3. Check whether the vectors a = (1; 1; 1), b = (1; 2; 0), c = (0; -1; 2) are linearly independent.

Solution: Let us find the values of the coefficients for which the linear combination of these vectors equals the zero vector.

x1a + x2b + x3c = 0

This vector equation can be written as a system of linear equations

x 1 + x 2 = 0
x 1 + 2x 2 - x 3 = 0
x 1 + 2x 3 = 0

Let's solve this system using the Gauss method

1  1  0 | 0
1  2 −1 | 0
1  0  2 | 0

subtract the first row from the second and from the third:

1  1  0 | 0
0  1 −1 | 0
0 −1  2 | 0

subtract the second row from the first; add the second row to the third:

1  0  1 | 0
0  1 −1 | 0
0  0  1 | 0

The last row gives x3 = 0, and back-substitution then gives x2 = 0 and x1 = 0. The system has only the trivial solution, so the linear combination equals the zero vector only when all coefficients are zero.

Answer: the vectors a, b, c are linearly independent.
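Example 3 can also be confirmed by a rank computation:

```python
import numpy as np

# Example 3: three vectors in R^3 with full rank are linearly independent.
a = np.array([[1, 1, 1],
              [1, 2, 0],
              [0, -1, 2]], dtype=float)

r = np.linalg.matrix_rank(a)
print(r)   # 3: full rank, so the vectors are independent
```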

In other words, the linear dependence of a group of vectors means that there is a vector among them that can be represented by a linear combination of other vectors in this group.

Suppose that in the equality αx + βy + … + γz = 0 (0) the coefficient α ≠ 0. Then

x = −(β/α)y − … − (γ/α)z.

Therefore the vector x is linearly expressed through the other vectors of this group.

The vectors x, y, …, z are called linearly independent if it follows from equality (0) that

α = β = … = γ = 0.

That is, a group of vectors is linearly independent if no vector in it can be represented as a linear combination of the other vectors of the group.

Determination of linear dependence of vectors

Let m row vectors of order n be given; write them as the rows of an m×n matrix (2).

Performing Gaussian elimination, we reduce matrix (2) to upper triangular form. The elements of the last column change only when rows are rearranged. After m elimination steps we obtain a matrix whose rows carry the indices i1, i2, …, im resulting from the possible row rearrangements. From the resulting rows we exclude those that correspond to the zero row vector. The remaining rows form a group of linearly independent vectors. Note that, by changing the order of the row vectors when composing matrix (2), one can obtain a different group of linearly independent vectors; but the subspace spanned by either group is the same.
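One simple variant of this procedure, sketched in numpy: sweep through the rows, reduce each one against the rows already kept, and discard rows that reduce to zero (here the reduction is done by orthogonal projection rather than literal row pivoting, but it keeps exactly the rows that are independent of the previous ones):

```python
import numpy as np

def independent_rows(rows, tol=1e-10):
    """Return the indices of a linearly independent group of rows,
    scanning in order and discarding rows that reduce to zero."""
    a = np.array(rows, dtype=float)
    kept = []
    for i, row in enumerate(a):
        r = row.copy()
        for k in kept:                   # remove the components along kept rows
            basis = a[k]
            r = r - (r @ basis) / (basis @ basis) * basis
        if np.linalg.norm(r) > tol:      # non-zero remainder: row is independent
            kept.append(i)
            a[i] = r                     # store the reduced (orthogonalized) row
    return kept

print(independent_rows([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))   # [0, 2]
```

The second row is proportional to the first and is discarded; reordering the input rows may select a different, but equally valid, independent group.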

The linear operations on vectors introduced above make it possible to form various expressions for vector quantities and to transform them using the properties established for these operations.

Given a set of vectors a1, …, an, one can form an expression of the form

α1a1 + α2a2 + … + αnan,

where α1, …, αn are arbitrary real numbers. This expression is called a linear combination of the vectors a1, …, an, and the numbers αi, i = 1, …, n, are the coefficients of the linear combination. A set of vectors is also called a system of vectors.

In connection with the introduced concept of a linear combination of vectors, the problem arises of describing a set of vectors that can be written as a linear combination of a given system of vectors a 1, ..., a n. In addition, there are natural questions about the conditions under which there is a representation of a vector in the form of a linear combination, and about the uniqueness of such a representation.

Definition 2.1. The vectors a1, …, an are called linearly dependent if there is a set of coefficients α1, …, αn such that

α 1 a 1 + ... + α n а n = 0 (2.2)

and at least one of these coefficients is non-zero. If the specified set of coefficients does not exist, then the vectors are called linearly independent.

If α1 = … = αn = 0, then obviously α1a1 + … + αnan = 0. With this in mind, we can say: the vectors a1, …, an are linearly independent if equality (2.2) implies that all the coefficients α1, …, αn are equal to zero.

The following theorem explains why the new concept is described by the term "dependence" (or "independence"), and provides a simple criterion for linear dependence.

Theorem 2.1. In order for the vectors a1, …, an, n > 1, to be linearly dependent, it is necessary and sufficient that one of them be a linear combination of the others.

◄ Necessity. Let us assume that the vectors a1, …, an are linearly dependent. According to Definition 2.1 of linear dependence, at least one coefficient on the left of equality (2.2) is non-zero, for example α1. Leaving the first term on the left-hand side, we move the rest to the right-hand side, changing their signs as usual. Dividing the resulting equality by α1, we get

a1 = −(α2/α1)·a2 − … − (αn/α1)·an,

i.e. a representation of the vector a1 as a linear combination of the remaining vectors a2, …, an.

Sufficiency. Let, for example, the first vector a1 be representable as a linear combination of the remaining vectors: a1 = β2a2 + … + βnan. Transferring all the terms from the right-hand side to the left, we obtain a1 − β2a2 − … − βnan = 0, i.e. a linear combination of the vectors a1, …, an with coefficients α1 = 1, α2 = −β2, …, αn = −βn that equals the zero vector. In this linear combination not all the coefficients are zero. According to Definition 2.1, the vectors a1, …, an are linearly dependent.

The definition and the criterion of linear dependence are formulated so as to imply the presence of two or more vectors. However, one can also speak of the linear dependence of a single vector. To cover this case, instead of "the vectors are linearly dependent" one should say "the system of vectors is linearly dependent". It is easy to see that the expression "a system of one vector is linearly dependent" means that this single vector is the zero vector (the linear combination contains only one coefficient, and it must be non-zero).

The concept of linear dependence has a simple geometric interpretation. The following three statements clarify this interpretation.

Theorem 2.2. Two vectors are linearly dependent if and only if they are collinear.

◄ If the vectors a and b are linearly dependent, then one of them, for example a, is expressed through the other, i.e. a = λb for some real number λ. According to Definition 1.7 of the product of a vector by a number, the vectors a and b are collinear.

Let now vectors a and b be collinear. If they are both zero, then it is obvious that they are linearly dependent, since any linear combination of them is equal to the zero vector. Let one of these vectors not be equal to 0, for example vector b. Let us denote by λ the ratio of vector lengths: λ = |a|/|b|. Collinear vectors can be unidirectional or oppositely directed. In the latter case, we change the sign of λ. Then, checking Definition 1.7, we are convinced that a = λb. According to Theorem 2.1, vectors a and b are linearly dependent.

Remark 2.1. In the case of two vectors, taking into account the criterion of linear dependence, the proven theorem can be reformulated as follows: two vectors are collinear if and only if one of them is represented as the product of the other by a number. This is a convenient criterion for the collinearity of two vectors.
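The collinearity criterion of Remark 2.1 is convenient to check in coordinates: for 3-dimensional vectors, one is a scalar multiple of the other exactly when their cross product is the zero vector. A numpy sketch (the function name is ours):

```python
import numpy as np

def collinear(a, b):
    """Two 3D vectors are collinear iff their cross product vanishes."""
    return np.allclose(np.cross(a, b), 0)

print(collinear([2, -4, 6], [-1, 2, -3]))   # True: b = -a/2
print(collinear([1, 0, 0], [0, 1, 0]))      # False
```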

Theorem 2.3. Three vectors are linearly dependent if and only if they are coplanar.

◄ If three vectors a, b, c are linearly dependent then, according to Theorem 2.1, one of them, for example a, is a linear combination of the others: a = βb + γc. Let us place the origins of the vectors b and c at a point A. Then the vectors βb and γc have a common origin at A and, by the parallelogram rule, their sum, i.e. the vector a, is the vector with origin A whose end is the vertex of the parallelogram built on the component vectors. Thus all the vectors lie in the same plane, i.e. they are coplanar.

Now let the vectors a, b, c be coplanar. If one of these vectors is zero, then it is obviously a linear combination of the others: it suffices to take all the coefficients of the combination equal to zero. So we may assume that all three vectors are non-zero. Place the origins of these vectors at a common point O, and let their ends be the points A, B, C respectively (Fig. 2.1). Through the point C draw lines parallel to the lines passing through the pairs of points O, A and O, B. Denoting the points of intersection by A' and B', we obtain a parallelogram OA'CB', so OC = OA' + OB'. The vector OA' and the non-zero vector a = OA are collinear, and therefore the first can be obtained from the second by multiplication by a real number α: OA' = α·OA. Similarly OB' = β·OB, β ∈ R. As a result, OC = α·OA + β·OB, i.e. the vector c is a linear combination of the vectors a and b. By Theorem 2.1, the vectors a, b, c are linearly dependent.
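Theorem 2.3 has a handy coordinate form: three 3-dimensional vectors are coplanar exactly when the determinant of the matrix formed from them (their scalar triple product) equals zero. A numpy check (the function name is ours):

```python
import numpy as np

def coplanar(a, b, c):
    """Three 3D vectors are coplanar iff det[a; b; c] = 0."""
    return np.isclose(np.linalg.det(np.array([a, b, c], dtype=float)), 0)

print(coplanar([1, 0, 0], [0, 1, 0], [2, 3, 0]))   # True: all lie in the xy-plane
print(coplanar([1, 0, 0], [0, 1, 0], [0, 0, 1]))   # False
```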

Theorem 2.4. Any four vectors are linearly dependent.

◄ We carry out the proof by the same scheme as in Theorem 2.3. Consider four arbitrary vectors a, b, c and d. If one of the four vectors is zero, or among them there are two collinear vectors, or three of the four vectors are coplanar, then these four vectors are linearly dependent. For example, if the vectors a and b are collinear, then we can form a linear combination αa + βb = 0 with non-zero coefficients and then add the remaining two vectors to this combination, taking zeros as their coefficients. We obtain a linear combination of four vectors equal to 0 in which there are non-zero coefficients.

Thus we may assume that among the chosen four vectors none is zero, no two are collinear, and no three are coplanar. Choose a point O as their common origin. Then the ends of the vectors a, b, c, d are some points A, B, C, D (Fig. 2.2). Through the point D draw three planes parallel to the planes OBC, OCA, OAB, and let A', B', C' be the points of intersection of these planes with the lines OA, OB, OC respectively. We obtain a parallelepiped with the vectors a, b, c lying along its edges emerging from the vertex O, and with the segment OD as its diagonal, so that

OD = OA' + OB' + OC'.

It remains to note that the pairs of vectors OA ≠ 0 and OA', OB ≠ 0 and OB', OC ≠ 0 and OC' are collinear, and therefore coefficients α, β, γ can be chosen so that OA' = α·OA, OB' = β·OB and OC' = γ·OC. We finally get OD = α·OA + β·OB + γ·OC. Consequently, the vector OD is expressed through the other three vectors, and all four vectors are, by Theorem 2.1, linearly dependent.

Example 1. Find out whether a given system of vectors is linearly dependent or linearly independent:

a1 = {3, 5, 1, 4}, a2 = {−2, 1, −5, −7}, a3 = {−1, −2, 0, −1}.

Solution. We are looking for a general solution to the system of equations

a 1 x 1 + a 2 x 2 + a 3 x 3 = Θ

using the Gauss method. To do this, we write this homogeneous system in coordinates:

3x1 − 2x2 − x3 = 0
5x1 + x2 − 2x3 = 0
x1 − 5x2 = 0
4x1 − 7x2 − x3 = 0

Reducing the matrix of the system, we obtain a reduced system with rA = 2 and n = 3 unknowns. The system is consistent and indeterminate. Its general solution (x2 is a free variable): x3 = 13x2; 3x1 − 2x2 − 13x2 = 0 ⇒ x1 = 5x2, i.e. X° = (5x2, x2, 13x2). The existence of a non-zero particular solution, for example (5, 1, 13), shows that the vectors a1, a2, a3 are linearly dependent: 5a1 + a2 + 13a3 = 0.
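The particular solution can be verified directly:

```python
import numpy as np

# Check that the particular solution (5, 1, 13) annihilates the system:
# 5*a1 + a2 + 13*a3 should be the zero vector.
a1 = np.array([3, 5, 1, 4])
a2 = np.array([-2, 1, -5, -7])
a3 = np.array([-1, -2, 0, -1])

res = 5 * a1 + a2 + 13 * a3
print(res)   # [0 0 0 0]
```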

Example 2.

Find out whether a given system of vectors is linearly dependent or linearly independent:

1. a 1 = { -20, -15, - 4 }, a 2 = { –7, -2, -4 }, a 3 = { 3, –1, –2 }.

Solution. Consider a homogeneous system of equations a 1 x 1 + a 2 x 2 + a 3 x 3 = Θ

or in expanded form (by coordinates):

−20x1 − 7x2 + 3x3 = 0
−15x1 − 2x2 − x3 = 0
−4x1 − 4x2 − 2x3 = 0

The system is homogeneous. If it is non-degenerate, it has a unique solution, and for a homogeneous system that is the zero (trivial) solution; in this case the system of vectors is independent. If the system is degenerate, it also has non-zero solutions and, therefore, the system of vectors is dependent.

We check the system for degeneracy by computing the determinant of its matrix:

| −20  −7   3 |
| −15  −2  −1 | = −80 − 28 + 180 − 24 + 80 + 210 = 338 ≠ 0.
| −4   −4  −2 |

The system is non-degenerate and, thus, the vectors a1, a2, a3 are linearly independent.
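The same determinant computed with numpy:

```python
import numpy as np

# Example 2: coefficient matrix with columns a1, a2, a3. A non-zero
# determinant means only the trivial solution exists, so the vectors
# are linearly independent.
m = np.array([[-20, -7, 3],
              [-15, -2, -1],
              [-4, -4, -2]], dtype=float)

d = np.linalg.det(m)
print(round(d))   # 338
```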

Tasks. Find out whether a given system of vectors is linearly dependent or linearly independent:

1. a 1 = { -4, 2, 8 }, a 2 = { 14, -7, -28 }.

2. a 1 = { 2, -1, 3, 5 }, a 2 = { 6, -3, 3, 15 }.

3. a 1 = { -7, 5, 19 }, a 2 = { -5, 7 , -7 }, a 3 = { -8, 7, 14 }.

4. a 1 = { 1, 2, -2 }, a 2 = { 0, -1, 4 }, a 3 = { 2, -3, 3 }.

5. a 1 = { 1, 8 , -1 }, a 2 = { -2, 3, 3 }, a 3 = { 4, -11, 9 }.

6. a 1 = { 1, 2 , 3 }, a 2 = { 2, -1 , 1 }, a 3 = { 1, 3, 4 }.

7. a 1 = {0, 1, 1 , 0}, a 2 = {1, 1 , 3, 1}, a 3 = {1, 3, 5, 1}, a 4 = {0, 1, 1, -2}.

8. a 1 = {-1, 7, 1 , -2}, a 2 = {2, 3 , 2, 1}, a 3 = {4, 4, 4, -3}, a 4 = {1, 6, -11, 1}.

9. Prove that a system of vectors will be linearly dependent if it contains:

a) two equal vectors;

b) two proportional vectors.