Linear dependence and independence of systems of vectors: properties, criteria for linear dependence, examples, and the linear independence theorem.

Lemma 1: If at least one row (column) of an n × n matrix is zero, then the rows (columns) of the matrix are linearly dependent.

Proof: Let the first row A_1 be zero. Then

1·A_1 + 0·A_2 + … + 0·A_n = 0,

where the coefficient α_1 = 1 ≠ 0, so this is a non-trivial linear combination equal to zero, and the rows are linearly dependent. That is what was required.

Definition: A matrix whose elements below the main diagonal are equal to zero is called triangular:

a_{ij} = 0 for i > j.

Lemma 2: The determinant of a triangular matrix is equal to the product of the elements of the main diagonal.

The proof is easy to carry out by induction on the dimension of the matrix.
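Lemma 2 is easy to check numerically; the following is a minimal sketch assuming NumPy, with an arbitrary illustrative matrix:

```python
import numpy as np

# An arbitrary 3x3 triangular matrix (elements below the main diagonal are zero).
A = np.array([[2.0, 5.0, -1.0],
              [0.0, 3.0,  4.0],
              [0.0, 0.0, -7.0]])

det_A = np.linalg.det(A)             # determinant computed numerically
diag_product = np.prod(np.diag(A))   # product of the main-diagonal elements

print(det_A, diag_product)  # both equal -42 (up to floating-point rounding)
```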

Theorem on the linear independence of vectors. The rows (columns) of a square matrix A are linearly independent if and only if D = det A ≠ 0.

a) Necessity: if the columns are linearly dependent, then D = 0.

Proof: Suppose the columns are linearly dependent; that is, there exist coefficients α_j, j = 1, …, n, not all equal to zero, such that

α_1 A_1 + α_2 A_2 + … + α_n A_n = 0,

where A_j are the columns of the matrix A. Suppose, for example, that α_n ≠ 0.

Setting α_j* = α_j / α_n for j ≤ n − 1, we obtain α_1* A_1 + α_2* A_2 + … + α_{n−1}* A_{n−1} + A_n = 0.

Replace the last column of the matrix A by the column

A_n* = α_1* A_1 + α_2* A_2 + … + α_{n−1}* A_{n−1} + A_n = 0.

By the property of the determinant proved above (the determinant does not change if a multiple of another column is added to any column of the matrix), the determinant of the new matrix is equal to the determinant of the original one. But in the new matrix one column is zero, so, expanding the determinant along this column, we get D = 0, Q.E.D.
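As a numerical illustration of part a), here is a small sketch assuming NumPy; the matrix is an arbitrary example whose third column is a combination of the first two:

```python
import numpy as np

# Columns A1, A2 and A3 = 2*A1 - A2, so the columns are linearly dependent.
A1 = np.array([1.0, 2.0, 3.0])
A2 = np.array([0.0, 1.0, 4.0])
A3 = 2 * A1 - A2
A = np.column_stack([A1, A2, A3])

print(np.linalg.det(A))  # prints 0 (up to floating-point rounding)
```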

b) Sufficiency: an n × n matrix with linearly independent rows can always be reduced to triangular form by transformations that do not change the absolute value of the determinant. Moreover, from the linear independence of the rows of the original matrix it follows that its determinant is not equal to zero.

1. If in an n × n matrix with linearly independent rows the element a_{11} is equal to zero, swap the first column with a column whose element a_{1j} ≠ 0. By Lemma 1 such an element exists (otherwise the first row would be zero and the rows would be linearly dependent). The determinant of the transformed matrix can differ from the determinant of the original matrix only in sign.

2. From each row with number i > 1 subtract the first row multiplied by a_{i1}/a_{11}. As a result, the elements of the first column in the rows with numbers i > 1 become zero.

3. Compute the determinant of the resulting matrix by expanding along the first column. Since all of its elements except the first are equal to zero,

D^{new} = a_{11}^{new} · (−1)^{1+1} · D_{11}^{new},

where D_{11}^{new} is the determinant of a matrix of smaller size.

Next, to compute the determinant D_{11}, repeat steps 1, 2, 3 until the last determinant obtained is the determinant of a 1 × 1 matrix. Since step 1 only changes the sign of the determinant of the matrix being transformed, and step 2 does not change the value of the determinant at all, up to sign we ultimately obtain the determinant of the original matrix. Moreover, because the rows of the original matrix are linearly independent, step 1 can always be carried out, so all elements of the main diagonal turn out to be non-zero. Thus, by Lemma 2, the final determinant produced by the described algorithm is equal to the product of non-zero elements on the main diagonal. Therefore the determinant of the original matrix is not equal to zero. Q.E.D.
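The reduction described in steps 1–3 can be written as a short program. Below is a minimal sketch (my own illustrative implementation in Python, not taken from the text) that follows the same idea: swap in a non-zero pivot column, eliminate the first column, and recurse; the result agrees with the determinant up to floating-point rounding.

```python
import numpy as np

def det_by_reduction(A):
    """Determinant via the column-swap / row-elimination scheme of steps 1-3."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    sign = 1.0
    # Step 1: if a11 == 0, swap the first column with one whose first entry is non-zero.
    if A[0, 0] == 0.0:
        j = np.flatnonzero(A[0])[0]   # such a column exists when the rows are independent
        A[:, [0, j]] = A[:, [j, 0]]
        sign = -1.0                   # a column swap changes only the sign
    # Step 2: subtract multiples of the first row to zero out the first column below a11.
    for i in range(1, n):
        A[i] -= (A[i, 0] / A[0, 0]) * A[0]
    # Step 3: expand along the first column and recurse on the smaller matrix.
    return sign * A[0, 0] * det_by_reduction(A[1:, 1:])

M = np.array([[0.0, 2.0, 1.0],
              [3.0, 1.0, 0.0],
              [1.0, 4.0, 2.0]])
print(det_by_reduction(M), np.linalg.det(M))  # the two values agree (both -1)
```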


Appendix 2

Below we give several criteria for linear dependence and, correspondingly, linear independence of systems of vectors.

Theorem. (A necessary and sufficient condition for the linear dependence of vectors.)

A system of vectors is linearly dependent if and only if one of the vectors of the system is linearly expressed through the other vectors of this system.

Proof. Necessity. Let the system a_1, a_2, …, a_n be linearly dependent. Then, by definition, it represents the zero vector non-trivially, i.e. there is a non-trivial linear combination of the vectors of this system equal to the zero vector:

α_1 a_1 + α_2 a_2 + … + α_n a_n = 0,

where at least one coefficient of this linear combination is not equal to zero. Let α_k ≠ 0.

Divide both sides of the previous equality by this non-zero coefficient (i.e. multiply by 1/α_k):

a_k = −(α_1/α_k) a_1 − … − (α_{k−1}/α_k) a_{k−1} − (α_{k+1}/α_k) a_{k+1} − … − (α_n/α_k) a_n.

Denote β_j = −α_j/α_k, so that a_k = β_1 a_1 + … + β_{k−1} a_{k−1} + β_{k+1} a_{k+1} + … + β_n a_n,

i.e. one of the vectors of the system is linearly expressed through the other vectors of this system, q.e.d.

Sufficiency. Let one of the vectors of the system be linearly expressed through the other vectors of this system:

a_k = β_1 a_1 + … + β_{k−1} a_{k−1} + β_{k+1} a_{k+1} + … + β_n a_n.

Move the vector a_k to the other side of this equality:

β_1 a_1 + … + β_{k−1} a_{k−1} + (−1)·a_k + β_{k+1} a_{k+1} + … + β_n a_n = 0.

Since the coefficient of the vector a_k is equal to −1 ≠ 0, we have a non-trivial representation of zero by the system of vectors, which means that this system of vectors is linearly dependent, q.e.d.

The theorem has been proven.
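A small numerical illustration of the theorem (a sketch assuming NumPy; the vectors are an arbitrary example): for a linearly dependent system one can actually solve for the coefficients that express one vector through the others.

```python
import numpy as np

# a3 is constructed as a combination of a1 and a2, so the system {a1, a2, a3} is dependent.
a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([2.0, 1.0, 0.0])
a3 = 3 * a1 - 2 * a2

# Solve a3 = c1*a1 + c2*a2 in the least-squares sense; an exact solution exists here.
coeffs, residual, rank, _ = np.linalg.lstsq(np.column_stack([a1, a2]), a3, rcond=None)
print(coeffs)  # approximately [ 3. -2.]
```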

Corollary.

1. A system of vectors in a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of other vectors of this system.

2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.

Proof.

1) Necessity. Let the system be linearly independent. Assume the opposite: there is a vector of the system that is linearly expressed through the other vectors of this system. Then, by the theorem, the system is linearly dependent, and we arrive at a contradiction.

Sufficiency. Let none of the vectors of the system be expressed through the others. Assume the opposite: let the system be linearly dependent. Then it follows from the theorem that there is a vector of the system that is linearly expressed through the other vectors of this system, and we again arrive at a contradiction.

2a) Let the system contain a zero vector. Assume for definiteness that a_1 = 0. Then the equality

a_1 = 0·a_2 + 0·a_3 + … + 0·a_n

is obvious, i.e. one of the vectors of the system is linearly expressed through the other vectors of this system. It follows from the theorem that such a system of vectors is linearly dependent, q.e.d.

Note that this fact can also be proved directly from the definition of a linearly dependent system of vectors.

Since a_1 = 0, the following equality is obvious:

1·a_1 + 0·a_2 + … + 0·a_n = 0.

This is a non-trivial representation of the zero vector, which means the system is linearly dependent.

2b) Let the system contain two equal vectors. Assume for definiteness that a_1 = a_2. Then the equality

a_1 = 1·a_2 + 0·a_3 + … + 0·a_n

is obvious, i.e. the first vector is linearly expressed through the remaining vectors of the same system. It follows from the theorem that this system is linearly dependent, q.e.d.

As before, this statement can also be proved directly from the definition of a linearly dependent system: with a_1 = a_2 the system represents the zero vector non-trivially,

1·a_1 + (−1)·a_2 + 0·a_3 + … + 0·a_n = 0,

from which the linear dependence of the system follows.

The theorem has been proven.

Corollary. A system consisting of one vector is linearly independent if and only if this vector is non-zero.

Let L be a linear space over the field R. Let a_1, a_2, …, a_n (*) be a finite system of vectors from L. A vector b = α_1·a_1 + α_2·a_2 + … + α_n·a_n (16) is called a linear combination of the vectors (*); we also say that the vector b is linearly expressed through the system of vectors (*).

Definition 14. The system of vectors (*) is called linearly dependent if there exists a non-zero set of coefficients α_1, α_2, …, α_n such that α_1·a_1 + α_2·a_2 + … + α_n·a_n = 0. If α_1·a_1 + α_2·a_2 + … + α_n·a_n = 0 ⇔ α_1 = α_2 = … = α_n = 0, then the system (*) is called linearly independent.

Properties of linear dependence and independence.

1°. If a system of vectors contains a zero vector, then it is linearly dependent.

Indeed, if in the system (*) the vector a_1 = 0, then 1·0 + 0·a_2 + … + 0·a_n = 0.

2°. If a system of vectors contains two proportional vectors, then it is linearly dependent.

Let a_1 = λ·a_2. Then 1·a_1 − λ·a_2 + 0·a_3 + … + 0·a_n = 0.

3°. A finite system of vectors (*) with n ≥ 2 is linearly dependent if and only if at least one of its vectors is a linear combination of the remaining vectors of this system.

⇒ Let (*) be linearly dependent. Then there is a non-zero set of coefficients α_1, α_2, …, α_n for which α_1·a_1 + α_2·a_2 + … + α_n·a_n = 0. Without loss of generality we may assume that α_1 ≠ 0. Then a_1 = −(α_2/α_1)·a_2 − … − (α_n/α_1)·a_n, so the vector a_1 is a linear combination of the remaining vectors.

⇐ Let one of the vectors of (*) be a linear combination of the others. We may assume that this is the first vector, i.e. a_1 = β_2·a_2 + … + β_n·a_n. Hence (−1)·a_1 + β_2·a_2 + … + β_n·a_n = 0, i.e. (*) is linearly dependent.

Comment. Using the last property, we can define the linear dependence and independence of an infinite system of vectors.

Definition 15. A system of vectors a_1, a_2, …, a_n, … (**) is called linearly dependent if at least one of its vectors is a linear combination of some finite number of other vectors of the system. Otherwise, the system (**) is called linearly independent.

4°. A finite system of vectors is linearly independent if and only if none of its vectors can be linearly expressed through the remaining vectors.

5°. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

6°. If some subsystem of a given system of vectors is linearly dependent, then the entire system is also linearly dependent.

Let two systems of vectors be given: a_1, a_2, …, a_n, … (16) and b_1, b_2, …, b_s, … (17). If each vector of system (16) can be represented as a linear combination of a finite number of vectors of system (17), then system (16) is said to be linearly expressed through system (17).

Definition 16. Two systems of vectors are called equivalent if each of them is linearly expressed through the other.

Theorem 9 (basic linear dependence theorem).

Let a_1, a_2, …, a_n and b_1, b_2, …, b_s be two finite systems of vectors from L. If the first system is linearly independent and is linearly expressed through the second, then n ≤ s.

Proof. Assume the contrary: n > s. By the hypothesis of the theorem, each vector of the first system is linearly expressed through the second:

a_i = c_{1i}·b_1 + c_{2i}·b_2 + … + c_{si}·b_s, i = 1, 2, …, n.

Consider the equality

x_1·a_1 + x_2·a_2 + … + x_n·a_n = 0. (18)

Since the system a_1, …, a_n is linearly independent, equality (18) holds ⇔ x_1 = x_2 = … = x_n = 0. Substitute into (18) the expressions of the vectors a_i:

x_1·(c_{11}b_1 + … + c_{s1}b_s) + … + x_n·(c_{1n}b_1 + … + c_{sn}b_s) = 0. (19)

Hence

(c_{11}x_1 + … + c_{1n}x_n)·b_1 + … + (c_{s1}x_1 + … + c_{sn}x_n)·b_s = 0. (20)

Conditions (18), (19) and (20) are obviously equivalent. But (18) is satisfied only when x_1 = x_2 = … = x_n = 0. Let us find out when equality (20) is true. If all of its coefficients are zero, then it certainly holds. Equating them to zero, we obtain the system

c_{11}x_1 + … + c_{1n}x_n = 0, …, c_{s1}x_1 + … + c_{sn}x_n = 0. (21)

Since this homogeneous system has the zero solution, it is consistent. Since the number of unknowns n is greater than the number of equations s, the system has infinitely many solutions. Therefore it has a non-zero solution x_1^0, x_2^0, …, x_n^0. For these values equality (18) holds with coefficients not all equal to zero, which contradicts the linear independence of the system of vectors a_1, …, a_n. So our assumption is wrong. Hence n ≤ s.
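A small numerical illustration of Theorem 9 (a sketch assuming NumPy; the vectors are arbitrary examples): three vectors that are all linear combinations of two fixed vectors cannot be linearly independent, which shows up as a rank deficiency.

```python
import numpy as np

# Two "generating" vectors b1, b2 and three vectors a1, a2, a3 expressed through them.
b1 = np.array([1.0, 0.0, 1.0, 2.0])
b2 = np.array([0.0, 1.0, 3.0, -1.0])
a1, a2, a3 = b1 + b2, 2 * b1 - b2, b1 - 4 * b2

A = np.column_stack([a1, a2, a3])
print(np.linalg.matrix_rank(A))  # 2, not 3: the system a1, a2, a3 is linearly dependent
```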

Corollary. If two equivalent systems of vectors are finite and linearly independent, then they contain the same number of vectors.

Definition 17. A system of vectors is called a maximal linearly independent system of vectors of a linear space L if it is linearly independent, but becomes linearly dependent when any vector of L not already in the system is added to it.

Theorem 10. Any two finite maximal linearly independent systems of vectors from L contain the same number of vectors.

The proof follows from the fact that any two maximal linearly independent systems of vectors are equivalent.

It is easy to prove that any linearly independent system of space vectors L can be expanded to a maximal linearly independent system of vectors in this space.

Examples:

1. In the set of all collinear geometric vectors, any system consisting of one nonzero vector is maximally linearly independent.

2. In the set of all coplanar geometric vectors, any two non-collinear vectors constitute a maximal linearly independent system.

3. In the set of all possible geometric vectors of three-dimensional Euclidean space, any system of three non-coplanar vectors is maximally linearly independent.

4. In the set of all polynomials of degree at most n with real (complex) coefficients, the system of polynomials 1, x, x², …, xⁿ is maximal linearly independent.

5. In the set of all polynomials with real (complex) coefficients, examples of maximal linearly independent systems are:

a) 1, x, x², …, xⁿ, …;

b) 1, (1 − x), (1 − x)², …, (1 − x)ⁿ, …

6. The set of m × n matrices is a linear space (check this). An example of a maximal linearly independent system in this space is the system of matrix units E_{11}, E_{12}, …, E_{mn}, where E_{ij} is the matrix with 1 in position (i, j) and zeros elsewhere.

Let a system of vectors c_1, c_2, …, c_s (*) be given. A subsystem of vectors from (*) is called a maximal linearly independent subsystem of the system (*) if it is linearly independent, but becomes linearly dependent when any other vector of this system is added to it. If the system (*) is finite, then all of its maximal linearly independent subsystems contain the same number of vectors. (Prove this yourself.) The number of vectors in a maximal linearly independent subsystem of the system (*) is called the rank of this system. Obviously, equivalent systems of vectors have the same rank.
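Ranks of concrete systems of vectors are easy to compute numerically; here is a minimal sketch assuming NumPy, with arbitrary example vectors: the rank of the system equals the size of any maximal linearly independent subsystem.

```python
import numpy as np

# Four vectors in R^3; c3 and c4 are combinations of c1 and c2.
c1 = np.array([1.0, 2.0, 0.0])
c2 = np.array([0.0, 1.0, 1.0])
c3 = c1 + 2 * c2
c4 = 3 * c1 - c2

C = np.column_stack([c1, c2, c3, c4])
print(np.linalg.matrix_rank(C))  # 2: {c1, c2} is a maximal linearly independent subsystem
```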

Theorem 1. (On the linear independence of orthogonal vectors.) Let x_1, x_2, …, x_n be non-zero pairwise orthogonal vectors. Then this system of vectors is linearly independent.

Proof. Form a linear combination ∑λ_i x_i = 0 and take the scalar product with x_j: (x_j, ∑λ_i x_i) = λ_j·||x_j||² = 0; but ||x_j||² ≠ 0, hence λ_j = 0.
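A quick numerical check of Theorem 1 (a sketch assuming NumPy; the orthogonal vectors below are an arbitrary example):

```python
import numpy as np

# Three non-zero pairwise orthogonal vectors in R^3.
x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([1.0, -1.0, 0.0])
x3 = np.array([0.0, 0.0, 2.0])

X = np.column_stack([x1, x2, x3])
print(X.T @ X)                   # diagonal matrix: the vectors are pairwise orthogonal
print(np.linalg.matrix_rank(X))  # 3: the system is linearly independent
```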

Definition 1. A system of vectors e_1, e_2, …, e_n, … such that (e_i, e_j) = δ_ij (the Kronecker symbol) is called an orthonormal system (ONS).

Definition 2. For an arbitrary element x of an arbitrary infinite-dimensional Euclidean space and an arbitrary orthonormal system of elements e_1, e_2, …, the Fourier series of the element x with respect to this system is the formally composed infinite sum (series) ∑λ_i e_i, in which the real numbers λ_i = (x, e_i) are called the Fourier coefficients of the element x with respect to the system.

A comment. (Naturally, the question arises about the convergence of this series. To study this issue, we fix an arbitrary number n and find out what distinguishes the nth partial sum of the Fourier series from any other linear combination of the first n elements of the orthonormal system.)

Theorem 2. For any fixed number n, among all sums of the form ∑_{i=1}^{n} c_i e_i, the n-th partial sum of the Fourier series of the element x has the smallest deviation from x in the norm of the given Euclidean space.

Proof. Taking into account the orthonormality of the system and the definition of the Fourier coefficients λ_i = (x, e_i), we can write

||x − ∑_{i=1}^{n} c_i e_i||² = ||x||² − 2∑_{i=1}^{n} c_i(x, e_i) + ∑_{i=1}^{n} c_i² = ||x||² + ∑_{i=1}^{n} (c_i − λ_i)² − ∑_{i=1}^{n} λ_i².

The minimum of this expression is attained at c_i = λ_i, since in this case the non-negative sum ∑(c_i − λ_i)² vanishes, and the remaining terms do not depend on c_i.
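Theorem 2 is easy to check numerically in a finite-dimensional setting; the sketch below (my own illustration, assuming NumPy) compares the deviation for the Fourier coefficients with the deviation for perturbed coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthonormal system of two vectors in R^4 (columns of E) and an arbitrary element x.
E = np.linalg.qr(rng.standard_normal((4, 2)))[0]   # columns are orthonormal
x = rng.standard_normal(4)

lam = E.T @ x                        # Fourier coefficients lambda_i = (x, e_i)
best = np.linalg.norm(x - E @ lam)

other = lam + np.array([0.3, -0.2])  # any other choice of coefficients
worse = np.linalg.norm(x - E @ other)

print(best <= worse)  # True: the partial Fourier sum gives the smallest deviation
```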

Example. Consider the trigonometric system

1/√(2π), cos x/√π, sin x/√π, cos 2x/√π, sin 2x/√π, …

in the space of all Riemann-integrable functions f(x) on the segment [−π, π] with the scalar product (f, g) = ∫_{−π}^{π} f(x)g(x) dx. It is easy to check that this is an ONS, and then the Fourier series of the function f(x) has the form ∑λ_i e_i, where λ_i = (f, e_i).

A comment. (The trigonometric Fourier series is usually written in the form f(x) ~ a_0/2 + ∑_{k=1}^{∞}(a_k cos kx + b_k sin kx), where a_k = (1/π)∫_{−π}^{π} f(x) cos kx dx and b_k = (1/π)∫_{−π}^{π} f(x) sin kx dx.)
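The usual coefficients a_k, b_k can be approximated numerically; below is a minimal sketch (my own example, assuming NumPy) for the function f(x) = x on [−π, π], whose sine coefficients are known to be b_k = 2(−1)^{k+1}/k.

```python
import numpy as np

def fourier_coefficients(f, k, num=20000):
    """Approximate a_k and b_k of f on [-pi, pi] with the trapezoidal rule."""
    x = np.linspace(-np.pi, np.pi, num)
    a_k = np.trapz(f(x) * np.cos(k * x), x) / np.pi
    b_k = np.trapz(f(x) * np.sin(k * x), x) / np.pi
    return a_k, b_k

f = lambda x: x
for k in range(1, 4):
    a_k, b_k = fourier_coefficients(f, k)
    print(k, round(a_k, 4), round(b_k, 4), 2 * (-1) ** (k + 1) / k)
# b_k matches 2*(-1)^(k+1)/k (2, -1, 0.6667...); all a_k are ~0 since f is odd
```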

An arbitrary ONS in an infinite-dimensional Euclidean space, without additional assumptions, is, generally speaking, not a basis of this space. At an intuitive level, without giving strict definitions, we describe the essence of the matter. In an arbitrary infinite-dimensional Euclidean space E, consider an ONS e_1, e_2, …, where (e_i, e_j) = δ_ij is the Kronecker symbol. Let M be a subspace of the Euclidean space and M^⊥ the subspace orthogonal to M, so that E = M + M^⊥. The projection of a vector x ∈ E onto the subspace M is the vector x̂ ∈ M, where

x̂ = ∑_k α_k e_k.
We look for the values of the expansion coefficients α_k for which the residual (more precisely, the squared residual) h² = ||x − x̂||² is minimal:

h² = ||x − x̂||² = (x − ∑α_k e_k, x − ∑α_k e_k) = ||x||² − 2∑α_k(x, e_k) + ∑α_k² + ∑(x, e_k)² − ∑(x, e_k)² = ||x||² + ∑(α_k − (x, e_k))² − ∑(x, e_k)².

It is clear that this expression attains its minimum at α_k = (x, e_k). Then ρ_min = ||x||² − ∑α_k² ≥ 0, from which we obtain Bessel's inequality ∑α_k² ≤ ||x||². If ρ = 0 for every x, the orthonormal system of vectors (ONS) is called a complete orthonormal system in the sense of Steklov (a PONS). In that case we obtain the Steklov–Parseval equality ∑α_k² = ||x||², the "Pythagorean theorem" for infinite-dimensional Euclidean spaces that are complete in the sense of Steklov. One would now have to prove that for every vector of the space to be uniquely representable as a Fourier series converging to it, it is necessary and sufficient that the Steklov–Parseval equality hold; such a system of vectors then forms an orthonormal basis (ONB). Indeed, for the partial sums of the Fourier series we have ||x − ∑_{i≤n} λ_i e_i||² = ||x||² − ∑_{i≤n} λ_i², and if the Steklov–Parseval equality holds this tends to zero, like the tail of a convergent series. Thus the system of vectors is a PONS and forms an ONB.
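Bessel's inequality can be observed numerically with the normalized trigonometric system; the sketch below (my own illustration, assuming NumPy, for f(x) = x) shows the partial sums ∑λ_k² staying below ||f||² and approaching it as more terms are taken.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
f = x                                   # the element f(x) = x
norm_sq = np.trapz(f * f, x)            # ||f||^2 = 2*pi^3/3

bessel_sum = 0.0
for k in range(1, 51):
    e_k = np.sin(k * x) / np.sqrt(np.pi)   # normalized ONS element (cosine terms vanish for odd f)
    lam_k = np.trapz(f * e_k, x)           # Fourier coefficient lambda_k = (f, e_k)
    bessel_sum += lam_k ** 2

print(bessel_sum <= norm_sq, bessel_sum / norm_sq)  # True, ratio close to 1 (Parseval in the limit)
```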

Example. The trigonometric system

1/√(2π), cos x/√π, sin x/√π, cos 2x/√π, sin 2x/√π, …

in the space of all Riemann-integrable functions f(x) on the segment [−π, π] is a PONS and forms an ONB.

The functions y_1(x), y_2(x), …, y_n(x) are called linearly independent if

α_1 y_1(x) + α_2 y_2(x) + … + α_n y_n(x) ≡ 0 only for α_1 = α_2 = … = α_n = 0

(only the trivial linear combination of the functions is identically equal to zero). In contrast to the linear independence of vectors, here the linear combination must be identically zero, not merely equal to zero at a single point. This is natural, since the equality of the linear combination to zero must hold for every value of the argument.

The functions y_1(x), y_2(x), …, y_n(x) are called linearly dependent if there is a non-zero set of constants α_1, α_2, …, α_n (not all equal to zero) such that α_1 y_1(x) + α_2 y_2(x) + … + α_n y_n(x) ≡ 0 (there is a non-trivial linear combination of the functions identically equal to zero).

Theorem. For functions to be linearly dependent it is necessary and sufficient that one of them be linearly expressed through the others (represented as their linear combination).

Prove this theorem yourself; it is proven in the same way as a similar theorem about the linear dependence of vectors.

The Wronski determinant (Wronskian).

The Wronski determinant of the functions y_1(x), …, y_n(x) is the determinant whose columns consist of the derivatives of these functions from order zero (the functions themselves) up to order n − 1:

W(x) = | y_1(x)          y_2(x)          …  y_n(x)
        y_1'(x)         y_2'(x)         …  y_n'(x)
        …               …               …  …
        y_1^{(n−1)}(x)  y_2^{(n−1)}(x)  …  y_n^{(n−1)}(x) |.

Theorem. If the functions y_1(x), …, y_n(x) are linearly dependent, then W(x) ≡ 0.

Proof. Since the functions are linearly dependent, one of them is linearly expressed through the others, for example

y_1(x) = c_2 y_2(x) + … + c_n y_n(x).

This identity can be differentiated, so

y_1^{(k)}(x) = c_2 y_2^{(k)}(x) + … + c_n y_n^{(k)}(x), k = 1, …, n − 1.

Then the first column of the Wronski determinant is linearly expressed through the remaining columns, so the Wronski determinant is identically equal to zero.
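This is easy to see symbolically; the sketch below (my own example, assuming SymPy) computes the Wronskian of a dependent triple and of an independent pair of functions.

```python
import sympy as sp

x = sp.symbols('x')

# Dependent functions: the third is a combination of the first two.
dep = [sp.sin(x), sp.cos(x), 2 * sp.sin(x) - 3 * sp.cos(x)]
print(sp.simplify(sp.wronskian(dep, x)))   # 0: the Wronskian vanishes identically

# Independent functions: solutions of y'' - y = 0.
ind = [sp.exp(x), sp.exp(-x)]
print(sp.simplify(sp.wronskian(ind, x)))   # -2: non-zero, the functions are independent
```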

Theorem. For the solutions y_1(x), …, y_n(x) of a linear homogeneous differential equation of the n-th order to be linearly dependent, it is necessary and sufficient that W(x) ≡ 0.

Proof. Necessity follows from the previous theorem.

Sufficiency. Fix some point x_0. Since W(x_0) = 0, the columns of the determinant computed at this point are linearly dependent vectors, so there exist constants c_1, …, c_n, not all zero, such that the relations

c_1 y_1^{(k)}(x_0) + c_2 y_2^{(k)}(x_0) + … + c_n y_n^{(k)}(x_0) = 0, k = 0, 1, …, n − 1,

are satisfied.

Since a linear combination of solutions of a linear homogeneous equation is again a solution, we may introduce the solution

y(x) = c_1 y_1(x) + c_2 y_2(x) + … + c_n y_n(x),

a linear combination of the solutions with these same coefficients.

Note that this solution satisfies zero initial conditions at the point x_0; this follows from the system of relations written above. But the trivial solution of the linear homogeneous equation also satisfies the same zero initial conditions. Therefore, by Cauchy's theorem, the introduced solution is identically equal to the trivial one, so

c_1 y_1(x) + c_2 y_2(x) + … + c_n y_n(x) ≡ 0 with not all c_i equal to zero;

therefore the solutions are linearly dependent.

Corollary. If the Wronski determinant built on solutions of a linear homogeneous equation vanishes at least at one point, then it is identically equal to zero.

Proof. If W(x_0) = 0 at some point x_0, then the solutions are linearly dependent, and therefore W(x) ≡ 0.

Theorem. 1. For linear dependence of the solutions it is necessary and sufficient that W(x) ≡ 0 (or that W(x_0) = 0 at at least one point).

2. For linear independence of the solutions it is necessary and sufficient that W(x) ≠ 0 at every point.

Proof. The first statement follows from the theorem and the corollary proved above. The second statement is easily proved by contradiction.

Let the solutions be linearly independent. If W(x_0) = 0 at some point, then by the first statement the solutions are linearly dependent, a contradiction. Hence W(x) ≠ 0 at every point.

Conversely, let W(x) ≠ 0 at every point. If the solutions were linearly dependent, then W(x) ≡ 0, a contradiction. Therefore the solutions are linearly independent.

Corollary. The vanishing of the Wronski determinant at at least one point is a criterion for the linear dependence of solutions of a linear homogeneous equation.

The non-vanishing of the Wronski determinant is a criterion for the linear independence of solutions of a linear homogeneous equation.

Theorem. The dimension of the space of solutions of a linear homogeneous equation of the n-th order is equal to n.

Proof.

a) Let us show that there exist n linearly independent solutions of a linear homogeneous differential equation of the n-th order. Consider the solutions y_1(x), y_2(x), …, y_n(x) satisfying the following initial conditions at a point x_0:

y_1(x_0) = 1, y_1'(x_0) = 0, …, y_1^{(n−1)}(x_0) = 0;
y_2(x_0) = 0, y_2'(x_0) = 1, …, y_2^{(n−1)}(x_0) = 0;
...........................................................
y_n(x_0) = 0, y_n'(x_0) = 0, …, y_n^{(n−1)}(x_0) = 1.

Such solutions exist. Indeed, by Cauchy's theorem, through each such set of initial data there passes exactly one integral curve, i.e. exactly one solution: through the first set passes the solution y_1(x), through the second the solution y_2(x), …, through the last the solution y_n(x).

These solutions are linearly independent, since W(x_0) = det E = 1 ≠ 0.

b) Let us show that any solution of the linear homogeneous equation is linearly expressed through these solutions (is their linear combination).

Consider two solutions. One is an arbitrary solution y(x) with initial conditions y(x_0), y'(x_0), …, y^{(n−1)}(x_0). The following relation holds