§7. Examples of linear spaces

The linear span of a set X is the intersection M of all subspaces L containing X.

The linear span is also called the subspace generated by X, and is usually denoted L(X). It is also said that the linear span is spanned by the set X.


Let A be a system of vectors from a vector space V over a field P.

Definition 2: The linear span L(A) of a system A is the set of all linear combinations of the vectors of the system A.

It can be shown that for any two systems A and B:

A is linearly expressed through B if and only if L(A) ⊆ L(B). (1)

A is equivalent to B if and only if L(A) = L(B). (2)

The proof of (2) follows from property (1).

3. The linear span of any system of vectors is a subspace of the space V.

Proof

Take any two vectors x and y from L(A), with expansions in the vectors of A: x = λ1a1 + … + λnan, y = μ1a1 + … + μnan. Let us check that the two conditions of the subspace criterion hold:

1) x + y = (λ1 + μ1)a1 + … + (λn + μn)an is a linear combination of the vectors of the system A;

2) αx = (αλ1)a1 + … + (αλn)an is also a linear combination of the vectors of the system A. ∎

Now consider a matrix A. The linear span of the rows of A is called the row space of the matrix and is denoted Lr(A). The linear span of the columns of A is called the column space and is denoted Lc(A). Note that for an m×n matrix A the row space and the column space are subspaces of different arithmetic spaces, P^n and P^m respectively. Using statement (2), we arrive at the following conclusion:

Theorem 3: If one matrix is obtained from another by a chain of elementary row transformations, then the row spaces of the two matrices coincide.
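Theorem 3 can be checked numerically. The sketch below uses the fact that two sets of rows span the same space exactly when stacking them together adds no rank; `rank` is a hypothetical helper (Gaussian elimination over the rationals, so the arithmetic is exact), not part of the text above.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of rows, via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]   # row 2 is twice row 1
B = [[1, 2, 3], [0, 1, 1], [0, 0, 0]]   # A after elementary row transformations
# Row spaces coincide iff stacking the rows of one onto the other adds no rank.
print(rank(A), rank(B), rank(A + B))    # 2 2 2
```

Since rank(A) = rank(B) = rank of the stacked rows, every row of B lies in the row space of A and vice versa, i.e. Lr(A) = Lr(B).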

Sum and intersection of subspaces

Let L and M be two subspaces of a space R.

The sum L + M is the set of vectors x + y, where x ∈ L and y ∈ M. Obviously, any linear combination of vectors from L + M belongs to L + M, hence L + M is a subspace of the space R (it may coincide with R itself).

The intersection L ∩ M of the subspaces L and M is the set of vectors that belong to both L and M (it may consist of the zero vector alone).

Theorem 6.1. The sum of the dimensions of arbitrary subspaces L and M of a finite-dimensional linear space R equals the dimension of the sum of these subspaces plus the dimension of their intersection:

dim L + dim M = dim(L + M) + dim(L ∩ M).

Proof. Denote F = L + M and G = L ∩ M. Let G be a g-dimensional subspace and choose a basis c1, …, cg in it. Since G ⊆ L and G ⊆ M, this basis can be extended to a basis of L and to a basis of M. Let c1, …, cg, a1, …, al be a basis of the subspace L and c1, …, cg, b1, …, bm a basis of the subspace M. Let us show that the vectors

(6.1) c1, …, cg, a1, …, al, b1, …, bm

constitute a basis of F = L + M. For the vectors (6.1) to form a basis of the space F, they must be linearly independent, and any vector of the space F must be representable as a linear combination of the vectors (6.1).

Let us prove the linear independence of the vectors (6.1). Suppose the zero vector of the space F is represented as a linear combination of the vectors (6.1) with some coefficients:

(6.2) α1c1 + … + αgcg + β1a1 + … + βlal + γ1b1 + … + γmbm = 0,

that is,

(6.3) α1c1 + … + αgcg + β1a1 + … + βlal = −γ1b1 − … − γmbm.

The left side of (6.3) is a vector of the subspace L, and the right side is a vector of the subspace M. Therefore the vector

(6.4) v = −γ1b1 − … − γmbm

belongs to the subspace G = L ∩ M. On the other hand, the vector v can be represented as a linear combination of the basis vectors of the subspace G:

(6.5) v = δ1c1 + … + δgcg.

From equations (6.4) and (6.5) we have

δ1c1 + … + δgcg + γ1b1 + … + γmbm = 0.

But the vectors c1, …, cg, b1, …, bm form a basis of the subspace M, hence they are linearly independent, and δ1 = … = δg = γ1 = … = γm = 0. Then (6.2) takes the form

α1c1 + … + αgcg + β1a1 + … + βlal = 0.

Due to the linear independence of the basis of the subspace L we have α1 = … = αg = β1 = … = βl = 0.

Since all the coefficients in equation (6.2) turned out to be zero, the vectors (6.1) are linearly independent. On the other hand, any vector z from F (by the definition of the sum of subspaces) can be represented as a sum x + y, where x ∈ L and y ∈ M. In turn, x is a linear combination of the vectors c1, …, cg, a1, …, al, and y is a linear combination of the vectors c1, …, cg, b1, …, bm. Therefore the vectors (6.1) generate the subspace F, and we have found that they form a basis of F = L + M.

Counting the basis vectors of the subspaces L and M and of the subspace F = L + M, we have dim L = g + l, dim M = g + m, dim(L + M) = g + l + m. Hence:

dim L + dim M − dim(L ∩ M) = dim(L + M). ∎
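The dimension formula can be illustrated on a concrete pair of subspaces of R³, the two coordinate planes L = span{e1, e2} and M = span{e2, e3}, whose intersection is the x2-axis. The helper `rank` (Gaussian elimination over the rationals) is a sketch assumed for this illustration, not part of the text.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of rows, via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

L = [[1, 0, 0], [0, 1, 0]]   # the (x1, x2)-plane
M = [[0, 1, 0], [0, 0, 1]]   # the (x2, x3)-plane
dim_L, dim_M, dim_sum = rank(L), rank(M), rank(L + M)
dim_int = dim_L + dim_M - dim_sum        # Theorem 6.1 rearranged
print(dim_L, dim_M, dim_sum, dim_int)    # 2 2 3 1
```

The computed dim(L ∩ M) = 1 matches the hand count: the intersection of the two planes is the one-dimensional x2-axis.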

Direct sum of subspaces

Definition 6.2. A space F is the direct sum of subspaces L and M if each vector x of the space F can be represented in exactly one way as a sum x = y + z, where y ∈ L and z ∈ M.

The direct sum is denoted L ⊕ M. If F = L ⊕ M, one says that F decomposes into the direct sum of its subspaces L and M.

Theorem 6.2. For an n-dimensional space R to be the direct sum of subspaces L and M, it suffices that the intersection of L and M contain only the zero element and that the dimension of R equal the sum of the dimensions of the subspaces L and M.

Proof. Choose a basis a1, …, al in the subspace L and a basis b1, …, bm in the subspace M. Let us prove that

(6.11) a1, …, al, b1, …, bm

is a basis of the space R. By the hypothesis of the theorem, the dimension n of the space R equals the sum of the dimensions of the subspaces L and M (n = l + m), so it suffices to prove the linear independence of the vectors (6.11). Suppose the zero vector of the space R is represented as a linear combination of the vectors (6.11) with some coefficients:

(6.12) α1a1 + … + αlal + β1b1 + … + βmbm = 0,

that is,

(6.13) α1a1 + … + αlal = −β1b1 − … − βmbm.

Since the left side of (6.13) is a vector of the subspace L, the right side is a vector of the subspace M, and L ∩ M = {0}, we get

(6.14) α1a1 + … + αlal = 0, β1b1 + … + βmbm = 0.

But the vectors a1, …, al and b1, …, bm are bases of the subspaces L and M respectively, hence linearly independent. Then

(6.15) α1 = … = αl = 0, β1 = … = βm = 0.

It is established that (6.12) holds only under condition (6.15), which proves the linear independence of the vectors (6.11). Therefore they form a basis of R.

Now let x ∈ R. Expand it in the basis (6.11):

(6.16) x = α1a1 + … + αlal + β1b1 + … + βmbm.

Setting

(6.17) x1 = α1a1 + … + αlal, (6.18) x2 = β1b1 + … + βmbm,

we conclude from (6.17) and (6.18) that any vector from R can be represented as a sum x = x1 + x2 of vectors x1 ∈ L and x2 ∈ M. It remains to prove that this representation is unique. Suppose that, besides representation (6.16), there is another representation:

(6.19) x = x1′ + x2′, x1′ ∈ L, x2′ ∈ M.

Subtracting (6.19) from (6.16), we obtain

(6.20) x1 − x1′ = x2′ − x2.

Since x1 − x1′ ∈ L, x2′ − x2 ∈ M, and L ∩ M = {0}, it follows that x1 − x1′ = 0 and x2′ − x2 = 0. Therefore x1 = x1′ and x2 = x2′. ∎
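A minimal sketch of Definition 6.2 and Theorem 6.2: take R² with L = span{(1, 0)} and M = span{(1, 1)}. Here L ∩ M = {0} and dim L + dim M = 2, so every x decomposes uniquely. The helper `decompose` is hypothetical, written just for this pair of subspaces.

```python
# Decompose x in R^2 over the direct sum L ⊕ M,
# where L = span{(1, 0)} and M = span{(1, 1)}.
def decompose(x):
    # Solve a*(1, 0) + b*(1, 1) = x; the solution (a, b) is unique.
    b = x[1]          # the second coordinate can come only from (1, 1)
    a = x[0] - b
    return (a, 0), (b, b)

y, z = decompose((3, 2))
print(y, z)  # (1, 0) (2, 2)
assert (y[0] + z[0], y[1] + z[1]) == (3, 2)
```

The coefficients a and b are forced by the two coordinates of x, which is exactly the uniqueness claimed in the theorem.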

Theorem 8.4 (on the dimension of the sum of subspaces). If L and M are subspaces of a finite-dimensional linear space, then the dimension of the sum of the subspaces equals the sum of their dimensions minus the dimension of their intersection (Grassmann's formula):

(8.13) dim(L + M) = dim L + dim M − dim(L ∩ M).

In fact, let c1, …, cg be a basis of the intersection L ∩ M. Extend it by an ordered set of vectors a1, …, al to a basis of the subspace L, and by an ordered set of vectors b1, …, bm to a basis of the subspace M. Such an extension is possible by Theorem 8.2. From these three sets of vectors form the ordered set c1, …, cg, a1, …, al, b1, …, bm. These vectors are generators of the space L + M: indeed, any vector of this space is represented as a linear combination of vectors of this ordered set. It can be shown, exactly as in the proof of Theorem 6.1, that the generators are linearly independent: equating a linear combination of these vectors to the zero vector forces all coefficients of the expansion to be zero. Hence they form a basis of the space L + M, and counting them yields (8.13).

The orthogonal complement of a subspace L of a vector space with a bilinear form is the set of all vectors orthogonal to every vector from L. This set is a vector subspace, usually denoted L⊥.

This article describes the basics of linear algebra: linear spaces and their properties, the concept of a basis, the dimension of a space, the linear span, and the connection between linear spaces and the rank of matrices.

Linear space

A set L is called a linear space if for all its elements the operations of addition of two elements and of multiplication of an element by a number are defined and satisfy Group I of Weyl's axioms. The elements of a linear space are called vectors. This is the complete definition; more briefly, a linear space is a set of elements for which the operations of addition of two elements and multiplication of an element by a number are defined.

Weyl's axioms.

Hermann Weyl proposed that in geometry we have two types of objects (vectors and points) whose properties are described by the following axioms, which formed the basis of linear algebra. It is convenient to divide the axioms into three groups.

Group I

  1. for any vectors x and y the equality x + y = y + x holds;
  2. for any vectors x, y and z the equality x + (y + z) = (x + y) + z holds;
  3. there is a vector o such that for any vector x the equality x + o = x holds;
  4. for any vector x there is a vector (−x) such that x + (−x) = o;
  5. for any vector x the equality 1·x = x holds;
  6. for any vectors x and y and any number λ the equality λ(x + y) = λx + λy holds;
  7. for any vector x and any numbers λ and μ the equality (λ + μ)x = λx + μx holds;
  8. for any vector x and any numbers λ and μ the equality λ(μx) = (λμ)x holds.

Group II

Group I defines the concepts of a linear combination of vectors, linear dependence, and linear independence. This allows us to formulate two more axioms:

  1. there are n linearly independent vectors;
  2. any (n+1) vectors are linearly dependent.

For planimetry n=2, for stereometry n=3.

Group III

This group assumes that there is a scalar multiplication operation that assigns to a pair of vectors x and y a number (x, y). Moreover:

  1. for any vectors x and y the equality (x, y) = (y, x) holds;
  2. for any vectors x, y and z the equality (x + y, z) = (x, z) + (y, z) holds;
  3. for any vectors x and y and any number λ the equality (λx, y) = λ(x, y) holds;
  4. for any vector x the inequality (x, x) ≥ 0 holds, and (x, x) = 0 if and only if x = o.
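The four Group III axioms can be verified for the standard dot product on integer vectors in R³, where the arithmetic is exact. This is an illustration, not part of the axiom system; the vector choices are arbitrary.

```python
import random

def dot(x, y):
    """Standard scalar product on R^3 (here: integer vectors, exact)."""
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
x, y, z = ([random.randint(-5, 5) for _ in range(3)] for _ in range(3))
lam = 3
add = lambda u, v: [a + b for a, b in zip(u, v)]
scale = lambda l, u: [l * a for a in u]

assert dot(x, y) == dot(y, x)                      # axiom 1: symmetry
assert dot(add(x, y), z) == dot(x, z) + dot(y, z)  # axiom 2: additivity
assert dot(scale(lam, x), y) == lam * dot(x, y)    # axiom 3: homogeneity
assert dot(x, x) >= 0                              # axiom 4: positivity
print("all four Group III axioms hold for the dot product")
```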

Properties of linear space

Most properties of a linear space follow from Weyl's axioms:

  1. the vector o, whose existence is guaranteed by axiom 3, is uniquely determined;
  2. the vector (−x), whose existence is guaranteed by axiom 4, is uniquely determined;
  3. for any two vectors a and b belonging to the space L there is exactly one vector x, also belonging to L, that solves the equation a + x = b; it is called the vector difference b − a.

Definition. A subset L′ of a linear space L is called a linear subspace of L if it is itself a linear space in which the sum of vectors and the product of a vector by a number are defined in the same way as in L.

Definition. The linear span L(x1, x2, …, xk) of the vectors x1, x2, …, xk is the set of all linear combinations of these vectors. About the linear span we can say that

– the linear span is a linear subspace;

– the linear span is the minimal linear subspace containing the vectors x1, x2, …, xk.

Definition. A linear space is called n-dimensional if it satisfies Group II of Weyl's axiom system. The number n is called the dimension of the linear space, written dim L = n.

A basis is any ordered system of n linearly independent vectors of the space. The point of a basis is that any vector of the space can be described in terms of the basis vectors.

Theorem. Any n linearly independent vectors in the space L form a basis.

Isomorphism.

Definition. Linear spaces L and L′ are called isomorphic if a one-to-one correspondence x ↔ x′ can be established between their elements such that:

  1. if x ↔ x′ and y ↔ y′, then x + y ↔ x′ + y′;
  2. if x ↔ x′, then λx ↔ λx′.

This correspondence itself is called an isomorphism. Isomorphism allows us to make the following statements:

  • if two spaces are isomorphic, then their dimensions are equal;
  • any two linear spaces over the same field and of the same dimension are isomorphic.

1. The set of polynomials P_n(x) of degree at most n.

2. The set of n-term sequences (with term-by-term addition and multiplication by a scalar).

3. The set C[a, b] of functions continuous on [a, b], with pointwise addition and multiplication by a scalar.

4. The set of functions defined on [a, b] that vanish at some fixed interior point c: f(c) = 0, with pointwise operations of addition and multiplication by a scalar.

5. The set R+ with the operations x ⊕ y = xy and λ ⊙ x = x^λ.
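Example 5 can be checked numerically: under the operations x ⊕ y = xy and λ ⊙ x = x^λ, the "zero vector" of R+ is the number 1 and the "opposite" of x is 1/x. This is a sketch of a few of the axioms, using floating-point comparison.

```python
import math

# R+ as a linear space: "addition" is multiplication, "scaling" is a power.
plus = lambda x, y: x * y          # x ⊕ y = x·y
times = lambda lam, x: x ** lam    # λ ⊙ x = x**λ

x, y = 2.0, 5.0
lam, mu = 3.0, -1.0
assert math.isclose(plus(x, times(-1.0, x)), 1.0)   # x ⊕ (−1 ⊙ x) = "zero" = 1
assert math.isclose(times(lam, plus(x, y)),
                    plus(times(lam, x), times(lam, y)))   # λ ⊙ (x ⊕ y) axiom
assert math.isclose(times(lam + mu, x),
                    plus(times(lam, x), times(mu, x)))    # (λ+μ) ⊙ x axiom
print("R+ with these operations satisfies the axioms checked above")
```

The distributivity checks work because (xy)^λ = x^λ·y^λ and x^(λ+μ) = x^λ·x^μ for positive x.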

§8. Definition of subspace

Let the set W be a subset of a linear space V (W ⊂ V) such that

a) for any x, y ∈ W: x ⊕ y ∈ W;

b) for any x ∈ W and any λ: λ ⊙ x ∈ W.

The operations of addition and multiplication here are the same as in the space V (they are called the operations induced by the space V).

Such a set W is called a subspace of the space V.

7°. A subspace W is itself a space.

◀ To prove this it suffices to prove the existence of a neutral element and of an opposite element. The equalities 0 ⊙ x = θ and (−1) ⊙ x = −x prove what is required. ▶

The subspace consisting only of the neutral element (θ) and the subspace coinciding with the whole space V are called the trivial subspaces of the space V.

§9. Linear combination of vectors. Linear span of vector system

Let e1, e2, …, en ∈ V and λ1, λ2, …, λn ∈ ℝ.

The vector x = λ1e1 + λ2e2 + … + λnen is called a linear combination of the vectors e1, e2, …, en with coefficients λ1, λ2, …, λn.

If all coefficients in a linear combination are equal to zero, the linear combination is called trivial.

The set of all possible linear combinations of the vectors e1, e2, …, en is called the linear span of this system of vectors and is denoted ℒ(e1, e2, …, en).

8°. ℒ(e1, e2, …, en) is a linear space.

◀ The closedness of the operations of addition and multiplication by a scalar follows from the fact that ℒ(e1, e2, …, en) is the set of all possible linear combinations. The neutral element is the trivial linear combination. For the element x = λ1e1 + … + λnen the opposite is −x = (−λ1)e1 + … + (−λn)en. The axioms that the operations must satisfy also hold. Thus ℒ(e1, e2, …, en) is a linear space. ▶

Any linear space contains, in general, an infinite number of other linear spaces (subspaces), namely linear spans.

In what follows we will try to answer the following questions:

1) When do the linear spans of different systems of vectors consist of the same vectors (i.e., coincide)?

2) What is the minimum number of vectors defining the same linear span?

3) Is the original space the linear span of some system of vectors?

§10. Complete vector systems

If in the space V there is a finite set of vectors e1, e2, …, en such that ℒ(e1, e2, …, en) = V, then the system of vectors e1, e2, …, en is called a complete system in V, and the space is called finite-dimensional. Thus, the system of vectors e1, e2, …, en ∈ V is called a complete system in V if

∀x ∈ V there exist λ1, λ2, …, λn such that x = λ1e1 + λ2e2 + … + λnen.

If in the space V there is no finite complete system (a complete system always exists: for example, the set of all vectors of the space V), then the space V is called infinite-dimensional.

9°. If e1, e2, …, en is a complete system of vectors in V and y ∈ V, then (e1, e2, …, en, y) is also a complete system.

◀ In the linear combinations take the coefficient of y equal to 0. ▶

Let a1, a2, …, an be a system of vectors from a linear space. The linear span of the system of vectors is the set of all linear combinations of the vectors of this system, i.e. ℒ(a1, …, an) = {λ1a1 + … + λnan}.

Property of the linear span: if x, y ∈ ℒ, then x + y ∈ ℒ and λx ∈ ℒ for any λ.

The linear span is closed under the linear operations (the operations of addition and multiplication by a number).

A subset of a space that is closed under the operations of addition and multiplication by numbers is called a linear subspace of the space.

The linear span of a system of vectors is a linear subspace of the space.

A system of vectors is called a basis if:

1. Any vector can be expressed as a linear combination of the basis vectors;

2. The system of vectors is linearly independent.

Lemma. The coefficients of the expansion of a vector in a basis are uniquely determined.

The vector composed of the coefficients of the expansion of a vector x in a basis is called the coordinate vector of x in that basis.

The notation for the coordinate vector emphasizes that the coordinates of a vector depend on the basis.
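Finding a coordinate vector means solving a linear system. A minimal sketch for R² with the basis e1 = (1, 1), e2 = (1, −1): the hypothetical helper `coords` solves c1·e1 + c2·e2 = x by Cramer's rule, with exact rational arithmetic.

```python
from fractions import Fraction as F

def coords(e1, e2, x):
    """Coordinates (c1, c2) of x in the basis {e1, e2} of R^2 (Cramer's rule)."""
    det = e1[0] * e2[1] - e2[0] * e1[1]   # nonzero since {e1, e2} is a basis
    c1 = F(x[0] * e2[1] - e2[0] * x[1], det)
    c2 = F(e1[0] * x[1] - x[0] * e1[1], det)
    return c1, c2

e1, e2 = (1, 1), (1, -1)
c1, c2 = coords(e1, e2, (3, 1))
print(c1, c2)  # 2 1
# The expansion reproduces x, and by the Lemma it is the only one.
assert (c1 * e1[0] + c2 * e2[0], c1 * e1[1] + c2 * e2[1]) == (3, 1)
```

In a different basis the same vector (3, 1) would have different coordinates, which is exactly what the notation is meant to emphasize.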

Linear spaces

Definitions

Let a set V of elements of arbitrary nature be given. Let two operations be defined for the elements of this set, addition and multiplication by any real number, and let the set be closed with regard to these operations: x + y ∈ V, λx ∈ V. Let these operations obey the axioms:

1. x + y = y + x for any x, y;

2. x + (y + z) = (x + y) + z for any x, y, z;

3. there is a zero vector with the property x + 0 = x for any x;

4. for each x there is an opposite vector −x with the property x + (−x) = 0;

5. 1·x = x for any x;

6. λ(x + y) = λx + λy for any number λ and any x, y;

7. (λ + μ)x = λx + μx for any numbers λ, μ and any x;

8. λ(μx) = (λμ)x for any numbers λ, μ and any x.

Then such a set is called a linear (vector) space, its elements are called vectors, and, to emphasize their difference from numbers, the latter are called scalars. A space consisting of only one zero vector is called trivial.

If in axioms 6 - 8 we allow multiplication by complex scalars, then such a linear space is called complex. To simplify the reasoning, in what follows we will consider only real spaces.

A linear space is a group with respect to the operation of addition, and moreover an Abelian group.

The uniqueness of the zero vector and the uniqueness of the vector opposite to a given vector x are easily proved; the opposite vector is usually denoted −x.

A subset of a linear space that is itself a linear space (that is, closed under addition of vectors and under multiplication by an arbitrary scalar) is called a linear subspace. The trivial subspaces of a linear space are the space itself and the space consisting of the single zero vector.

Example. The space of ordered triples of real numbers (x1, x2, x3), with the operations defined by the equalities

(x1, x2, x3) + (y1, y2, y3) = (x1 + y1, x2 + y2, x3 + y3), λ(x1, x2, x3) = (λx1, λx2, λx3).

The geometric interpretation is obvious: a vector in space, "tied" to the origin, can be specified by the coordinates of its end. The figure also shows a typical subspace of this space: a plane passing through the origin. More precisely, its elements are vectors that begin at the origin and end at points of the plane. The closedness of such a set with respect to addition of vectors and to their scaling is obvious.

Based on this geometric interpretation, a vector of an arbitrary linear space is often spoken of as a point in space. Sometimes this point is called the "end of the vector". Apart from the convenience of associative perception, these words carry no formal meaning: the concept of the "end of a vector" is absent from the axiomatics of linear space.

Example. Based on the same example, we can give a different interpretation of vector space (embedded, by the way, in the very origin of the word "vector"): it defines a set of "shifts" of the points of space. These shifts, or parallel translations of any spatial figure, are chosen parallel to the plane.

Generally speaking, with such interpretations of the concept of a vector everything is not so simple. Attempts to appeal to its physical meaning, as an object possessing magnitude and direction, draw a fair rebuke from strict mathematicians. The definition of a vector as an element of a vector space is very reminiscent of the episode with the sepulki from the famous science-fiction story by Stanisław Lem (see ☞HERE). Let us not get hung up on formalism, but explore this fuzzy object in its particular manifestations.

Example. A natural generalization is the space of row vectors or column vectors. One way to specify a subspace in it is to specify a set of constraints.

Example. The set of solutions of a system of linear homogeneous equations forms a linear subspace of the space. Indeed, if x is a solution of the system, then λx is also a solution for any λ; and if y is another solution of the system, then x + y is a solution as well.

Why does the set of solutions of a system of inhomogeneous equations not form a linear subspace?
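The closedness argument can be made concrete on the single homogeneous equation x1 + x2 − 2x3 = 0 in R³; the equation and the sample solutions are chosen arbitrarily for the illustration.

```python
# Solutions of a homogeneous system A x = 0 are closed under + and scalar ·.
A = [(1, 1, -2)]   # one equation: x1 + x2 - 2*x3 = 0

def is_solution(x):
    return all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)

u, v = (1, 1, 1), (2, 0, 1)
assert is_solution(u) and is_solution(v)
s = tuple(a + b for a, b in zip(u, v))   # sum of two solutions
t = tuple(5 * a for a in u)              # scalar multiple of a solution
assert is_solution(s) and is_solution(t)
# For the inhomogeneous equation x1 + x2 - 2*x3 = 1, the sum of two
# solutions gives right-hand side 2, not 1: the set is not a subspace.
print("the solution set of the homogeneous system is a subspace")
```

The final comment answers the question above: an inhomogeneous right-hand side is not preserved by addition (and the zero vector is not a solution at all).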

Example. Generalizing further, we can consider the space of "infinite" rows, or sequences, the usual object of mathematical analysis in the study of sequences and series. One can also consider rows (sequences) "infinite in both directions"; they are used in SIGNAL THEORY.

Example. The set of matrices of a fixed size with real elements, with the operations of matrix addition and multiplication by real numbers, forms a linear space.

In the space of square matrices of a given order, two subspaces can be distinguished: the subspace of symmetric matrices and the subspace of skew-symmetric matrices. In addition, subspaces are formed by each of the sets of upper triangular, lower triangular, and diagonal matrices.
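The two subspaces just mentioned in fact decompose the whole space: every square matrix splits as A = S + K with S = (A + Aᵀ)/2 symmetric and K = (A − Aᵀ)/2 skew-symmetric. A sketch on a 2×2 example, with exact arithmetic:

```python
from fractions import Fraction as F

A = [[F(1), F(2)], [F(3), F(4)]]
T = lambda M: [list(r) for r in zip(*M)]   # transpose

S = [[(a + b) / 2 for a, b in zip(r1, r2)] for r1, r2 in zip(A, T(A))]
K = [[(a - b) / 2 for a, b in zip(r1, r2)] for r1, r2 in zip(A, T(A))]

assert S == T(S)                                  # S is symmetric
assert K == [[-x for x in r] for r in T(K)]       # K is skew-symmetric
assert A == [[s + k for s, k in zip(r1, r2)] for r1, r2 in zip(S, K)]
print(S, K)
```

Since a matrix that is both symmetric and skew-symmetric is zero, the two subspaces intersect only in the zero matrix, so the space of square matrices is their direct sum.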

Example. The set of polynomials of one variable of degree exactly equal to n, with the usual operations of addition of polynomials and multiplication by a number, does not form a linear space. Why? Because it is not closed under addition: the sum of two polynomials of degree n need not be a polynomial of degree n. But the set of polynomials of degree not higher than n

does form a linear space; only to this set we must also add the identically zero polynomial. The obvious subspaces are the sets of polynomials of lower degree. In addition, the set of even and the set of odd polynomials of degree at most n are subspaces. The set of all possible polynomials (without restriction on degree) also forms a linear space.

Example. A generalization of the previous case is the space of polynomials of several variables of degree at most n. For example, the set of linear polynomials

forms a linear space. The set of homogeneous polynomials (forms) of a given degree (with the identically zero polynomial added to this set) is also a linear space.

In terms of the above definition, the set of rows with integer components, considered with respect to the operations of componentwise addition and multiplication by integer scalars, is not a linear space over the reals. However, all of axioms 1 - 8 are satisfied if we allow multiplication only by integer scalars. We will not focus on this object in this section, but it is quite useful in discrete mathematics, for example in ☞ CODING THEORY. Linear spaces over finite fields are considered ☞ HERE.

The space of quadratic forms in n variables is isomorphic to the space of symmetric matrices of order n. The isomorphism is established by a correspondence which we illustrate for the two-variable case.

The concept of isomorphism is introduced so that objects arising in different areas of algebra, but with "similar" properties of the operations, can be studied on a single sample, working out on it results that can then be cheaply replicated. Which linear space should we take "as the sample"? See the end of the next section.
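A sketch of the correspondence between quadratic forms and symmetric matrices for two variables, assuming the standard identification a·x² + b·xy + c·y² ↔ [[a, b/2], [b/2, c]]; the helper names `form_to_matrix` and `evaluate` are hypothetical.

```python
from fractions import Fraction as F

def form_to_matrix(a, b, c):
    """Symmetric matrix of the quadratic form a*x^2 + b*x*y + c*y^2."""
    return [[F(a), F(b, 2)], [F(b, 2), F(c)]]

def evaluate(M, x, y):
    """v^T M v for v = (x, y)."""
    v = (x, y)
    return sum(M[i][j] * v[i] * v[j] for i in range(2) for j in range(2))

M = form_to_matrix(1, 4, 3)                     # the form x^2 + 4xy + 3y^2
assert M == [list(r) for r in zip(*M)]          # the image is symmetric
assert evaluate(M, 2, 1) == 1*4 + 4*2*1 + 3*1   # v^T M v reproduces the form
print(evaluate(M, 2, 1))  # 15
```

The map is linear in the coefficients (a, b, c) and invertible, so it is an isomorphism of linear spaces; addition of forms corresponds to addition of their matrices.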