This clearly proves that our two representations of v cannot be distinct, as required. Let W be a subspace of a vector space V. Under what conditions are there only a finite number of distinct subsets S of W such that S generates W? Afterwards, some proofs concerning linear dependence and linear independence are given. Determine whether the following sets are linearly dependent or linearly independent. In Fn , let ej denote the vector whose jth coordinate is 1 and whose other coordinates are 0.
Recall from Example 3 in Section 1. Find a linearly independent subset that generates this subspace. Let u and v be distinct vectors in a vector space V. This completes the proof. How many vectors are there in span S? Justify your answer. The span of S consists of every linear combination of vectors in S. The field Z2 admits only two possible scalars, 0 and 1; therefore every linear combination of vectors in S is simply the sum of the vectors in some subset of S, and conversely every subset of S determines such a linear combination.
The power set of S (that is, the set of all subsets of S) contains 2^n elements by a theorem in set theory, since S contains n elements. Because S is linearly independent, distinct subsets of S have distinct sums, so there is a bijection between the set of all subsets of S and the set of all linear combinations of vectors in S. For this reason, the cardinality of span S must also be 2^n.
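This count is easy to check by brute force. The sketch below enumerates all linear combinations over Z2 of a small linearly independent set; the specific vectors are invented for illustration.

    import itertools

    # A linearly independent set over Z2 (entries taken mod 2); the specific
    # vectors are chosen only for illustration.
    S = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 1)]

    def combo(coeffs, vectors):
        """Linear combination over Z2: sum the chosen vectors componentwise, mod 2."""
        n = len(vectors[0])
        return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) % 2 for i in range(n))

    # Each choice of coefficients in {0, 1} corresponds to a subset of S.
    span = {combo(c, S) for c in itertools.product((0, 1), repeat=len(S))}
    print(len(span))   # 8 == 2**len(S), as the counting argument predicts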
Prove Theorem 1. Assume S1 is linearly dependent. We wish to show that S2 is linearly dependent. Since S1 is linearly dependent, there exist scalars a1, a2, ..., an, not all zero, and vectors u1, u2, ..., un in S1 such that a1u1 + a2u2 + ... + anun = 0. Since S1 is contained in S2, these vectors also lie in S2, so this is a nontrivial representation of 0 as a linear combination of vectors in S2. For this reason, S2 is also linearly dependent. Assume S2 is linearly independent. We wish to show that S1 is linearly independent.
To prove this, assume for the sake of contradiction that S1 is linearly dependent. Then by the argument above, S2 is linearly dependent. Since linear independence and linear dependence are mutually exclusive properties, this is a contradiction, and the corollary follows. Let V be a vector space over a field of characteristic not equal to two. Again, we see that at least one of these scalars is nonzero.
Assume that S is a linearly dependent subset of a vector space V over a field F. Then S is nonempty (for otherwise it cannot be linearly dependent) and contains vectors v1, ..., vn. If there exist distinct vectors v, u1, ..., un in S such that v is a linear combination of u1, ..., un, then S is linearly dependent. Assume S is linearly dependent. Otherwise we can simply isolate un in the equation and use the same reasoning. Exercise 14 then implies that S is linearly dependent. Prove that a set S of vectors is linearly independent if and only if each finite subset of S is linearly independent.
Assume S is linearly independent. Assume, for the sake of contradiction, that S is linearly dependent. Then there exist vectors v1, ..., vn in S and scalars a1, ..., an, not all zero, such that a1v1 + ... + anvn = 0; but then the finite subset {v1, ..., vn} of S is linearly dependent, a contradiction. Let M be a square upper triangular matrix as defined in Exercise 12 of Section 1.
Prove that the columns of M are linearly independent. Consider P(1). Therefore P(1) holds. Theorem 1. Corollaries to this theorem are the fact that every basis in a finite-dimensional vector space contains the same number of elements, that any finite generating set for a vector space V contains at least dim V vectors, and that a generating set containing exactly dim V vectors is a basis.
Additionally, any linearly independent subset of V with exactly dim V vectors is a basis, and every linearly independent subset of V can be extended to a basis for V. Determine which of the following sets are bases for R3.
By a corollary to the replacement theorem, any linearly independent subset of R3 containing exactly 3 vectors generates R3, and so is a basis for R3. Therefore this problem reduces simply to determining whether the set is linearly independent, which of course involves solving the usual homogeneous system of equations (that is, a system of equations in which all the right-hand sides are equal to 0).
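Numerically, this independence test can be phrased as a rank computation; a small sketch (the candidate vectors below are made up, not taken from a particular exercise part):

    import numpy as np

    # A hypothetical candidate set for a basis of R^3.
    vectors = [(1.0, 0.0, 1.0), (0.0, 1.0, 1.0), (1.0, 1.0, 0.0)]
    A = np.column_stack(vectors)   # columns are the candidate basis vectors

    # The set is linearly independent exactly when the homogeneous system
    # A x = 0 has only the trivial solution, i.e. when A has full rank.
    print(np.linalg.matrix_rank(A) == 3)   # True: three independent vectors form a basis of R^3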
This has been demonstrated previously in the document, and so will not be shown here. This is a set of 3 polynomials, and the dimension of P3(R) is 4. By a corollary to the replacement theorem, any set which generates a 4-dimensional vector space must contain at least 4 vectors. This set contains 4 vectors, and R3 is generated by the three vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1).
Next, guess that u5 is not a linear combination of the two vectors already in the set. As it turns out, it is not, so the set now contains three linearly independent vectors; by a corollary to the replacement theorem, the set generates R3, and so is a basis for R3 as well.
Find the unique representation of an arbitrary vector (a1, a2, a3, a4) in F4 as a linear combination of u1, u2, u3, and u4. In each part, use the Lagrange interpolation formula to construct the polynomial of smallest degree whose graph contains the following points. Let u and v be distinct vectors of a vector space V. Therefore both sets are linearly independent and generate V, so both are bases for V.
Find a basis for this subspace. Find a basis for W. What is the dimension of W? We will prove this set is a basis for W. First, note that the matrix 0 has 0 for all of its entries. Hence all the Skk i are zero. Complete the proof of Theorem 1. So Pn(F) is closed under addition and scalar multiplication. It therefore follows from Theorem 1.
Moreover, the sum of two continuous functions is continuous, and the product of a real number and a continuous function is continuous.
Clearly the zero matrix is a diagonal matrix because all of its entries are 0. Any intersection of subspaces of a vector space V is a subspace of V. Let C be a collection of subspaces of V, and let W denote the intersection of the subspaces in C. If x and y are vectors in W, then x and y are contained in each subspace in C; hence x + y lies in each subspace in C, and so x + y lies in W. A similar argument shows that W is closed under scalar multiplication. Having shown that the intersection of subspaces of a vector space V is a subspace of V, it is natural to consider whether or not the union of subspaces of V is a subspace of V.
It is easily seen that the union of subspaces must contain the zero vector and be closed under scalar multiplication, but in general the union of subspaces of V need not be closed under addition. In fact, it can be readily shown that the union of two subspaces of V is a subspace of V if and only if one of the subspaces contains the other.
See the exercises. There is, however, a natural way to combine two subspaces W1 and W2 to obtain a subspace that contains both W1 and W2. This idea is also explored in the exercises. Determine the transpose of each of the matrices that follow.
In addition, if the matrix is square, compute its trace. Prove that diagonal matrices are symmetric matrices. Justify your answers. Let W1 , W3 , and W4 be as in Exercise 8. Let W1 and W2 be subspaces of a vector space V.
Clearly, a skew-symmetric matrix is square. Compare this exercise with the related exercise. An important special case occurs when A is the origin. This is proved as Theorem 1. Let V be a vector space and S a nonempty subset of V. In this case we also say that v is a linear combination of u1, u2, ..., un and call a1, a2, ..., an the coefficients of the linear combination. Thus the zero vector is a linear combination of any nonempty subset of V. This question often reduces to the problem of solving a system of linear equations.
In Chapter 3, we discuss a general method for using matrices to solve any system of linear equations. To solve system (1), we replace it by another system with the same solutions, but which is easier to solve. The procedure to be used expresses some of the unknowns in terms of others by eliminating certain unknowns from all the equations except one. This need not happen in general. The procedure just illustrated uses three types of operations to simplify the original system: interchanging the order of any two equations in the system, multiplying any equation in the system by a nonzero constant, and adding a constant multiple of any equation in the system to another equation in the system. In Section 3.
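A minimal sketch of these three operations acting on an augmented matrix follows; the example system and the helper names are invented for illustration and are not taken from the text.

    # Augmented matrix [A | b] for the made-up system  x + 2y = 5,  3x + 4y = 11.
    rows = [[1.0, 2.0, 5.0],
            [3.0, 4.0, 11.0]]

    def swap(rows, i, j):             # operation 1: interchange two equations
        rows[i], rows[j] = rows[j], rows[i]

    def scale(rows, i, c):            # operation 2: multiply an equation by a nonzero constant
        rows[i] = [c * x for x in rows[i]]

    def add_multiple(rows, i, j, c):  # operation 3: add c times equation j to equation i
        rows[i] = [x + c * y for x, y in zip(rows[i], rows[j])]

    add_multiple(rows, 1, 0, -3.0)    # eliminate x from the second equation
    scale(rows, 1, -0.5)              # make the coefficient of y equal to 1
    add_multiple(rows, 0, 1, -2.0)    # eliminate y from the first equation
    print(rows)                       # [[1.0, 0.0, 1.0], [0.0, 1.0, 2.0]]  ->  x = 1, y = 2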
Note that we employed these operations to obtain a system of equations that had the following properties: 1. the first nonzero coefficient in each equation is one; 2. if an unknown is the first unknown with a nonzero coefficient in some equation, then that unknown occurs with a zero coefficient in each of the other equations; and 3. the first unknown with a nonzero coefficient in any equation has a larger subscript than the first unknown with a nonzero coefficient in any preceding equation. Once a system with properties 1, 2, and 3 has been obtained, it is easy to solve for some of the unknowns in terms of the others as in the preceding example. See Example 2. We return to the study of systems of linear equations in Chapter 3. We discuss there the theoretical basis for this method of solving systems of linear equations and further simplify the procedure by use of matrices. We now name such a set of linear combinations.
Let S be a nonempty subset of a vector space V. The span of S, denoted span S , is the set consisting of all linear combinations of the vectors in S. In this case, the span of the set is a subspace of R3. This fact is true in general. The span of any subset S of a vector space V is a subspace of V. Moreover, any subspace of V that contains S must also contain the span of S.
Then there exist vectors u1, u2, ..., un in S. Thus span S is a subspace of V. Now let W denote any subspace of V that contains S. In this case, we also say that the vectors of S generate or span V. So any linear combination of these matrices has equal diagonal entries. It is natural to seek a subset of W that generates W and is as small as possible. In the next section we explore the circumstances under which a vector can be removed from a generating set to obtain a smaller generating set.
Solve the following systems of linear equations by the method introduced in this section. In each part, determine whether the given vector is in the span of S. Show that the vectors (1, 1, 0), (1, 0, 1), and (0, 1, 1) generate F3.
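Both kinds of question can be settled with rank computations. The sketch below works over R for illustration (the test vector is made up); over other fields the same idea applies with exact arithmetic.

    import numpy as np

    # Columns are the vectors (1,1,0), (1,0,1), (0,1,1); they generate R^3
    # exactly when this matrix has rank 3.
    S = np.array([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]]).T
    print(np.linalg.matrix_rank(S))   # 3, so the three vectors generate R^3

    # A vector v lies in span(S) exactly when appending v as an extra column
    # does not increase the rank.
    v = np.array([2.0, 3.0, 1.0])
    print(np.linalg.matrix_rank(np.column_stack([S, v])) == np.linalg.matrix_rank(S))   # True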
In Fn, let ej denote the vector whose jth coordinate is 1 and whose other coordinates are 0. Interpret this result geometrically in R3. Let S1 and S2 be subsets of a vector space V. Let V be a vector space and S a subset of V with the property that whenever v1, v2, ..., vn are in S and a1v1 + a2v2 + ... + anvn = 0, then a1 = a2 = ... = an = 0. Prove that every vector in the span of S can be uniquely written as a linear combination of vectors of S.
Let W be a subspace of a vector space V. Indeed, the smaller that S is, the fewer computations that are required to represent vectors in W.
The search for this subset is related to the question of whether or not some vector in S is a linear combination of the other vectors in S. The reader should verify that no such solution exists.
This does not, however, answer our question of whether some vector in S is a linear combination of the other vectors in S.
In this case we also say that the vectors of S are linearly dependent. For any vectors u1, u2, ..., un, we have 0u1 + 0u2 + ... + 0un = 0. We call this the trivial representation of 0 as a linear combination of u1, u2, ..., un.
Thus, for a set to be linearly dependent, there must exist a nontrivial representation of 0 as a linear combination of vectors in the set. We show that S is linearly dependent and then express one of the vectors in S as a linear combination of the other vectors in S.
A subset S of a vector space that is not linearly dependent is called linearly independent. As before, we also say that the vectors of S are linearly independent. The following facts about linearly independent sets are true in any vector space.
The empty set is linearly independent, for linearly dependent sets must be nonempty. A set consisting of a single nonzero vector is linearly independent. A set is linearly independent if and only if the only representations of 0 as linear combinations of its vectors are trivial representations. This technique is illustrated in the examples that follow. Equating the corresponding coordinates of the vectors on the left and the right sides of this equation, we obtain the following system of linear equations.
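Looking for a nontrivial representation of 0 amounts to computing the null space of the matrix whose columns are the given vectors. A small sketch using sympy (the vectors are invented and deliberately dependent: the third column is the sum of the first two):

    from sympy import Matrix

    # Columns are the vectors being tested for linear dependence.
    A = Matrix([[1, 0, 1],
                [2, 1, 3],
                [0, 1, 1]])

    null = A.nullspace()   # basis for the solution space of A x = 0
    print(null)            # [Matrix([[-1], [-1], [1]])], i.e. (-1)*col1 + (-1)*col2 + 1*col3 = 0,
                           # a nontrivial representation of 0, so the columns are linearly dependent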
If S1 is linearly dependent, then S2 is linearly dependent. If S2 is linearly independent, then S1 is linearly independent. Earlier in this section, we remarked that the issue of whether S is the smallest generating set for its span is related to the question of whether some vector in S is a linear combination of the other vectors in S.
Thus the issue of whether S is the smallest generating set for its span is related to the question of whether S is linearly dependent. This equation implies that u3 (or, alternatively, u1 or u2) is a linear combination of the other vectors in S. More generally, suppose that S is any linearly dependent set containing two or more vectors. It follows that if no proper subset of S generates the span of S, then S must be linearly independent. Another way to view the preceding statement is given in Theorem 1.
Let S be a linearly independent subset of a vector space V, and let v be a vector in V that is not in S. Since v is a linear combination of u2, ..., un, which lie in S, we have that v is in span S. Conversely, suppose that v lies in span S. Then there exist vectors v1, v2, ..., vm in S and scalars b1, b2, ..., bm such that v = b1v1 + b2v2 + ... + bmvm.
Linearly independent generating sets are investigated in detail in Section 1. Recall from Example 3 in Section 1. Find a linearly independent set that generates this subspace.
Give an example of three linearly dependent vectors in R3 such that none of the three is a multiple of another. How many vectors are there in span S? Prove Theorem 1. A linearly independent generating set for W possesses a very useful property—every vector in W can be expressed in one and only one way as a linear combination of the vectors in the set. This property is proved below in Theorem 1. It is this property that makes linearly independent generating sets the building blocks of vector spaces.
We call this basis the standard basis for Pn(F). The proof of the converse is an exercise. Thus v determines a unique n-tuple of scalars a1, a2, ..., an. This fact suggests that V is like the vector space Fn, where n is the number of vectors in the basis for V. We see in Section 2. Otherwise S contains a nonzero vector u1. Continue, if possible, choosing vectors u2, ..., uk in S so that the set {u1, u2, ..., uk} is linearly independent. By Theorem 1. This method is illustrated in the next example.
It can be shown that S generates R3. We can select a basis for R3 that is a subset of S by the technique used in proving Theorem 1. Let V be a vector space that is generated by a set G containing exactly n vectors, and let L be a linearly independent subset of V containing exactly m vectors.
The proof is by mathematical induction on m. By the corollary to Theorem 1. Thus there exist scalars a1, a2, ..., an. Moreover, some bi, say b1, is nonzero, for otherwise we obtain the same contradiction.
This completes the induction. Then every basis for V contains the same number of vectors. The unique number of vectors in each basis for V is called the dimension of V and is denoted by dim V. The following results are consequences of Examples 1 through 4. This set is, in fact, a basis for P(F). In Section 1. Let V be a vector space with dimension n. Corollary 1 implies that H contains exactly n vectors. Since a subset of G contains n vectors, G must contain at least n vectors. Since L is also linearly independent, L is a basis for V.
Example 13 It follows from Example 4 of Section 1. This procedure also can be used to extend a linearly independent set to a basis, as part (c) of Corollary 2 asserts is possible. For this reason, we summarize here the main results of this section in order to put them into better perspective.
A basis for a vector space V is a linearly independent subset of V that generates V. Thus if the dimension of V is n, every basis for V contains exactly n vectors. Moreover, every linearly independent subset of V contains no more than n vectors and can be extended to a basis for V by including appropriately chosen vectors. Also, each generating set for V contains at least n vectors and can be reduced to a basis for V by excluding appropriately chosen vectors. The Venn diagram in Figure 1. depicts these relationships among bases, linearly independent sets, and generating sets.
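Both reductions, trimming a generating set down to a basis and padding a linearly independent set out to one, can be sketched with a greedy rank test; the vectors and the helper name below are invented for illustration, and floating-point rank is only a stand-in for exact arithmetic.

    import numpy as np

    def greedy_basis(vectors):
        """Keep each vector that is not a linear combination of those already kept."""
        kept = []
        for v in vectors:
            candidate = kept + [v]
            if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
                kept = candidate
        return kept

    # Reduce a (made-up) generating set of R^3 to a basis contained in it.
    G = [np.array(v, dtype=float) for v in [(1, 0, 0), (2, 0, 0), (1, 1, 0), (0, 1, 0), (0, 0, 1)]]
    print(greedy_basis(G))       # keeps (1,0,0), (1,1,0), (0,0,1)

    # Extend a linearly independent set L to a basis by appending the standard
    # basis vectors and running the same greedy selection.
    L = [np.array([1.0, 1.0, 1.0])]
    E = [np.array(e) for e in np.eye(3)]
    print(greedy_basis(L + E))   # (1,1,1) together with two standard basis vectors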
Continue choosing vectors x1, x2, ... in this way for as long as possible. Let S be a basis for W. Because S is a linearly independent subset of V, Corollary 2 of the replacement theorem guarantees that S can be extended to a basis for V. Since R2 has dimension 2, subspaces of R2 can be of dimensions 0, 1, or 2 only. Any subspace of R2 having dimension 1 consists of all scalar multiples of some nonzero vector in R2 (Exercise 11 of Section 1).
Similarly, the subspaces of R3 must have dimensions 0, 1, 2, or 3. Interpreting these possibilities geometrically, we see that a subspace of dimension zero must be the origin of Euclidean 3-space, a subspace of dimension 1 is a line through the origin, a subspace of dimension 2 is a plane through the origin, and a subspace of dimension 3 is Euclidean 3-space itself.
The Lagrange Interpolation Formula. Corollary 2 of the replacement theorem can be applied to obtain a useful formula. The polynomials f0(x), f1(x), ..., fn(x) defined in this way are called the Lagrange polynomials. Note that each fi(x) is a polynomial of degree n and hence is in Pn(F). This representation is called the Lagrange interpolation formula.
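A sketch of the formula in code, under the usual definitions (the sample points below are made up): each fi(x) is the product of (x - cj)/(ci - cj) over j different from i, and the interpolating polynomial is p(x) = sum of yi fi(x).

    from fractions import Fraction

    # Sample points (c_i, y_i); these values are made up for illustration.
    points = [(Fraction(0), Fraction(1)), (Fraction(1), Fraction(3)), (Fraction(2), Fraction(2))]

    def lagrange_eval(points, x):
        """Evaluate p(x) = sum_i y_i * f_i(x), where f_i(x) = prod_{j != i} (x - c_j)/(c_i - c_j)."""
        total = Fraction(0)
        for i, (ci, yi) in enumerate(points):
            fi = Fraction(1)
            for j, (cj, _) in enumerate(points):
                if j != i:
                    fi *= (x - cj) / (ci - cj)
            total += yi * fi
        return total

    # The polynomial of degree at most 2 through the three points reproduces them exactly.
    print([lagrange_eval(points, c) for c, _ in points])   # [Fraction(1, 1), Fraction(3, 1), Fraction(2, 1)]
    print(lagrange_eval(points, Fraction(3)))               # Fraction(-2, 1)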
A vector space cannot have more than one basis. Then S1 cannot contain more vectors than S2. Determine which of the following sets are bases for R3. Determine which of the following sets are bases for P2(R).
Let W denote the subspace of R5 consisting of all the vectors having coordinates that sum to zero. Find the unique representation of an arbitrary vector (a1, a2, a3, a4) in F4 as a linear combination of u1, u2, u3, and u4.
In each part, use the Lagrange interpolation formula to construct the polynomial of smallest degree whose graph contains the following points. Let u and v be distinct vectors of a vector space V. Let u, v, and w be distinct vectors of a vector space V. Find a basis for this subspace. What are the dimensions of W1 and W2?
Find a basis for W. What is the dimension of W? Find a basis for the vector space in Example 5 of Section 1. Complete the proof of Theorem 1. Let f(x) be a polynomial of degree n in Pn(R). If V and W are vector spaces over F of dimensions m and n, determine the dimension of Z. See Examples 11 and Our principal goal here is to prove that every vector space has a basis. There is no obvious way to construct a basis for this space, and yet it follows from the results of this section that such a basis does exist.
Instead, a more general result called the maximal principle is needed. Before stating this principle, we need to introduce some terminology. Let F be a family of sets. A member M of F is called maximal with respect to set inclusion if M is contained in no member of F other than M itself.
This family F is called the power set of S. The set S is easily seen to be a maximal element of F. Then S and T are both maximal elements of F. Then F has no maximal element. Maximal Principle.
In Theorem 1. Let S be a subset of a vector space V. A maximal linearly independent subset of S is a subset B of S satisfying both of the following conditions: (a) B is linearly independent, and (b) the only linearly independent subset of S that contains B is B itself. For a treatment of set theory using the Maximal Principle, see John L. In this case, however, any subset of S consisting of two polynomials is easily shown to be a maximal linearly independent subset of S. Thus maximal linearly independent subsets of a set need not be unique.
Our next result shows that the converse of this statement is also true. Let V be a vector space and S a subset that generates V. Since Theorem 1. Thus a subset of a vector space is a basis if and only if it is a maximal linearly independent subset of the vector space.
Therefore we can accomplish our goal of proving that every vector space has a basis by showing that every vector space contains a maximal linearly independent subset. This result follows immediately from the next theorem. Let S be a linearly independent subset of a vector space V. There exists a maximal linearly independent subset of V that contains S. Let F denote the family of all linearly independent subsets of V that contain S.
In order to show that F contains a maximal element, we must show that if C is a chain in F, then there exists a member U of F that contains each member of C. We claim that U , the union of the members of C, is the desired set. Thus we need only prove that U is linearly independent.
But since C is a chain, one of these sets, say Ak, contains all the others. It follows that U is linearly independent. The maximal principle implies that F has a maximal element. This element is easily seen to be a maximal linearly independent subset of V that contains S. Every vector space has a basis. It can be shown, analogously to Corollary 1 of the replacement theorem, that every basis for an infinite-dimensional vector space has the same cardinality. (Sets have the same cardinality if there is a one-to-one and onto mapping between them.)
See, for example, N. Jacobson, Lectures in Abstract Algebra (Van Nostrand Company, New York). Exercises extend other results from Section 1. See Exercise 21 in Section 1.
Prove that any basis for W is a subset of a basis for V. Prove the following generalization of Theorem 1. Hint: Apply the maximal principle to the family of all linearly independent subsets of S2 that contain S1, and proceed as in the proof of Theorem 1.
Prove the following generalization of the replacement theorem. These special functions are called linear transformations, and they abound in both pure and applied mathematics. Later we use these transformations to study rigid motions in Rn Section 6. In the remaining chapters, we see further examples of linear transformations occurring in both the physical and the social sciences. Many of these transformations are studied in more detail in later sections.
Recall that a function T with domain V and codomain W is denoted by T : V → W. See Appendix B. Let V and W be vector spaces over F. See Exercises 38 and We often simply call T linear.
See Exercise 7. T is linear if and only if, for x1, x2, ..., xn in V and a1, a2, ..., an in F, we have T(a1x1 + a2x2 + ... + anxn) = a1T(x1) + a2T(x2) + ... + anT(xn). So T is linear. The main reason for this is that most of the important geometrical transformations are linear. We leave the proofs of linearity to the reader. See Figure 2. T is called the projection on the x-axis.
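As a quick illustrative check (a sketch only; the sample vectors and scalar are arbitrary), the projection on the x-axis in R^2 can be written as a matrix and the two linearity conditions tested numerically:

    import numpy as np

    # Projection on the x-axis in R^2: T(a1, a2) = (a1, 0), represented by a matrix.
    P = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
    T = lambda v: P @ v

    # Check T(x + y) = T(x) + T(y) and T(c x) = c T(x) on arbitrary sample data.
    x, y, c = np.array([2.0, -1.0]), np.array([0.5, 4.0]), 3.0
    print(np.allclose(T(x + y), T(x) + T(y)))   # True
    print(np.allclose(T(c * x), c * T(x)))      # True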
Then T is a linear transformation by Exercise 3 of Section 1. It is clear that both of these transformations are linear. We often write I instead of IV.
We now turn our attention to two very important sets associated with linear transformations: the range and null space. The determination of these sets allows us to examine more closely the intrinsic properties of a linear transformation. The next result shows that this is true in general. Theorem 2. To clarify the notation, we use the symbols 0 V and 0 W to denote the zero vectors of V and W, respectively. With this accomplished, a basis for the range is easy to discover using the technique of Example 6 of Section 1.
It should be noted that Theorem 2. The next example illustrates the usefulness of Theorem 2. The null space and range are so important that we attach special names to their respective dimensions.
In other words, the more vectors that are carried into 0, the smaller the range. The same heuristic reasoning tells us that the larger the rank, the smaller the nullity. This balance between rank and nullity is made precise in the next theorem, appropriately called the dimension theorem.
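Before the proof, the balance can be checked on a concrete matrix; the sketch below uses sympy (an assumption, for exact arithmetic) and a made-up 3 x 4 matrix viewed as a linear map from R^4 to R^3.

    from sympy import Matrix

    # A made-up matrix; its third row is the sum of the first two, so the rank is 2.
    A = Matrix([[1, 2, 0, 1],
                [0, 1, 1, 1],
                [1, 3, 1, 2]])

    rank = A.rank()
    nullity = len(A.nullspace())           # dimension of the solution space of A x = 0
    print(rank, nullity, rank + nullity)   # 2 2 4: rank plus nullity equals dim(R^4)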
First we prove that S generates R(T). Using Theorem 2. Now we prove that S is linearly independent. Hence S is linearly independent. Interestingly, for a linear transformation, both of these concepts are intimately connected to the rank and nullity of the transformation.
This is demonstrated in the next two theorems. This means that T is one-to-one. The reader should observe that Theorem 2. Surprisingly, the conditions of one-to-one and onto are equivalent in an important special case. Then the following are equivalent. Now, with the use of Theorem 2. See Exercises 15, 16, and The linearity of T in Theorems 2.
The next two examples make use of the preceding theorems in determining whether a given linear transformation is one-to-one or onto. We conclude from Theorem 2. Hence Theorem 2. Example 13 illustrates the use of this result. Clearly T is linear and one-to-one. This technique is exploited more fully later. One of the most important properties of a linear transformation is that it is completely determined by its action on a basis. This result, which follows from the next theorem and corollary, is used frequently throughout the book.
Then compute the nullity and rank of T, and verify the dimension theorem. Finally, use the appropriate theorems in this section to determine whether T is one-to-one or onto. Recall Example 4, Section 1. Prove properties 1, 2, 3, and 4 on page Prove that the transformations in Examples 2 and 3 are linear.
For each of the following parts, state why T is not linear. What is T(2, 3)? Is T one-to-one? What is T(8, 11)? Prove that S is linearly independent if and only if T(S) is linearly independent. Recall that T is linear. Prove that T is onto, but not one-to-one. Let V and W be vector spaces with subspaces V1 and W1, respectively. Let V be the vector space of sequences described in Example 5 of Section 1.
T and U are called the left shift and right shift operators on V, respectively. Describe geometrically the possibilities for the null space of T. Hint: Use Exercise Describe T if W1 is the zero subspace. Warning: Do not assume that W is T-invariant or that T is a projection unless explicitly stated. If W is T-invariant, prove that TW is linear.
Suppose that W is T-invariant. Prove Theorem 2. Prove the following generalization of Theorem 2. By Exercise 34, there exists a linear transformation. Let V be a vector space and W be a subspace of V. Compare the method of solving part (b) with the method of deriving the same result as outlined in Exercise 35 of Section 1. In fact, we develop a one-to-one correspondence between matrices and linear transformations that allows us to utilize properties of one to study properties of the other.
Now that we have the concept of ordered basis, we can identify abstract vectors in an n-dimensional vector space with n-tuples. We study this transformation in Section 2. To make this more explicit, we need some preliminary discussion about the addition and scalar multiplication of linear transformations. We are fortunate, however, to have the result that both sums and scalar multiples of linear transformations are also linear. In Section 2.
So (a) is proved, and the proof of (b) is similar. L(V, W) is a vector space. Complete the proof of part (b) of Theorem 2. Prove part (b) of Theorem 2. Prove that T is linear.
Recall by Exercise 38 of Section 2. By Theorem 2. Suppose that W is a T-invariant subspace of V (see the exercises of Section 2). Let V and W be vector spaces, and let S be a subset of V. Prove the following statements. The question now arises as to how the matrix representation of a composite of linear transformations is related to the matrix representation of each of the associated linear transformations.
Let V be a vector space. A more general result holds for linear transformations that have domains unequal to their codomains. See Exercise 8. Therefore the transpose of a product is the product of the transposes in the opposite order. We illustrate Theorem 2. To illustrate Theorem 2. When the context is clear, we sometimes omit the subscript n from In.
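Returning to the transpose rule stated above, a quick numerical illustration (the matrices are made up, with compatible sizes):

    import numpy as np

    A = np.array([[1.0, 2.0, 0.0],
                  [3.0, 1.0, 4.0]])   # 2 x 3
    B = np.array([[1.0, 0.0],
                  [2.0, 1.0],
                  [0.0, 5.0]])        # 3 x 2

    # (AB)^t equals B^t A^t: the transposes, multiplied in the opposite order.
    print(np.allclose((A @ B).T, B.T @ A.T))   # True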
See Exercise 5. To see why, assume that the cancellation law is valid. The proof of (b) is left as an exercise. See Exercise 6. It follows (see Exercise 14) from Theorem 2.
It utilizes both the matrix representation of a linear transformation and matrix multiplication in order to evaluate the transformation at any given vector. Identifying column vectors as matrices and using Theorem 2.
This transformation is probably the most important tool for transferring properties about transformations to analogous properties about matrices and vice versa.
For example, we use it to prove that matrix multiplication is associative. We call LA a left-multiplication transformation. These properties are all quite natural and so are easy to remember. The fact that LA is linear follows immediately from Theorem 2. The proof of the converse is trivial. The uniqueness of C follows from (b). We now use left-multiplication transformations to establish the associativity of matrix multiplication.
Using part (e) of the preceding theorem, we have L(AB)C = LAB LC = (LA LB)LC = LA(LB LC) = LA LBC = LA(BC), and so, by part (b) of the same theorem, (AB)C = A(BC).
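A numerical spot check of this identity with made-up matrices (an illustration, not a proof):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = rng.random((2, 3)), rng.random((3, 4)), rng.random((4, 2))

    # The left-multiplication map L_A sends x to A x; composing L_A with L_B gives L_(AB).
    LA = lambda x: A @ x
    LB = lambda x: B @ x

    x = rng.random(4)
    print(np.allclose(LA(LB(x)), (A @ B) @ x))    # True: L_A after L_B equals L_(AB)
    print(np.allclose((A @ B) @ C, A @ (B @ C)))  # True: (AB)C = A(BC)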