Question


In this assignment, we will explore four subspaces that are connected to a linear transformation. For the questions below, assume A is an m × n matrix with rank r, so that T(x) = Ax is a linear transformation from R^n to R^m. Consider the following definitions:

• The transpose of A, denoted A^T, is the n × m matrix we get from A by switching the roles of rows and columns – that is, the rows of A^T are the columns of A, and vice versa.

• The column space of A, denoted col(A), is the span of the columns of A. col(A) is a subspace of R^m and is the same as the image of T.

• The row space of A, denoted row(A), is the span of the rows of A. row(A) is a subspace of R^n.

• The null space of A, denoted null(A), is the subspace of R^n made up of vectors x such that Ax = 0 and is equal to the kernel of T.

• The left null space of A, denoted null(A^T), is the subspace of R^m made up of vectors y such that A^T y = 0.

col(A), row(A), null(A), and null(A^T) are called the four fundamental subspaces for A.

We showed in class that the pivot columns of A form a basis for col(A), and that the vectors in the general solution to Ax = 0 form a basis for null(A). Likewise, the vectors in the general solution to A^T y = 0 form a basis for null(A^T).
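
To make the definitions concrete, here is a minimal SymPy sketch (my own illustration, not part of the assignment) that computes bases for the four fundamental subspaces of one small example matrix; the matrix A below is made up for this example.

```python
# Compute bases for the four fundamental subspaces of a concrete 3x4 matrix.
# The matrix is an illustrative example only; it has m = 3, n = 4, rank r = 2.
from sympy import Matrix

A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 7],
            [1, 2, 1, 4]])

print("rank(A)         =", A.rank())          # r = 2
print("col(A) basis    =", A.columnspace())   # r vectors in R^3 (pivot columns)
print("row(A) basis    =", A.rowspace())      # r vectors in R^4
print("null(A) basis   =", A.nullspace())     # n - r = 2 vectors in R^4
print("null(A^T) basis =", A.T.nullspace())   # m - r = 1 vector in R^3
```

Running this prints two basis vectors each for col(A), row(A), and null(A), and one for null(A^T), matching r = 2, n − r = 2, and m − r = 1 for this example.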

Q3: Show that row(A) and null(A) are orthogonal complements.

Q4: Show that col(A) and null(A^T) are orthogonal complements.

Q5: Assuming A is an m × n matrix with rank r, what are the dimensions of the four subspaces for A?

Solutions

Expert Solution

Q3. null(A) is the set of solutions of Ax = 0. Writing this system out one row at a time, x satisfies Ax = 0 exactly when the dot product of every row of A with x is 0, i.e., when x is perpendicular to every row of A, and therefore to every linear combination of the rows. So every vector in null(A) is orthogonal to every vector in row(A). Conversely, if x is orthogonal to every row of A, then each entry of Ax is 0, so x lies in null(A). Thus null(A) consists of exactly the vectors in R^n that are orthogonal to row(A), which shows that row(A) and null(A) are orthogonal complements.
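
Writing a_1, ..., a_m for the rows of A, the same reasoning can be condensed into one display (a LaTeX sketch of the argument above):

```latex
\[
  A\vec{x}=\vec{0}
  \iff a_{i}\cdot\vec{x}=0 \mbox{ for every } i=1,\dots,m
  \iff \vec{x}\perp\mathrm{span}\{a_{1},\dots,a_{m}\}=\mathrm{row}(A),
\]
\[
  \mbox{so}\quad \mathrm{null}(A)=\mathrm{row}(A)^{\perp},
  \quad\mbox{and taking orthogonal complements of both sides,}\quad
  \mathrm{row}(A)=\mathrm{null}(A)^{\perp}.
\]
```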

Q4. null(A^T) is obtained by solving A^T y = 0. By the same reasoning as in Q3, y satisfies A^T y = 0 exactly when y is perpendicular to every row of A^T, and hence to every vector in row(A^T). But the rows of A^T are the columns of A, so row(A^T) = col(A). Therefore null(A^T) consists of exactly the vectors in R^m that are orthogonal to col(A), which shows that col(A) and null(A^T) are orthogonal complements.
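
The same one-line computation applied to A^T (again a sketch, using row(A^T) = col(A)):

```latex
\[
  A^{T}\vec{y}=\vec{0}
  \iff \vec{y}\perp\mathrm{row}(A^{T})=\mathrm{col}(A),
\]
\[
  \mbox{so}\quad \mathrm{null}(A^{T})=\mathrm{col}(A)^{\perp}
  \quad\mbox{and}\quad
  \mathrm{col}(A)=\mathrm{null}(A^{T})^{\perp}\ \mbox{in } \mathbf{R}^{m}.
\]
```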

Q5. Since A is an m × n matrix, it has m rows and n columns, and col(A) is a subspace of R^m. Because the rank of A is r, A has r pivot columns, and these form a basis for col(A); hence dim col(A) = r. The remaining n − r columns are free columns, each of which contributes one basis vector to the general solution of Ax = 0, so dim null(A) = n − r.

Next, A^T is an n × m matrix, with n rows and m columns. The row space of A is the column space of A^T, that is, row(A) = col(A^T). Since the rank of A is r, the rank of A^T is also r, so A^T has r pivot columns and dim col(A^T) = r; hence dim row(A) = r. Finally, since A^T has m columns, dim null(A^T) = m − r. In summary: dim col(A) = dim row(A) = r, dim null(A) = n − r, and dim null(A^T) = m − r.
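
As a rough numerical sanity check of these formulas (my own sketch, not part of the assignment), one can build a rank-deficient matrix and compare the computed dimensions against r, n − r, and m − r; the sizes m, n, r and the random seed below are arbitrary choices.

```python
# Numerical check of the dimension count for the four fundamental subspaces.
# The example sizes (m = 5, n = 7, r = 3) and the random factors are made up.
import numpy as np
from scipy.linalg import null_space, orth

m, n, r = 5, 7, 3
rng = np.random.default_rng(0)
# A product of an m x r and an r x n Gaussian factor has rank r (with prob. 1).
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

print("rank r        =", np.linalg.matrix_rank(A))            # 3
print("dim col(A)    =", orth(A).shape[1],         "(= r)")
print("dim row(A)    =", orth(A.T).shape[1],       "(= r)")
print("dim null(A)   =", null_space(A).shape[1],   "(= n - r)")
print("dim null(A^T) =", null_space(A.T).shape[1], "(= m - r)")
```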

