
Demerdar
Registered User regular

Hey y'all, I was having a little difficulty with this problem for my Linear Algebra class; maybe someone can shed some light on it.

Given: W is a subspace of R^n with an orthogonal basis {u(1), ..., u(p)}.

Also, W' is its orthogonal complement (written perp(W) below).

Show that dim(W) + dim(perp(W)) = n.

Now, I'm thinking this has to do with the Rank Theorem, where rank(A) + dim(Nul(A)) = n.

If you were to put the vectors of W into a matrix A, then rank(A) = dim(W).

However, I'm having a hard time connecting Nul(A) with perp(W). Can anybody point me in the right direction? If clarification is needed, let me know; this post feels a little disjointed.

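(Not part of the original thread, but the claim is easy to sanity-check numerically. A quick sketch using numpy, with a made-up matrix: its rows play the role of the orthogonal basis vectors u(1), u(2) of a subspace W of R^4.)

```python
import numpy as np

# Made-up example: rows of A are an orthogonal basis {u(1), u(2)}
# of a subspace W of R^4, so p = 2 and n = 4.
A = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, -1.0, 0.0, 0.0],
])
n = A.shape[1]

dim_W = int(np.linalg.matrix_rank(A))

# perp(W) = Nul(A): its dimension is the number of columns minus the
# number of nonzero singular values of A.
s = np.linalg.svd(A, compute_uv=False)
dim_perp_W = n - int(np.sum(s > 1e-10))

assert dim_W + dim_perp_W == n   # 2 + 2 == 4
```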

## Posts

As I understand it, the Rank-Nullity theorem is the big piece you need.

Without spoiling the whole problem for you: I would review what perp(W) actually means, and then what you can say about the vectors in W with respect to perp(W) as a result.

Does that help?

Fuzzywhale:

Also, if I were to put W in A, then perp(Col(A)) = Nul(A^T).

Is it true that for orthogonal sets, dim(Nul(A^T)) = dim(Nul(A))?

I think I'm just missing a piece of the puzzle. Maybe you can tell me where I'm going wrong?

Demerdar:

Pretty sure this is correct. Here's why: you're right in saying that perp(W) is the set of all vectors orthogonal to W. So you could, if you like, calculate this via the following matrix equation:

Ax = 0, where A is the matrix whose rows are the basis vectors of W.

So if you were to find all such x, you'd find a basis for the vectors orthogonal to the vectors in A. The set of all their linear combinations makes a nice space called perp(W)!

As well, Ax = 0 is just solving for the kernel of A.

Is that more useful?

Anyone else, please chime in if I've messed up.
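(An aside, not from the thread: the "solve Ax = 0" step above can be checked concretely. A sketch in numpy, assuming the rows of a made-up A are the basis of W; the right singular vectors with zero singular value span Nul(A), i.e. perp(W).)

```python
import numpy as np

# Hypothetical example: rows of A are an orthogonal basis of W inside R^4.
A = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, -1.0, 0.0, 0.0],
])

# Solving Ax = 0: right singular vectors belonging to (near-)zero
# singular values span Nul(A), which is exactly perp(W).
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]            # rows spanning Nul(A)

# Each null-space vector really is orthogonal to every basis vector of W.
assert np.allclose(A @ null_basis.T, 0.0)
assert null_basis.shape[0] == A.shape[1] - rank   # dim(perp(W)) = n - p
```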

Fuzzywhale:

So if you solve Ax = 0 with W as your matrix A, you're saying that Nul(A) forms a basis for perp(W)? In a theorem I'm reading, it says that the orthogonal complement of Row(A) is Nul(A), and the orthogonal complement of Col(A) is Nul(A^T).

Am I not seeing something here?
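(Editorial note, not from the thread: the row/column distinction Fuzzywhale raises is real, and easy to see numerically. A sketch in numpy with a made-up non-square A: Nul(A) complements Row(A), Nul(A^T) complements Col(A), and the two nullities differ because A isn't square.)

```python
import numpy as np

def nullity(M, tol=1e-10):
    """dim(Nul(M)) = (number of columns of M) - rank(M), via singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    return M.shape[1] - int(np.sum(s > tol))

# Made-up 2x4 matrix whose rows span W.
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, -1.0, 0.0, 0.0]])

# Nul(A) complements Row(A) inside R^4 ...
assert np.linalg.matrix_rank(A) + nullity(A) == 4      # 2 + 2

# ... while Nul(A^T) complements Col(A) inside R^2.
assert np.linalg.matrix_rank(A.T) + nullity(A.T) == 2  # 2 + 0

# So dim(Nul(A)) and dim(Nul(A^T)) differ here, since A is not square.
assert nullity(A) != nullity(A.T)
```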

Demerdar:

Anyway: I'd do this by contradiction. Assume that dim(W) + dim(perp(W)) ≠ n.

This produces two cases: dim(W) + dim(perp(W)) > n, or dim(W) + dim(perp(W)) < n.

Based on the fact that the ambient space R^n has n basis vectors, you should get a contradiction in either case pretty quickly.

Do you have access to the Dimension Theorem? If you do, there's a pretty slick proof there as well.

Little Jim:

Yup. I ran this by another student in my office and he agreed, for what it's worth. So, as I said, solving Ax = 0 is finding all the vectors perpendicular to the vectors that make up the matrix A. Since A is made of basis vectors, we are finding a basis for the set of vectors perpendicular to those in A.

So that's a basis for the orthogonal complement, as we are doing our operations with a basis.

As well, Ax = 0 is solving for the kernel. So dim(Ker(A)) (the nullity) should equal dim(perp(W)).

So then, by the Rank-Nullity Theorem, dim(Im(A)) plus dim(Ker(A)), which is the same as dim(perp(W)), have to add up to n.

Remember, the matrix A is the matrix formed by the basis vectors of W, the subspace you started with.

Sound good?
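(Not from the thread: the whole argument above can be checked end to end. A sketch in numpy with made-up numbers: p = 2 orthogonal basis vectors of W as the rows of A, living in R^5, so n = 5.)

```python
import numpy as np

# Illustrative example: rows of A are an orthogonal basis of W in R^5.
A = np.array([
    [1.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, -1.0, 0.0],
])
p, n = A.shape

rank = int(np.linalg.matrix_rank(A))     # rank(A) = dim(W)
s = np.linalg.svd(A, compute_uv=False)
ker = n - int(np.sum(s > 1e-10))         # dim(Ker(A)) = dim(perp(W))

assert rank == p         # the basis rows are linearly independent
assert rank + ker == n   # rank-nullity: dim(W) + dim(perp(W)) = n
```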
