### Problem 1

Let L_1 be a lower triangular matrix which has ones on the diagonal and zeros everywhere else, except possibly nonzero entries below the diagonal in the 1st column.

Generally, let L_k be a lower triangular matrix which has ones on the diagonal and zeros everywhere else, except possibly nonzero entries below the diagonal in the k-th column.

Prove that the inverse of L_k is a matrix with the identical structure, except that the entries below the diagonal have their signs switched. (Do this proof with matrix notation, keeping track of indices.)
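Before writing the proof, it can help to see the claim numerically. The following Python/NumPy sketch (with n = 5 and k = 2 chosen arbitrarily, and arbitrary below-diagonal values) builds such an L_k and checks it against the sign-flipped matrix:

```python
import numpy as np

# Build L_k: identity plus arbitrary entries below the diagonal
# in column k (here n = 5 and k = 2; 0-based column index is k - 1).
n, k = 5, 2
L_k = np.eye(n)
L_k[k:, k - 1] = [0.7, -1.3, 2.0]   # rows k+1..n of column k, arbitrary values

# The claimed inverse: identical structure, below-diagonal signs flipped.
M = np.eye(n)
M[k:, k - 1] = -L_k[k:, k - 1]

print(np.allclose(L_k @ M, np.eye(n)))  # True
```

This is only a spot check for one n and k, of course; the problem asks for the general index-tracking argument.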

### Problem 2

Let L_k be as described above.

Let L be a lower triangular matrix which has ones on the diagonal and zeros everywhere else, except that the (k+1)st, (k+2)nd, ..., nth columns can have nonzero entries below the diagonal.

Prove that L_k * L is a lower triangular matrix which has ones on the diagonal and zeros everywhere else, except that the kth, (k+1)st, (k+2)nd, ..., nth columns have nonzero entries below the diagonal, and these are exactly the nonzero entries inherited directly from L_k and L.
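Again, a small numerical example (a Python/NumPy sketch with n = 5, k = 2, and arbitrary below-diagonal values) illustrates what "directly inherited" means before you prove it:

```python
import numpy as np

n, k = 5, 2
# L_k: ones on the diagonal, nonzeros only below the diagonal in column k.
L_k = np.eye(n)
L_k[k:, k - 1] = [0.5, -2.0, 1.5]

# L: ones on the diagonal, nonzeros only below the diagonal in
# columns k+1, ..., n (arbitrary values).
L = np.eye(n)
L[3:, 2] = [4.0, -1.0]
L[4, 3] = 3.0

P = L_k @ L
# The product keeps every nonzero entry of both factors, unchanged:
print(np.allclose(P[k:, k - 1], L_k[k:, k - 1]))  # True: column k from L_k
print(np.allclose(P[3:, 2], L[3:, 2]))            # True: later columns from L
```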

### Problem 3

Do an operation count for the LU decomposition algorithm I gave in class to find out how many operations are involved for an n x n matrix A. (Operations are +, -, *, /, and the comparisons <, >.)
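The algorithm from class isn't reproduced here, but for reference, a standard in-place LU factorization without pivoting looks like the following Python/NumPy sketch; counting the divisions, multiplications, and subtractions executed by its loops gives the operation count:

```python
import numpy as np

def lu_nopivot(A):
    """Standard in-place LU without pivoting (a sketch, not necessarily
    the exact algorithm from class): A = L @ U, with the multipliers of
    L stored below the diagonal of the working array."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for j in range(n - 1):                       # for each pivot column
        for i in range(j + 1, n):
            A[i, j] /= A[j, j]                   # 1 division: multiplier l_ij
            A[i, j + 1:] -= A[i, j] * A[j, j + 1:]  # n-j-1 mults, n-j-1 subs
    return np.tril(A, -1) + np.eye(n), np.triu(A)

A = np.array([[4.0, 3.0, 2.0],
              [8.0, 7.0, 3.0],
              [4.0, 5.0, 6.0]])
L, U = lu_nopivot(A)
print(np.allclose(L @ U, A))  # True
```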

### Problem 4

Write an algorithm to solve Ly = b, where L is lower triangular. Do an operation count to find out how many operations are involved for an n x n matrix L.
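For reference, forward substitution can be sketched as follows in Python/NumPy (the assignment itself asks for a Matlab version, low_solve.m); note that row i costs i multiplications, i subtractions, and 1 division, which is the starting point for the count:

```python
import numpy as np

def low_solve(L, b):
    """Forward substitution for L y = b, L lower triangular.
    A Python sketch of what the Matlab routine low_solve.m would do."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        # subtract the already-known terms, then divide by the diagonal
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

L = np.array([[2.0, 0.0],
              [3.0, 4.0]])
b = np.array([2.0, 11.0])
print(low_solve(L, b))  # [1. 2.]
```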

### Problem 5

What was the operation count to solve Ux = y for an upper triangular n x n matrix U?

### Problem 6

What was the operation count to solve Ax = b for an n x n matrix A using Gaussian elimination?

### Problem 7

If you're going to solve Ax = b only once, which way is faster?

### Problem 8

Given 10 vectors b_1, ..., b_10 and a 5 x 5 matrix A, how many operations would it take to find the 10 solutions using Gaussian elimination each time? Using the LU decomposition?

### Problem 9

Given an n x n matrix A, how many problems would you have to solve before the LU decomposition approach becomes faster than doing Gaussian elimination each time?

### Computer problem 1

In matlab, if you type "flops(0)", it resets the flop counter. If you type "flops" sometime after that, it tells you how many flops were used since you last reset it.

Write a *.m file which, given a number n, returns a vector "icount". The program should first generate 100 random n x n matrices and 100 random vectors b, and find the resulting 100 solutions; then set icount(1) = flops and reset the flop counter. Next, generate a single random n x n matrix and perform its LU decomposition. Generate 100 random vectors b, and find each solution x by first solving L y = b using low_solve.m and then solving U x = y using up_solve.m. Then set icount(2) = flops.

I've provided the routine up_solve.m; you have to write the routine low_solve.m. (Let me know if you get stuck copying and adapting the routine up_solve.m to do this.)

You now have a *.m file which, given n, returns icount. Using this routine, build up a 2-row array y as follows:

>> [icount] = expt(2);
>> y(1,1) = icount(1); y(2,1) = icount(2);
>> [icount] = expt(3);
>> y(1,2) = icount(1); y(2,2) = icount(2);
>> [icount] = expt(4);
>> y(1,3) = icount(1); y(2,3) = icount(2);
>> [icount] = expt(5);
>> y(1,4) = icount(1); y(2,4) = icount(2);

Repeat this until you've run the routine for 10 x 10 matrices. (Hint: you might want to do this with another *.m file, if you're quicker with an editor than with typing.)

Let x = 2:10. Plot y(1,:) against x. Then type >> hold on, and plot y(2,:) against x using a different line type.

Explain your results.

### Computer problem 2

I mentioned that LU decompositions can also be done when Gaussian elimination requires pivoting. For a 3 x 3 matrix A, Gaussian elimination with pivoting would produce two permutation matrices P_1 and P_2 and two Gauss matrices L_1 and L_2 such that
L_2 P_2 L_1 P_1 A = U
This can be rewritten as
L P A = L'_2 L'_1 P_2 P_1 A = U,
where L = L'_2 L'_1 and P = P_2 P_1, provided L'_2 = L_2 and L'_1 = P_2 L_1 inv(P_2). More generally, for a 4 x 4 matrix A, Gaussian elimination with pivoting produces
L_3 P_3 L_2 P_2 L_1 P_1 A = U
which can be rewritten as
L P A = L'_3 L'_2 L'_1 P_3 P_2 P_1 A = U,
where L = L'_3 L'_2 L'_1 and P = P_3 P_2 P_1, provided L'_3 = L_3, L'_2 = P_3 L_2 inv(P_3), and L'_1 = P_3 P_2 L_1 inv(P_2) inv(P_3).
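These identities can also be checked numerically. The Python/NumPy sketch below (random Gauss transforms and random row-swap permutations of the kind pivoting produces, for the 4 x 4 case) verifies the rewriting and, incidentally, previews question 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def rand_gauss(k):
    """Random Gauss transform: nonzeros below the diagonal in column k (0-based)."""
    L = np.eye(n)
    L[k + 1:, k] = rng.standard_normal(n - k - 1)
    return L

def rand_perm(k):
    """Permutation swapping row k with a random row >= k (a pivoting step)."""
    P = np.eye(n)
    j = rng.integers(k, n)
    P[[k, j]] = P[[j, k]]
    return P

L1, L2, L3 = rand_gauss(0), rand_gauss(1), rand_gauss(2)
P1, P2, P3 = rand_perm(0), rand_perm(1), rand_perm(2)

lhs = L3 @ P3 @ L2 @ P2 @ L1 @ P1
Lp3 = L3
Lp2 = P3 @ L2 @ np.linalg.inv(P3)
Lp1 = P3 @ P2 @ L1 @ np.linalg.inv(P2) @ np.linalg.inv(P3)
rhs = Lp3 @ Lp2 @ Lp1 @ P3 @ P2 @ P1
print(np.allclose(lhs, rhs))           # True: the rewriting is exact
print(np.allclose(Lp1, np.tril(Lp1)))  # True: L'_1 is still lower triangular
```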

1) Verify the above statements. (This is just a formal manipulation.)

2) How do we know that the new matrices L'_1 and L'_2 are actually lower triangular? If you can't prove it, at least go through a 3 x 3 example by hand and verify that they really are.

3) Matlab will give us these matrices: [L,U,P] = lu(A), where PA = LU. To solve A x = b, it is therefore enough to solve PA x = Pb = b'. So first multiply b by P to find the new right-hand side. We are now on familiar territory: PA x = b' ==> LU x = b'. First solve L y = b', then solve U x = y.

4) Now, take the matrix A from exercise 6 on page 242. Take b = [0.3097; 0.2838; 0.4742; 0.4716]. First use my LU solver:

>> [L,U] = LU(A);
>> [y] = low_solve(L,b);
>> [x] = up_solve(U,y);
>> max(A*x-b)

Then use the matlab solver with pivoting:

>> [L,U,P] = lu(A);
>> bp = P*b;
>> [y] = low_solve(L,bp);
>> [x] = up_solve(U,y);
>> max(A*x-b)

What did you find? What did you learn? Could you have used the matlab version of the lu decomposition routine? (What happens if you had typed "[L,U] = lu(A)"?)
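The solve pipeline described in step 3 can be sketched end to end in Python/NumPy. The lu_pivot routine below is a stand-in for Matlab's [L,U,P] = lu(A) (a sketch of partial pivoting, not Matlab's actual implementation), and the two np.linalg.solve calls stand in for low_solve.m and up_solve.m:

```python
import numpy as np

def lu_pivot(A):
    """LU with partial pivoting, returning P, L, U with P @ A = L @ U.
    A sketch of what Matlab's [L,U,P] = lu(A) provides."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    for j in range(n - 1):
        p = j + np.argmax(np.abs(A[j:, j]))   # row with the largest pivot
        A[[j, p]] = A[[p, j]]                 # swap rows of the working array
        P[[j, p]] = P[[p, j]]                 # record the swap in P
        for i in range(j + 1, n):
            A[i, j] /= A[j, j]                # multiplier l_ij
            A[i, j + 1:] -= A[i, j] * A[j, j + 1:]
    return P, np.tril(A, -1) + np.eye(n), np.triu(A)

# Solve A x = b via PA x = Pb = b', then L y = b', then U x = y.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])   # note the zero pivot: pivoting is required
b = np.array([3.0, 2.0, 4.0])
P, L, U = lu_pivot(A)
bp = P @ b
y = np.linalg.solve(L, bp)   # stands in for low_solve(L, bp)
x = np.linalg.solve(U, y)    # stands in for up_solve(U, y)
print(np.max(np.abs(A @ x - b)))  # ≈ 0, up to roundoff
```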