
Sample questions

1.
Compare and contrast the line search and trust region methods of globalizing a quasi-Newton method. Your discussion should touch on the following points:
(a)
The cost of taking a step (that is, of solving the line search subproblem and trust region subproblem), and the complexity of the algorithms.
(b)
Dealing with nonconvexity.
(c)
Convergence theorems.
2.
Derive the BFGS update used for approximating the Hessian of the objective function in an unconstrained minimization problem. Explain the rationale for the steps in the derivation.
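As a self-check on the derived formula, the following NumPy sketch applies one BFGS update (in the Hessian-approximation form) and verifies the secant equation; the data are illustrative, not from a real optimization run:

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of the Hessian approximation B_k, given the step
    s = x_{k+1} - x_k and gradient change y = grad f(x_{k+1}) - grad f(x_k)."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Illustrative data; note y @ s > 0, the curvature condition that
# keeps the updated matrix positive definite.
B = np.eye(3)
s = np.array([1.0, 2.0, -1.0])
y = np.array([0.5, 1.0, 0.3])
B_new = bfgs_update(B, s, y)
# B_new is symmetric and satisfies the secant equation B_new @ s = y.
```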
3.
(a)
State carefully and prove the first-order necessary condition for $f:\mathbb{R}^n\rightarrow\mathbb{R}$ to have a local minimum at $x=x^*$.
(b)
Give an example to show that the first-order condition is only necessary, not sufficient.
(c)
State carefully and prove the second-order necessary condition for $f:\mathbb{R}^n\rightarrow\mathbb{R}$ to have a local minimum at $x=x^*$.
(d)
Give an example to show that the second-order condition is only necessary, not sufficient.
4.
(a)
Let $\{x_k\}$ be a sequence in $\mathbb{R}^n$, and suppose

\begin{displaymath}x_k\rightarrow x^*\in\mathbb{R}^n\ \mbox{as}\ k\rightarrow\infty.
\end{displaymath}

Define ``$x_k\rightarrow x^*$ q-quadratically.''
(b)
Let $f:\mathbb{R}^n\rightarrow\mathbb{R}$ be given. Newton's method for minimizing $f$ is locally q-quadratically convergent. State carefully and prove this theorem.
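To see the theorem's conclusion numerically, here is a one-dimensional sketch (the objective $f(x)=e^x-2x$, with minimizer $x^*=\ln 2$, is chosen purely for illustration); the error sequence decays roughly quadratically, doubling the number of correct digits per step:

```python
import math

# Minimize f(x) = exp(x) - 2x, whose unique minimizer is x* = ln 2.
# Newton's method for minimization: x <- x - f'(x)/f''(x).
fp = lambda x: math.exp(x) - 2.0   # f'(x)
fpp = lambda x: math.exp(x)        # f''(x) > 0 everywhere

x = 0.0
errors = []
for _ in range(8):
    x = x - fp(x) / fpp(x)
    errors.append(abs(x - math.log(2.0)))
# errors decays q-quadratically: e_{k+1} <= C e_k^2 near x*.
```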
5.
Suppose $f:\mathbb{R}^n\rightarrow\mathbb{R}$ is convex.
(a)
State and prove a theorem indicating that the usual first-order necessary condition is, in this case, a sufficient condition.
(b)
Prove that every local minimum of f is, in fact, a global minimum.
6.
Consider the equality-constrained problem

\begin{displaymath}
\begin{array}{ll}
\min_{x\in\mathbb{R}^n}&f(x)\\
\mbox{s.t.}&g(x)=0,
\end{array}\end{displaymath} (2)

where $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and $g:\mathbb{R}^n\rightarrow\mathbb{R}^m$ are smooth functions.
(a)
Explain how to apply the quadratic penalty method to (2). How does one obtain an estimate of the Lagrange multiplier?
(b)
Explain how to apply the augmented Lagrangian method to (2). How does one obtain an estimate of the Lagrange multiplier?
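The multiplier estimate in the quadratic penalty method can be watched on a toy problem (the problem $\min x^2$ s.t. $x-1=0$ is an illustrative assumption; with the convention $L(x,\lambda)=f(x)+\lambda g(x)$ its exact multiplier is $\lambda^*=-2$):

```python
# Toy problem: minimize f(x) = x^2 subject to g(x) = x - 1 = 0.
# Stationarity of L = x^2 + lam*(x - 1) at x = 1 gives lam* = -2.
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # The penalty function x^2 + (mu/2)(x-1)^2 has the closed-form minimizer:
    x_mu = mu / (mu + 2.0)
    # Standard multiplier estimate: lam_k = mu * g(x_mu).
    lam_est = mu * (x_mu - 1.0)
# As mu -> infinity: x_mu -> 1 (feasibility) and lam_est -> -2 (multiplier).
```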
7.
Recall that the $\ell_1$ norm of $x\in\mathbb{R}^n$ is defined by

\begin{displaymath}\Vert x\Vert _1=\sum_{i=1}^n\vert x_i\vert.
\end{displaymath}

Derive the formula for the corresponding induced matrix norm of $A\in\mathbb{R}^{n\times n}$, and prove that it is correct.
8.
(a)
What is the condition number of a matrix $A\in\mathbb{R}^{n\times n}$?
(b)
Explain how this condition number is related to the problem of computing the solution $x$ to $Ax=b$, where $b$ is regarded as the data of the problem and $A$ is regarded as being known exactly.
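The relationship can be demonstrated numerically (the nearly singular $2\times 2$ matrix below is an illustrative choice): a perturbation of the data $b$ produces a relative error in $x$ within the factor $\kappa(A)$ of the relative perturbation.

```python
import numpy as np

# kappa(A) = ||A|| * ||A^{-1}||; in the 2-norm, sigma_max / sigma_min.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])       # nearly singular => ill-conditioned
kappa = np.linalg.cond(A, 2)

# Perturb b and compare the relative error in x with the standard bound
#   ||dx|| / ||x||  <=  kappa(A) * ||db|| / ||b||.
b = np.array([2.0, 2.0001])
db = np.array([1e-8, -1e-8])
x = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x
lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = kappa * np.linalg.norm(db) / np.linalg.norm(b)
```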
9.
Consider the least-squares problem $Ax=b$, where $A\in\mathbb{R}^{m\times n}$, $m>n$, and $b\in\mathbb{R}^m$ are given, and $x$ is to be determined.
(a)
Assume that A has full rank. Explain how to solve the least-squares problem using:
i.
the normal equations;
ii.
the QR factorization of A;
iii.
the SVD of A.
In each case, your explanation must include a justification that the algorithm leads to the solution of the least-squares problem (e.g. explain why the solution of the normal equations is the solution of the least-squares problem).
(b)
Discuss the advantages and disadvantages of each of the above methods. Which is the method of choice for the full rank least-squares problem?
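The three approaches can be compared side by side in NumPy (the random full-rank test problem is an illustrative assumption); for a well-conditioned $A$ all three agree, and the residual is orthogonal to the column space of $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))   # tall, full-rank (m > n)
b = rng.standard_normal(8)

# (i) Normal equations: solve A^T A x = A^T b.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# (ii) QR factorization: A = QR (reduced), then solve R x = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# (iii) SVD: A = U S V^T (thin), then x = V S^{-1} U^T b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

# All three give the same minimizer; A^T (A x - b) = 0 characterizes it.
```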
10.
(a)
Give a simple example to show that Gaussian elimination without partial pivoting is unstable in finite-precision arithmetic. (Hint: The example can be as small as $2\times 2$.)
(b)
Using the concept of backward error analysis, explain the conditions under which Gaussian elimination with partial pivoting can be unstable in finite-precision arithmetic. (Note: This question does not ask you to perform a backward error analysis. Rather, you can quote standard results in your explanation.)
(c)
Give an example to show that Gaussian elimination with partial pivoting can be unstable.
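For part (a), the classic $2\times 2$ example can be run directly (the values here are illustrative): a tiny pivot produces a huge multiplier, the subtraction $1-1/\epsilon$ swallows the original entries, and one solution component comes out completely wrong.

```python
import numpy as np

eps = 1e-17                        # smaller than machine epsilon
A = np.array([[eps, 1.0],
              [1.0, 1.0]])         # well-conditioned despite the tiny entry
b = np.array([1.0, 2.0])           # exact solution: both components ~ 1

# Gaussian elimination WITHOUT pivoting:
m = A[1, 0] / A[0, 0]              # multiplier 1/eps, enormous
u22 = A[1, 1] - m * A[0, 1]        # fl(1 - 1/eps) = -1/eps: the 1 is lost
y2 = b[1] - m * b[0]
x2 = y2 / u22                      # back substitution
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
# x2 is accurate, but x1 is badly wrong; a pivoted solve gets both right.
```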
11.
(a)
Suppose that $A\in\mathbb{R}^{n\times n}$ has eigenvalues

\begin{displaymath}\lambda_n>\lambda_{n-1}\ge\lambda_{n-2}\ge\cdots\ge\lambda_1.
\end{displaymath}

Explain how to perform the power method, and under what conditions it converges to an eigenvalue.
(b)
Explain the idea of simultaneous iteration.
(c)
Explain the QR algorithm and its relationship to simultaneous iteration.
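For part (a), a minimal power-method sketch (the symmetric test matrix and starting vector are illustrative; the method assumes the start vector has a component along the dominant eigenvector):

```python
import numpy as np

# Power method: repeatedly apply A and normalize; converges to the
# dominant eigenpair when the largest eigenvalue is strictly dominant.
A = np.diag([1.0, 2.0, 5.0]) + 0.1 * np.ones((3, 3))  # symmetric
v = np.ones(3)                     # start vector, not eigenvector-deficient
for _ in range(200):
    w = A @ v
    v = w / np.linalg.norm(w)      # normalize to prevent overflow
lam = v @ A @ v                    # Rayleigh quotient eigenvalue estimate
```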
12.
Suppose that $A\in\mathbb{R}^{n\times n}$ is invertible, $B$ is an estimate of $A^{-1}$, and $AB=I+E$. Show that the relative error in $B$ is bounded by $\Vert E\Vert$ (using an arbitrary matrix norm).
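The key identity behind the bound, $B - A^{-1} = A^{-1}E$, can be checked numerically (random data, illustrative only):

```python
import numpy as np

# If AB = I + E, then B - A^{-1} = A^{-1} E, and submultiplicativity gives
#   ||B - A^{-1}|| / ||A^{-1}||  <=  ||E||.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))            # almost surely invertible
Ainv = np.linalg.inv(A)
B = Ainv + 1e-6 * rng.standard_normal((4, 4))  # a perturbed inverse
E = A @ B - np.eye(4)
rel_err = np.linalg.norm(B - Ainv, 2) / np.linalg.norm(Ainv, 2)
bound = np.linalg.norm(E, 2)
```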
13.
Show that if $A$ is symmetric positive definite and banded, say $a_{ij}=0$ for $\vert i-j\vert>p$, then the Cholesky factor $B$ of $A$ satisfies $b_{ij}=0$ for $j>i$ or $j<i-p$.
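The band-preservation claim is easy to observe numerically (the tridiagonal test matrix, bandwidth $p=1$, is an illustrative choice):

```python
import numpy as np

# A symmetric positive definite band matrix with bandwidth p = 1
# (tridiagonal, diagonally dominant hence SPD).
n, p = 6, 1
A = 4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L = np.linalg.cholesky(A)          # lower triangular factor, A = L L^T
# L vanishes outside the band: L[i, j] = 0 whenever j > i or j < i - p.
```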
14.
Suppose that Gaussian elimination (without partial pivoting) is applied to a symmetric positive definite matrix A. Write

\begin{displaymath}A^{(k)} = E_{k-1}E_{k-2}\cdots E_1A,
\end{displaymath}

where each $E_j$ is an elementary (lower triangular) matrix (left-multiplication by $E_j$ accomplishes the $j$th step of Gaussian elimination). None of the $E_j$ is a permutation matrix; that is, no row interchanges are performed. The purpose of this exercise is to prove that this is possible (i.e. that Gaussian elimination can be applied without row interchanges) and to prove the following inequality:

\begin{displaymath}\max_{i,j,k=1,\ldots,n}\vert a_{ij}^{(k)}\vert \le \max_{i,j=1,\ldots,n}\vert a_{ij}\vert.
\end{displaymath}

Do this by proving the following three lemmas:
(a)
Let $B$ be a symmetric positive definite matrix. Then $b_{ii}>0$ for $i=1,\ldots,n$, and the largest entry of $B$ (in magnitude) occurs on the diagonal.
(b)
Let A be a symmetric positive definite matrix, and suppose one step of Gaussian elimination is applied to A to obtain

\begin{displaymath}A^{(2)} = \left[\begin{array}{cc}a_{11}&a^T\\ 0&\hat{A}^{(2)}
\end{array}\right].
\end{displaymath}

Then $\hat{A}^{(2)}$ is also symmetric positive definite.
(c)
Using the notation of the previous lemma,

\begin{displaymath}\hat{a}_{ii}^{(2)} \le a_{i+1,i+1},\ i=1,\ldots,n-1,
\end{displaymath}

where $\hat{a}_{ij}^{(2)}$ denotes the entries of $\hat{A}^{(2)}$.

Now complete the proof by induction. (Note that this result both proves that no partial pivoting is required for a symmetric positive definite matrix, and also that Gaussian elimination is perfectly stable when applied to such a matrix.)
15.
Let $A\in\mathbb{R}^{m\times n}$ have SVD $A=USV^T$, where $U\in\mathbb{R}^{m\times m}$ and $V\in\mathbb{R}^{n\times n}$ are orthogonal and $S$ is diagonal ($S_{ij}=0$ if $i\neq j$). Define $S^{(k)}$ by

\begin{displaymath}S_{ij}^{(k)}=\left\{\begin{array}{ll}
0,& i=j\ge k,\\
S_{ij},&\mbox{otherwise},\end{array}\right.
\end{displaymath}

and define $A^{(k)}$ by

\begin{displaymath}A^{(k)}=US^{(k)}V^T.
\end{displaymath}

What is $\left\Vert A-A^{(k)}\right\Vert _2$, where $\Vert\cdot\Vert _2$ is the matrix norm induced by the Euclidean ($\ell^2$) vector norm? Prove your answer.
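A numerical experiment (not a proof) is a good way to form the conjecture before proving it; the random test matrix and the choice $k=3$ below are illustrative:

```python
import numpy as np

# Build A^{(k)} as defined in the question and measure ||A - A^{(k)}||_2.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(A)        # full SVD: U is 5x5, Vt is 4x4
k = 3                              # 1-based index, as in the question
s_k = s.copy()
s_k[k - 1:] = 0.0                  # zero the diagonal entries with i = j >= k
S_k = np.zeros((5, 4))
np.fill_diagonal(S_k, s_k)
A_k = U @ S_k @ Vt
gap = np.linalg.norm(A - A_k, 2)   # induced 2-norm = largest singular value
# Compare gap against the entries of s to conjecture the answer.
```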


Math Dept Webmaster
2003-08-28