CMDA 3606 is a course devoted to variations on the equation Ax=b. Not only will we see how such equations arise from static equilibrium problems in circuits and structures; we will also consider a variety of inverse problems, for example those associated with image deblurring. Course topics include modeling linear systems; the singular value decomposition; matrix approximation, PCA, and recommender systems; regularized least squares problems and imaging applications; solving large-scale linear systems via Krylov methods; and unconstrained nonlinear optimization.
CRN 12561
Lectures: Monday/Wednesday, 2:30-3:45pm, McBryde 230
Instructor: Mark Embree, embree@vt.edu, 575 McBryde
Office Hours: Monday 4-6pm; Wednesday 1-2pm; Thursday 1-2pm

CRN 12562
Lectures: Tuesday/Thursday, 2:00-3:15pm, McBryde 240
Instructor: Christopher Beattie, beattie@vt.edu, 552 McBryde
Office Hours: Tuesday 4-6pm; Wednesday 4-6pm

Piazza: The course Piazza page provides a forum for class discussions (both sections).
Overview for CMDA 3606:
Line search methods and step length conditions.
Stochastic gradient descent.
Reading: Chapter 3 of Nocedal and Wright, Numerical Optimization
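A minimal backtracking line search sketch in MATLAB, illustrating the sufficient-decrease (Armijo) step length condition; the quadratic objective and all constants are illustrative assumptions, not from the notes:

    % Backtracking line search with the Armijo sufficient-decrease condition
    phi  = @(x) 0.5*(x(1)^2 + 10*x(2)^2);   % example quadratic objective (assumed)
    grad = @(x) [x(1); 10*x(2)];            % its gradient
    x = [1; 1];
    p = -grad(x);                           % steepest descent direction
    alpha = 1; rho = 0.5; c = 1e-4;         % initial step, contraction factor, Armijo constant
    while phi(x + alpha*p) > phi(x) + c*alpha*(grad(x)'*p)
        alpha = rho*alpha;                  % shrink until sufficient decrease holds
    end
    x = x + alpha*p;                        % accept the step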
Necessary and sufficient conditions for optimality.
Gradient descent: x_{k+1} = x_k - \alpha_k \nabla\phi(x_k).
Reading: Chapter 2 of Nocedal and Wright, Numerical Optimization
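A minimal MATLAB sketch of the gradient descent recurrence above, using a fixed step length on an assumed quadratic objective (the step must be small enough for convergence; here any \alpha < 2/10 suffices):

    phi  = @(x) 0.5*(x(1)^2 + 10*x(2)^2);   % example objective (assumed)
    grad = @(x) [x(1); 10*x(2)];            % its gradient
    x = [1; 1]; alpha = 0.05;               % fixed step length
    for k = 1:200
        x = x - alpha*grad(x);              % x_{k+1} = x_k - alpha_k grad phi(x_k)
    end                                     % x approaches the minimizer [0; 0]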
Preconditioning: Solve (M_1 A M_2)(M_2^{-1} x) = M_1 b instead of Ax=b.
Optimization intro: Taylor's Theorem in n dimensions.
Basic line search methods and Newton's method.
Reading: Chapter 2 of Nocedal and Wright, Numerical Optimization
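A minimal Newton's method sketch in MATLAB; the smooth objective is an illustrative assumption. Each iteration solves the linear system H(x_k) p = -\nabla\phi(x_k) for the step p:

    phi  = @(x) x(1)^4 + x(2)^2;            % example smooth objective (assumed)
    grad = @(x) [4*x(1)^3; 2*x(2)];
    hess = @(x) [12*x(1)^2, 0; 0, 2];
    x = [1; 1];
    for k = 1:10
        x = x - hess(x) \ grad(x);          % Newton step: solve H p = -g, set x = x + p
    end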
More details about GMRES, and basic convergence theory.
Krylov methods for regularization.
Preconditioning: Solve (M_1 A M_2)(M_2^{-1} x) = M_1 b instead of Ax=b.
Reading: course notes, Chapter 8: Iterations for Large Linear Systems (draft)
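As a rough illustration of preconditioning in practice (my example, not from the notes): MATLAB's built-in gmres accepts preconditioner factors M1 and M2, and ilu supplies incomplete LU factors to serve that role. Note gmres applies M = M1*M2 as a left preconditioner, a variant of the split form above:

    n = 1000; e = ones(n, 1);
    A = spdiags([-e, 4*e, -e], -1:1, n, n);   % sparse tridiagonal test matrix (assumed)
    b = e;
    [L, U] = ilu(A);                          % incomplete LU factors as the preconditioner
    x = gmres(A, b, [], 1e-10, 200, L, U);    % preconditioned GMRES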
Introduction to the GMRES algorithm.
Reading: course notes, Chapter 8: Iterations for Large Linear Systems (draft)
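A minimal call to MATLAB's built-in gmres on an assumed sparse test matrix (the course notes develop the algorithm itself; this just shows the solver in action):

    n = 1000; e = ones(n, 1);
    A = spdiags([-e, 4*e, -e], -1:1, n, n);   % sparse, diagonally dominant (assumed)
    b = e;
    [x, flag, relres, iter] = gmres(A, b, [], 1e-10, 200);   % no restarts, tol 1e-10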
Introduction to the solution of large Ax=b problems.
Richardson's method: x_{k+1} = x_k + c r_k, where r_k = b - Ax_k is the residual.
Lewis Fry Richardson (Wikipedia)
Reading: course notes, Chapter 8: Iterations for Large Linear Systems (draft)
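A minimal MATLAB sketch of Richardson's iteration on an assumed 1-D Laplacian; the eigenvalues of A lie in (0,4), so any 0 < c < 1/2 converges, if slowly (which is part of the motivation for Krylov methods):

    n = 50; e = ones(n, 1);
    A = spdiags([-e, 2*e, -e], -1:1, n, n);   % 1-D Laplacian (assumed test matrix)
    b = e; c = 0.45;
    x = zeros(n, 1);
    for k = 1:5000
        r = b - A*x;                          % residual r_k = b - A x_k
        x = x + c*r;                          % x_{k+1} = x_k + c r_k
    end
    norm(b - A*x)                             % small, but convergence is slow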
Deblurring in two dimensions.
Reading: course notes, Chapter 7: Inverse Problems and Regularization.
PCA and clustering.
Review for Midterm 2 (Thursday night).
Reading: course notes, Chapter 7: Inverse Problems and Regularization.
Principal Component Analysis via the SVD
Reading: course notes, Chapter 6: The Singular Value Decomposition.
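A minimal PCA-via-SVD sketch on synthetic data (rows as observations, columns as variables; all data assumed for illustration):

    X  = randn(200, 5) * diag([5 3 1 0.5 0.1]);  % synthetic data with decaying variance
    Xc = X - mean(X, 1);                         % center each column (implicit expansion)
    [U, S, V] = svd(Xc, 'econ');                 % columns of V are principal directions
    scores = U*S;                                % principal component scores
    pcvar  = diag(S).^2 / (size(X,1) - 1);       % variance captured by each component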
Using the L-curve to choose a regularization parameter.
Statistics of least squares and regularization.
Reading: course notes, Chapter 7: Inverse Problems and Regularization.
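A rough L-curve sketch in MATLAB, on a synthetic ill-posed problem of my own construction (not from the notes); the Tikhonov solutions come from the SVD filter factors for min \|Ax-b\|^2 + \lambda\|x\|^2:

    [Q1, ~] = qr(randn(50, 20), 0);              % random orthonormal columns
    [Q2, ~] = qr(randn(20));
    A = Q1 * diag(logspace(0, -6, 20)) * Q2';    % singular values decay to 1e-6
    b = A*ones(20, 1) + 1e-4*randn(50, 1);       % exact data plus noise (assumed level)
    [U, S, V] = svd(A, 'econ'); s = diag(S); beta = U'*b;
    lambdas = logspace(-10, 0, 50);
    resnorm = zeros(50, 1); solnorm = zeros(50, 1);
    for i = 1:50
        xl = V * ((s./(s.^2 + lambdas(i))) .* beta);  % Tikhonov solution for this lambda
        resnorm(i) = norm(A*xl - b);
        solnorm(i) = norm(xl);
    end
    loglog(resnorm, solnorm, '-o')               % pick lambda near the corner of the L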
Regularization via truncated SVD and Tikhonov regularization.
You can solve the Tikhonov regularization problem as a standard least squares problem by augmenting the matrix A.
Reading: course notes, Chapter 7: Inverse Problems and Regularization.
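A minimal sketch of the augmentation trick described above, assuming the formulation min \|Ax-b\|^2 + \lambda\|x\|^2 (synthetic A, b, and \lambda for illustration):

    A = randn(50, 20); b = randn(50, 1);     % assumed data
    lambda = 1e-4;
    n = size(A, 2);
    Aaug = [A; sqrt(lambda)*eye(n)];         % stack sqrt(lambda)*I beneath A
    baug = [b; zeros(n, 1)];                 % pad b with zeros
    x = Aaug \ baug;                         % standard LS solve gives the Tikhonov solution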
When A has small singular values, the pseudoinverse solution x = A^+b can have large \|x\|.
When {\rm rank}(A) < \min\{m,n\}, small changes to A can change the pseudoinverse significantly.
Reading: course notes, Chapter 7: Inverse Problems and Regularization.
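A tiny MATLAB demonstration of the first point: a small singular value contributes a factor 1/\sigma_j to the pseudoinverse, inflating the solution norm (example matrix assumed):

    A = [1 0; 0 1e-8];            % sigma_2 = 1e-8
    b = [1; 1];
    x = pinv(A)*b;                % x = [1; 1e8]
    norm(x)                       % about 1e8: dominated by 1/sigma_2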
Solving all varieties of Ax=b problems using the SVD.
The matrix A = \sum_{j=1}^r \sigma_j u_j v_j^T has pseudoinverse A^+ = \sum_{j=1}^r {1\over \sigma_j} v_j u_j^T.
Reading: course notes, Chapter 7: Inverse Problems and Regularization.
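A minimal MATLAB check of the dyadic pseudoinverse formula on an assumed rank-deficient matrix:

    A = randn(8, 3) * randn(3, 5);                     % a rank-3, 8x5 matrix (assumed)
    [U, S, V] = svd(A); s = diag(S);
    r = sum(s > 1e-10*s(1));                           % numerical rank
    Apinv = V(:, 1:r) * diag(1./s(1:r)) * U(:, 1:r)';  % A^+ from the dyadic form
    norm(Apinv - pinv(A))                              % agrees with MATLAB's pinv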
Low-rank matrix approximation
Given the dyadic form of the SVD A = \sum_{j=1}^r \sigma_j u_j v_j^T, the best rank-k approximation to A is A_k = \sum_{j=1}^k \sigma_j u_j v_j^T.
Illustration: image compression with low-rank matrices.
Reading: course notes, Chapter 6: The Singular Value Decomposition.
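A minimal MATLAB sketch of rank-k approximation on a synthetic "image" (my stand-in, not the course images); by the Eckart-Young theorem, the 2-norm error of the best rank-k approximation equals \sigma_{k+1}:

    [X, Y] = meshgrid(linspace(-2, 2, 256));
    A = 1./(1 + (X - Y).^2) + 0.01*randn(256);   % smooth picture plus noise (assumed)
    [U, S, V] = svd(A);
    k = 20;
    Ak = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';   % best rank-k approximation
    norm(A - Ak) - S(k+1, k+1)                   % essentially zero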
The SVD three ways:
The compact SVD, the full SVD, and the dyadic form of the SVD.
Reading: course notes, Chapter 6: The Singular Value Decomposition.
Construction of the Singular Value Decomposition (SVD).
Reading: course notes, Chapter 6: The Singular Value Decomposition.
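The three forms in MATLAB terms (for a full-rank example the compact and economy-size SVD coincide; the test matrix is assumed):

    A = randn(6, 3);
    [U,  S,  V ] = svd(A);            % full SVD: U is 6x6, S is 6x3
    [Uc, Sc, Vc] = svd(A, 'econ');    % compact SVD: Uc is 6x3, Sc is 3x3
    Ad = zeros(6, 3);                 % rebuild A from the dyadic form
    for j = 1:3
        Ad = Ad + Sc(j,j) * Uc(:,j) * Vc(:,j)';   % sigma_j u_j v_j'
    end
    norm(A - Ad)                      % essentially zero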
Eigenvalues and eigenvectors of symmetric matrices.
The eigenvalues of a symmetric matrix are real.
The eigenvectors of a symmetric matrix associated with distinct eigenvalues are orthogonal.
Exam 1 returned and discussed.
Reading: course notes, Chapter 6: The Singular Value Decomposition.
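A quick MATLAB check of both facts on a random symmetric matrix (assumed example):

    A = randn(5); A = (A + A')/2;     % symmetrize
    [V, D] = eig(A);
    max(abs(imag(diag(D))))           % eigenvalues are real: exactly zero
    norm(V'*V - eye(5))               % eigenvectors are orthonormal: near machine epsilon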
Review first third of the course in advance of Exam 1.
Exam 1 will be held on Thursday night (28 Feb), 7-10pm.
(MW Section: McBryde 113; TR Section: McBryde 129).
Take the exam during the most convenient 100 contiguous minutes within the 7-10pm window.
Solve Least Squares problems using QR factorization
The Normal Equations A^TAx = A^Tb always have a solution.
If A has full column rank, the pseudoinverse A^+ = (A^TA)^{-1}A^T gives the LS solution x = A^+b,
and the matrix AA^+ is a projector onto {\cal R}(A).
Given A=QR, one can solve the least squares problem via the (small!) system Rx = Q^Tb.
Reading: course notes, Chapter 5: Orthogonality.
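A minimal MATLAB sketch of the QR route to least squares on assumed data; compare with backslash:

    A = randn(100, 5); b = randn(100, 1);   % overdetermined system (assumed)
    [Q, R] = qr(A, 0);                      % thin QR: Q is 100x5, R is 5x5
    x = R \ (Q'*b);                         % small triangular solve Rx = Q'b
    norm(x - A\b)                           % agrees with MATLAB's LS solution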
Gram-Schmidt orthogonalization and the QR decomposition
The Gram-Schmidt process can be summarized as A = QR, where Q^TQ=I and R is upper triangular.
Reading: course notes, Chapter 5: Orthogonality.
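A minimal (modified) Gram-Schmidt sketch in MATLAB on an assumed test matrix; the modified variant subtracts projections one at a time for better numerical behavior:

    A = randn(8, 4);
    [m, n] = size(A);
    Q = zeros(m, n); R = zeros(n, n);
    for k = 1:n
        v = A(:, k);
        for j = 1:k-1
            R(j, k) = Q(:, j)'*v;          % component of v along q_j
            v = v - R(j, k)*Q(:, j);       % remove it
        end
        R(k, k) = norm(v);
        Q(:, k) = v / R(k, k);             % normalize what remains
    end
    norm(A - Q*R), norm(Q'*Q - eye(n))     % both essentially zero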
Orthogonality and projectors
A square matrix P is a projector provided P^2=P.
A projector that is also symmetric (P^T=P) is called an orthogonal projector.
If v is a nonzero vector, then P = vv^T/v^Tv is an orthogonal projector onto {\rm span}\{v\}.
Reading: course notes, Chapter 5: Orthogonality.
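A quick MATLAB check of these projector properties (random v assumed):

    v = randn(5, 1);
    P = (v*v') / (v'*v);              % orthogonal projector onto span{v}
    norm(P*P - P)                     % idempotent: P^2 = P
    norm(P - P')                      % symmetric
    x = randn(5, 1);
    v' * (x - P*x)                    % residual x - Px is orthogonal to v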
Introduction to least squares problems and their solution
The solution to the least squares problem satisfies the Normal Equations A^TAx=A^Tb.
Note that {\cal N}(A^TA) = {\cal N}(A): hence the Normal Equations have a unique solution if {\cal N}(A)=\{0\}.
Reading: course notes, Chapter 4: Fundamentals of Subspaces.
Reading: course notes, Chapter 5: Orthogonality.
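A minimal MATLAB illustration of the Normal Equations on assumed full-rank data; note the defining property that the residual is orthogonal to {\cal R}(A):

    A = randn(30, 4); b = randn(30, 1);   % assumed data, with N(A) = {0}
    x = (A'*A) \ (A'*b);                  % solve A'Ax = A'b
    norm(A' * (b - A*x))                  % residual orthogonal to R(A): about zero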
Fundamental Theorem of Linear Algebra
{\cal R}(A)\oplus {\cal N}(A^T) = \mathbb{R}^m and
{\cal R}(A^T)\oplus {\cal N}(A) = \mathbb{R}^n.
Reading: course notes, Chapter 4: Fundamentals of Subspaces.
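A numerical sanity check in MATLAB (assumed low-rank example): orth and null return orthonormal bases for {\cal R}(A) and {\cal N}(A^T), whose dimensions must add to m:

    A = randn(5, 3) * randn(3, 7);    % rank 3, with m = 5, n = 7 (assumed)
    U1 = orth(A);                     % basis for R(A)
    U2 = null(A');                    % basis for N(A')
    norm(U1'*U2)                      % the subspaces are orthogonal: about zero
    size(U1, 2) + size(U2, 2)         % 3 + 2 = 5 = m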
Fundamentals of subspaces
Definition of subspace;
the null space {\cal N}(A) and range {\cal R}(A) are subspaces.
MATLAB codes:
plotline2.m,
plotplane2.m,
plotline3.m,
plotplane3.m.
Reading: course notes, Chapter 4: Fundamentals of Subspaces.
Modeling two-dimensional structures with oblique supports.
Reading: course notes, Chapter 3: Simple Structures at Equilibrium.
Modeling two-dimensional structures.
Using linear approximations of the elongation, we again obtain A^TKAx = f.
Reading: course notes, Chapter 3: Simple Structures at Equilibrium.
Introduction to modeling simple (static) structures.
Different physics, same equation as the circuit model: A^TKAx = f
Reading: course notes, Chapter 3: Simple Structures at Equilibrium.
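A minimal spring-chain sketch in MATLAB (my illustrative example, with assumed stiffnesses and loads, not a problem from the notes): elongations are e = Ax, spring forces Ke, and force balance yields A^TKAx = f:

    A = [ 1  0  0;
         -1  1  0;
          0 -1  1];               % elongations for 3 springs, 3 masses, fixed at top
    K = diag([10 10 10]);         % spring stiffnesses (assumed)
    f = [1; 1; 1];                % external loads on the masses (assumed)
    x = (A'*K*A) \ f;             % equilibrium displacements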
Review of the syllabus and discussion of notes/book options.
Be sure to note the evening midterm times: 28 February and 11 April from 7-10pm. (Exams last 100 minutes each.)
Reading: course notes, Chapter 1: Introduction.
Reading: course notes, Chapter 2: Linear Systems from Resistor Networks (draft).
Introduction to circuit modeling: A^TKAx = A^TKv
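A minimal resistor-chain sketch in MATLAB (my illustrative example, with assumed values): A is the incidence matrix of the edges against the ungrounded nodes, K holds the edge conductances, and v the battery voltages on the edges:

    A = [ 1  0  0;
         -1  1  0;
          0 -1  1];               % 3 edges: ground -> node 1 -> node 2 -> node 3
    K = diag([2 1 1]);            % conductances 1/R on each edge (assumed)
    v = [5; 0; 0];                % a 5-volt battery on the first edge (assumed)
    x = (A'*K*A) \ (A'*K*v);      % node potentials at equilibrium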
Posted 24 April 2019. Due 2 May 2019.
hw11.pdf: assignment
Posted 17 April 2019. Due 25 April 2019.
hw10.pdf: assignment
Posted 12 April 2019. Due 18 April 2019.
hw9.pdf: assignment
MATLAB codes:
blur2d.m,
hokiebird.mat,
mystery_plate.mat.
Posted 27 March 2019. Due 4 April 2019.
hw8.pdf: assignment
MATLAB codes:
coke_upc.m,
mystery_upc.p.
Posted 20 March 2019. Due 28 March 2019.
hw7.pdf: assignment
Posted 13 March 2019. Due 21 March 2019.
hw6.pdf: assignment
MATLAB codes:
hello.m,
cow.mat,
planck.mat.
Posted 1 March 2019. Due 7 March 2019.
hw5.pdf: assignment
MATLAB code:
hw5.mat.
Posted 13 February 2019. Due 21 February 2019.
hw4.pdf: assignment
MATLAB codes:
bridge.p,
bridge_A.mat.
Posted 7 February 2019. Due 14 February 2019.
hw3.pdf: assignment
MATLAB codes:
plotline2.m,
plotplane2.m,
plotline3.m,
plotplane3.m.
Posted 30 January 2019. Due 7 February 2019.
hw2.pdf: assignment
Posted 23 January 2019. Due 31 January 2019.
hw1.pdf: assignment
Download the electronic version of the syllabus and tentative schedule.
Final course grades will be allocated as follows:
40%: problem sets
40%: midterm exams (evenings of 28 February and 11 April, 7-10pm)
20%: final exam (13 May: 2:05-4:05pm (MW section); 4:25-6:25pm (TR section))
Virginia Tech's Honor Code applies to all work in this course. Students must uphold the highest ethical standards, abiding by our Honor Code: "As a Hokie, I will conduct myself with honor and integrity at all times. I will not lie, cheat, or steal, nor will I accept the actions of those who do."
From the Office for Undergraduate Academic Integrity:
"Students enrolled in this course are responsible for abiding by the Honor Code. A student who has doubts about how the Honor Code applies to any assignment is responsible for obtaining specific guidance from the course instructor before submitting the assignment for evaluation. Ignorance of the rules does not exclude any member of the University community from the requirements and expectations of the Honor Code. For additional information about the Honor Code, please visit:
www.honorsystem.vt.edu."
We will not closely follow any one textbook; the instructors will provide notes for much of the course. The following books, all freely available online to Virginia Tech students, provide helpful background information.
Any student with special needs or circumstances requiring accommodation in this course is encouraged to contact the instructor during the first week of class, as well as Virginia Tech's SSD Office. We will ensure that these needs are appropriately addressed.