Math 563 Scientific Computing II
Class hours: TTh 8:30-9:45, Physics 119
Class notes: Scientific Computing (local users only)
The course grade depends on the homework assignments.
All homework must contain:
- a written description of the problem and numerical approach
- a copy of the code
- pictures or numbers for numerical results, and
- a discussion of the results (e.g., describe the accuracy,
convergence rate, ...)
To get online help with commands and utilities, there are two useful sources:
- yelp. Type "yelp &". When the window comes up, click on "Manual pages"
  to see a list of commands; "Applications" will contain most of the
  commands you need. To find a particular command, pull down on the
  scroll bar.
- kdbg. Type "kdbg &". When the window comes up, click on "Help".
To develop a webpage, see
OIT web info
Syllabus:
- Interpolation and Approximation
- Polynomial Interpolation
- well-posedness 1/10
- error estimation 1/10
- Newton interpolation 1/15
- Lagrange interpolation 1/15
- Hermite interpolation 1/15
- Runge phenomenon 1/15
- Chebyshev interpolation 1/15
- Bernstein polynomials 1/15
- Multidimensional Polynomial Interpolation 1/17
- multidimensional polynomials
- simplices
- quadrilaterals and octahedra
- error estimate
- Rational Polynomials 1/22
- Pade approximation
- continued fractions
- interpolation and approximation
- Quadric Surfaces 1/22
- Splines
- continuous 1/22
- continuously differentiable 1/22
- piecewise quadratics
- piecewise cubics or higher
- twice continuously differentiable 1/22
- piecewise cubics
- piecewise quartics
- piecewise quintics or higher
- error estimate 1/22
- tension splines 1/24
- Parametric curves 1/24
- Hermite curves
- Bezier curves
- rational splines
- Least Squares Approximation
- Norms and inner products 1/24
- normal equations 1/24
- orthogonal polynomials 1/24
- trigonometric polynomials 1/29
- orthogonality
- theory
- fast Fourier transform
- real FFT
- fast evaluation
- Wavelets
- discrete time signals
- example 1/31
- signals 1/31
- filters 1/31
- discrete Fourier transform 1/31
- continuous Fourier transform 1/31
- adjoints 1/31
- filter banks 1/31
- orthogonal filter banks 2/5
- maxflat orthogonal filters 2/5
- spline filters 2/5
- functions on a continuum
- scaling functions 2/7
- wavelets 2/7
- Riesz sequences 2/7
- orthonormal sequences 2/7
- multiresolution analysis 2/7
- orthogonal complements 2/7
- wavelet construction 2/7
- function decomposition 2/7
- error estimates 2/7
- Differentiation and Integration
- Numerical differentiation 2/12
- polynomials
- synthetic division
- trigonometric polynomials
- orthogonal polynomials
- one-sided differencing
- centered differencing
- Richardson extrapolation
- Numerical integration
- Monte Carlo 2/14
- Riemann sums 2/14
- midpoint rule 2/14
- trapezoidal rule 2/14
- Euler-Maclaurin formula 2/14
- Romberg integration 2/19
- Gaussian quadrature 2/19
- Lobatto quadrature 2/19
- Newton-Cotes quadrature 2/19
- difficult integrals 2/21
- adaptive quadrature 2/21
- multiple dimensions 2/26
- Initial value problems
- theory 2/28
- linear equations with constant coefficients 2/28
- matrix exponentials
- approximate matrix exponentials
- linear multistep methods
- consistency 3/5
- characteristic polynomials 3/5
- zero stability 3/5
- absolute stability 3/5
- Adams-Bashforth methods 3/7
- Adams-Moulton methods 3/7
- Nystrom methods 3/19
- backward differentiation formulae 3/19
- predictor-corrector methods 3/19
- choosing step sizes 3/19
- deferred correction
- classical deferred correction 3/21
- spectral deferred correction 3/21
- Runge-Kutta methods 3/26
- methods with error-estimation
- stiffly stable methods
- stiffness 3/28
- nonlinear stability 3/28
- Boundary value problems
- steady states 4/2
- ill-posed problems 4/2
- shooting methods 4/2
- finite difference methods 4/4
- finite element methods
- weak form 4/9
- Galerkin methods 4/9
- basic principles
- example
- natural boundary conditions 4/11
- existence and uniqueness 4/11
- energy minimization 4/11
- energy error estimates 4/11
- finite element method
- continuous splines 4/11
- continuously differentiable splines 4/11
- well-posedness 4/16
- Sobolev spaces
- self-adjoint problems
- regularity
- nonself-adjoint problems
- error estimates 4/18
- numerical quadrature 4/18
- condition numbers 4/23
- hierarchical elements 4/23
- mixed finite elements 4/25
Homework:
- due January 15
Problem 4, section 10.2
- due January 17
Write a program to interpolate the function
f(x,y) = cos(x) sin(y)
at 16 equally-spaced points in the square (-pi,pi) x (-pi,pi),
using multi-dimensional polynomials that are cubics in each coordinate.
Plot the error in the function value, and discuss how the error
compares with the theory in section 10.3.4.
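As a rough sketch (not a complete solution), the tensor-product interpolant can
be built from one-dimensional cubic Lagrange basis polynomials; here the 4 x 4
nodes are assumed to include the corners of the square, and the error is
sampled on a fine grid rather than plotted:

// Sketch: tensor-product (bicubic) Lagrange interpolation of
// f(x,y) = cos(x) sin(y) on a 4 x 4 grid of equally spaced nodes.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

double lagrange(const std::vector<double>& nodes, int i, double x) {
  // i-th cubic Lagrange basis polynomial evaluated at x
  double p = 1.0;
  for (int j = 0; j < (int)nodes.size(); ++j)
    if (j != i) p *= (x - nodes[j]) / (nodes[i] - nodes[j]);
  return p;
}

int main() {
  const double pi = 4.0 * std::atan(1.0);
  std::vector<double> nodes(4);
  for (int i = 0; i < 4; ++i)               // 4 equally spaced nodes per
    nodes[i] = -pi + 2.0 * pi * i / 3.0;    // coordinate: 16 points in all
  double fval[4][4];
  for (int i = 0; i < 4; ++i)
    for (int j = 0; j < 4; ++j)
      fval[i][j] = std::cos(nodes[i]) * std::sin(nodes[j]);
  double maxerr = 0.0;                      // sample the error on a fine grid
  for (int m = 0; m <= 100; ++m) {
    for (int n = 0; n <= 100; ++n) {
      double x = -pi + 2.0 * pi * m / 100.0;
      double y = -pi + 2.0 * pi * n / 100.0;
      double p = 0.0;                       // tensor-product interpolant
      for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
          p += fval[i][j] * lagrange(nodes, i, x) * lagrange(nodes, j, y);
      maxerr = std::max(maxerr, std::fabs(p - std::cos(x) * std::sin(y)));
    }
  }
  std::printf("max error on a 101 x 101 sample grid: %e\n", maxerr);
  return 0;
}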
- due January 22
Problem 6 of section 10.6.1 (just before Continuously Differentiable
Splines).
- due January 24
Write a program to compute a Bezier curve in two segments such that
the curve is continuously differentiable between the segments.
Use the program to draw the letter "S" as two letter "C"s.
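One possible sketch: evaluate each cubic segment with de Casteljau's algorithm,
and enforce a continuous derivative by taking the second control point of the
second segment to be the reflection of the next-to-last control point of the
first segment.  The control points below are placeholders, not the letter "S":

// Sketch: two cubic Bezier segments joined with a continuous derivative.
#include <cstdio>

struct Pt { double x, y; };

// de Casteljau evaluation of a cubic Bezier segment at parameter t in [0,1]
Pt bezier3(const Pt P[4], double t) {
  Pt q[4];
  for (int i = 0; i < 4; ++i) q[i] = P[i];
  for (int r = 1; r <= 3; ++r)
    for (int i = 0; i + r <= 3; ++i) {
      q[i].x = (1.0 - t) * q[i].x + t * q[i + 1].x;
      q[i].y = (1.0 - t) * q[i].y + t * q[i + 1].y;
    }
  return q[0];
}

int main() {
  Pt P[4] = { {0.0, 0.0}, {1.0, 2.0}, {2.0, 2.0}, {3.0, 0.0} };  // first segment
  Pt Q[4];
  Q[0] = P[3];                      // C^0: the segments share an endpoint
  Q[1].x = 2.0 * P[3].x - P[2].x;   // C^1: Q1 - Q0 must equal P3 - P2
  Q[1].y = 2.0 * P[3].y - P[2].y;
  Q[2].x = 5.0; Q[2].y = -2.0;      // the remaining control points are free
  Q[3].x = 6.0; Q[3].y =  0.0;
  for (int k = 0; k <= 10; ++k) {   // points along each segment, ready to plot
    Pt a = bezier3(P, k / 10.0), b = bezier3(Q, k / 10.0);
    std::printf("%g %g   %g %g\n", a.x, a.y, b.x, b.y);
  }
  return 0;
}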
- due January 29
Problem 2, exercises 10.7.
I suggest that you first apply the FFT routine to 1, cos x, sin x, cos 2x,
and sin 2x to understand how the Fourier coefficients are stored.
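If it helps, the same experiment can be run first with a slow direct DFT; the
storage convention of the FFT routine may differ from the ordering printed
below, which is exactly what the comparison should reveal:

// Sketch: a direct DFT used only to see where the coefficients of
// 1, cos x, sin x, cos 2x, sin 2x land.
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
  const int N = 8;
  const double pi = 4.0 * std::atan(1.0);
  std::vector<double> x(N);
  for (int j = 0; j < N; ++j)
    x[j] = std::cos(2.0 * pi * j / N);      // try 1, sin x, cos 2x, ... as well
  for (int k = 0; k < N; ++k) {
    std::complex<double> c(0.0, 0.0);
    for (int j = 0; j < N; ++j)
      c += x[j] * std::exp(std::complex<double>(0.0, -2.0 * pi * k * j / N));
    std::printf("k=%d  re=% .4f  im=% .4f\n", k, c.real(), c.imag());
  }
  return 0;
}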
- due January 31
Discuss how to design a "class" for a lowpass / highpass filter
bank.
What operations would the filter bank perform on discrete time signals?
What data would the class need in order to perform its operations?
If you are unfamiliar with object-oriented programming and class
design, please read section 3.12 of the class notes.
Note that many modern programming languages use classes to achieve
object-orientation, including C++, Java, Python and JavaScript.
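One possible interface, written as a C++ declaration; the names below are
illustrative and are not the Signal and Filter classes from the class notes:

// Sketch of a filter bank interface: the data the class holds and the
// operations it performs on discrete-time signals.
#include <vector>

class FilterBank {
public:
  // construct from the lowpass analysis filter; the other three filters
  // can be derived from it for an orthogonal (or biorthogonal) design
  explicit FilterBank(const std::vector<double>& lowpass_coefficients);

  // analysis: split a signal into lowpass (coarse) and highpass (detail)
  // channels, each downsampled by two
  void analyze(const std::vector<double>& signal,
               std::vector<double>& coarse,
               std::vector<double>& detail) const;

  // synthesis: upsample, filter and add the two channels to rebuild the signal
  std::vector<double> synthesize(const std::vector<double>& coarse,
                                 const std::vector<double>& detail) const;

private:
  std::vector<double> h_;   // lowpass analysis coefficients
  std::vector<double> g_;   // highpass analysis coefficients
  std::vector<double> ht_;  // lowpass synthesis coefficients
  std::vector<double> gt_;  // highpass synthesis coefficients
};

The written discussion should justify choices like these: which filters are
stored versus derived, and how boundaries of finite-length signals are handled.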
- due February 7
Program the maxflat biorthogonal filter bank, and use it
to analyze the exponential function on the interval [0,1], using at most
512 coefficients in the expansion, and nine levels of analysis.
Then synthesize the output from the analysis bank and show that you
can recover the original data.
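A sketch of the analysis/synthesis cascade; the two-tap Haar pair below is only
a stand-in so that the example is self-contained and reconstructs exactly, and
the maxflat biorthogonal filters from the class notes would replace it:

// Sketch: nine levels of analysis and synthesis on 512 samples of e^t.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

void analyze(const std::vector<double>& x,
             std::vector<double>& a, std::vector<double>& d) {
  const double r = 1.0 / std::sqrt(2.0);
  a.resize(x.size() / 2);  d.resize(x.size() / 2);
  for (std::size_t k = 0; k < a.size(); ++k) {      // filter and downsample
    a[k] = r * (x[2 * k] + x[2 * k + 1]);
    d[k] = r * (x[2 * k] - x[2 * k + 1]);
  }
}

std::vector<double> synthesize(const std::vector<double>& a,
                               const std::vector<double>& d) {
  const double r = 1.0 / std::sqrt(2.0);
  std::vector<double> x(2 * a.size());
  for (std::size_t k = 0; k < a.size(); ++k) {      // upsample and filter
    x[2 * k]     = r * (a[k] + d[k]);
    x[2 * k + 1] = r * (a[k] - d[k]);
  }
  return x;
}

int main() {
  const int N = 512;                                // at most 512 coefficients
  std::vector<double> x(N);
  for (int j = 0; j < N; ++j) x[j] = std::exp((j + 0.5) / N);
  std::vector<double> a = x, d;
  std::vector<std::vector<double> > details;
  for (int level = 0; level < 9; ++level) {         // nine levels of analysis
    std::vector<double> coarse;
    analyze(a, coarse, d);
    details.push_back(d);
    a = coarse;
  }
  for (int level = 8; level >= 0; --level)          // synthesis bank
    a = synthesize(a, details[level]);
  double err = 0.0;
  for (int j = 0; j < N; ++j) err = std::max(err, std::fabs(a[j] - x[j]));
  std::printf("max reconstruction error: %e\n", err);
  return 0;
}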
- due February 12
Problem 2, exercises 11.1.4 (Richardson extrapolation to compute
second derivatives).
Show that your extrapolation obtains the maximum attainable accuracy
by plotting the log of the error versus minus the log of h for each
second derivative.
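A sketch of the extrapolation table, assuming the centered second-difference
formula and the sample function f(x) = sin(x); the exercise may specify a
different function and sequence of step sizes:

// Sketch: Richardson extrapolation of the centered second difference.
#include <cmath>
#include <cstdio>

double f(double x) { return std::sin(x); }

// centered difference: (f(x-h) - 2 f(x) + f(x+h)) / h^2 = f''(x) + O(h^2)
double d2(double x, double h) { return (f(x - h) - 2.0 * f(x) + f(x + h)) / (h * h); }

int main() {
  const double x = 1.0, exact = -std::sin(1.0);
  const int M = 6;
  double T[M][M];
  for (int i = 0; i < M; ++i) {
    double h = 0.5 / (1 << i);               // h, h/2, h/4, ...
    T[i][0] = d2(x, h);
    for (int j = 1; j <= i; ++j)             // cancel the h^2, h^4, ... terms
      T[i][j] = T[i][j - 1]
              + (T[i][j - 1] - T[i - 1][j - 1]) / (std::pow(4.0, j) - 1.0);
    std::printf("h=%g  error after extrapolation: %e\n",
                h, std::fabs(T[i][i] - exact));
  }
  return 0;
}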
- due February 14
Program Monte Carlo integration to compute the area of the region
x^2 + 2 y^2 less than or equal to 1, by integrating the characteristic
function of this region over the unit square.
Plot the log of the error versus the log of the number of integration
points.
Comment on how the slope of this plot compares to the theory.
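A minimal sketch; note that sampling the unit square estimates the quarter of
the ellipse area lying in [0,1] x [0,1], and the full area follows by symmetry:

// Sketch: Monte Carlo integration of the characteristic function of
// x^2 + 2 y^2 <= 1 over the unit square.
#include <cmath>
#include <cstdio>
#include <cstdlib>

int main() {
  const double pi = 4.0 * std::atan(1.0);
  const double exact = pi / (4.0 * std::sqrt(2.0));  // quarter of the ellipse area
  std::srand(12345);
  for (int N = 100; N <= 1000000; N *= 10) {
    int hits = 0;
    for (int i = 0; i < N; ++i) {
      double x = std::rand() / (RAND_MAX + 1.0);
      double y = std::rand() / (RAND_MAX + 1.0);
      if (x * x + 2.0 * y * y <= 1.0) ++hits;
    }
    double estimate = double(hits) / N;               // area inside [0,1]^2
    std::printf("N=%8d  log10(N)=%g  log10(error)=%g\n",
                N, std::log10((double)N),
                std::log10(std::fabs(estimate - exact)));
  }
  return 0;
}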
- due February 19
In order to approximate a function by a scaling function and wavelet,
it is useful to compute the integrals of the function times the dual
scaling function, as in the section on Function Decomposition and the
equation on the top of page 599 of the class notes.
Typically, the dual scaling function is computable only at dyadic
points, available by the refinement equation for the dual scaling
function.
Describe how you would approximate these integrals for general filter
bank indices k, in a way that is consistent with the order of
approximation for the scaling functions.
For the Daubechies maxflat orthogonal filter bank with index k,
the error estimates say that the approximation should be order k.
Then program the Daubechies maxflat orthogonal filter for k=2 and
use it to approximate the exponential function on the interval [0,1].
Approximate this function at the scale 2^{-6}, and use the refinement
equation for the scaling function to determine the coefficients for this
same approximation at dyadic points on scales from 2^{-7} to
2^{-12}.
Then compute the errors for each of these at dyadic points at scale
2^{-13} and show that the error in your approximation has the
correct order.
You may use the C++ classes in
DaubechiesFilterBank.C
DaubechiesFilterBank.H
ScalingFunction.C
ScalingFunction.H
Wavelet.C
Wavelet.H
Filter.C
Filter.H
Signal.C
Signal.H
and the test program in
testDaubechiesFilterBank.C
to provide code for the Daubechies filter bank, scaling functions and
so on.
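The intended solution should use the classes listed above.  Purely as a sketch
of the refinement step, and with indexing, normalization and boundary
conventions that may differ from the class notes, scaling coefficients at one
scale can be pushed to the next finer scale as follows:

// Sketch: scaling coefficients c_{j,k} ~ 2^{-j/2} f(2^{-j} k) at scale 2^{-j},
// refined to scale 2^{-(j+1)} by c_{j+1,m} = sum_k h_{m-2k} c_{j,k}, using the
// Daubechies four-tap orthogonal filter normalized so that sum_n h_n = sqrt(2).
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  const double s3 = std::sqrt(3.0), s2 = std::sqrt(2.0);
  const double h[4] = { (1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
                        (3 - s3) / (4 * s2), (1 - s3) / (4 * s2) };
  int j = 6;
  int n = 1 << j;
  std::vector<double> c(n);
  for (int k = 0; k < n; ++k)                 // one-point quadrature for c_{j,k}
    c[k] = std::exp(double(k) / n) / std::sqrt(double(n));
  for (; j < 12; ++j) {                        // refine from 2^{-6} to 2^{-12}
    std::vector<double> fine(2 * n, 0.0);
    for (int k = 0; k < n; ++k)
      for (int i = 0; i < 4; ++i) {
        int m = (2 * k + i) % (2 * n);         // periodic wrap, for simplicity
        fine[m] += h[i] * c[k];
      }
    c = fine;
    n *= 2;
  }
  std::printf("computed %d coefficients at scale 2^{-12}\n", n);
  return 0;
}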
- due February 21
Problem 1, section 11.2.7 (Gaussian quadrature for Gamma function).
- due February 26
Write a program to compute the integral of e^{x y} on the unit square
[0,1] x [0,1] with the maximum attainable accuracy.
Plot the log of the error versus the log of the number of quadrature
points to demonstrate that your code is working properly.
Describe how you chose your method, and how you know that your method
is working properly.
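One possible sketch, using a composite two-point Gauss rule in each coordinate;
the "maximum attainable accuracy" part of the assignment may instead call for a
higher-order or adaptive rule, which is part of what the write-up should
justify:

// Sketch: composite tensor-product Gauss quadrature for e^{xy} on [0,1]^2.
#include <cmath>
#include <cstdio>

double f(double x, double y) { return std::exp(x * y); }

int main() {
  // reference value: int_0^1 int_0^1 e^{xy} dx dy = sum_{n>=0} 1/((n+1)^2 n!)
  double exact = 0.0, term = 1.0;
  for (int n = 0; n < 30; ++n) {
    exact += term / ((n + 1.0) * (n + 1.0));
    term /= (n + 1.0);
  }
  const double node = 1.0 / std::sqrt(3.0);   // 2-point Gauss nodes on [-1,1]
  for (int m = 1; m <= 64; m *= 2) {           // m x m cells, 4 points per cell
    double h = 1.0 / m, sum = 0.0;
    for (int i = 0; i < m; ++i)
      for (int j = 0; j < m; ++j)
        for (int a = -1; a <= 1; a += 2)
          for (int b = -1; b <= 1; b += 2) {
            double x = (i + 0.5 + 0.5 * a * node) * h;
            double y = (j + 0.5 + 0.5 * b * node) * h;
            sum += 0.25 * h * h * f(x, y);     // unit weights, scaled to the cell
          }
    std::printf("points=%6d  log10(error)=%g\n",
                4 * m * m, std::log10(std::fabs(sum - exact)));
  }
  return 0;
}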
- due February 28
Construct a Pade approximation to the exponential, using a quotient of
two quadratics.
Find the roots of the quadratic in the denominator.
Use the two quadratics and the roots of the denominator quadratic
to describe a numerical method for solving an initial-value problem
with constant coefficients.
What order do you expect this method to have?
(You do not need to program the method.)
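A sketch of the construction: match the Taylor coefficients of e^z to determine
the two quadratics, then find the roots of the denominator numerically; the
discussion of the resulting initial-value method and its order is left as the
written part of the assignment:

// Sketch: (2,2) Pade approximation of e^z and the roots of its denominator.
#include <cmath>
#include <complex>
#include <cstdio>

int main() {
  // Taylor coefficients c_n = 1/n! of e^z
  double c[5] = { 1.0, 1.0, 0.5, 1.0 / 6.0, 1.0 / 24.0 };
  // denominator q(z) = 1 + q1 z + q2 z^2 from the 2x2 system
  //   c2 q1 + c1 q2 = -c3,   c3 q1 + c2 q2 = -c4
  double det = c[2] * c[2] - c[1] * c[3];
  double q1 = (-c[3] * c[2] + c[4] * c[1]) / det;
  double q2 = (-c[4] * c[2] + c[3] * c[3]) / det;
  // numerator p(z) = p0 + p1 z + p2 z^2 from matching the low-order terms
  double p0 = c[0], p1 = c[1] + q1 * c[0], p2 = c[2] + q1 * c[1] + q2 * c[0];
  std::printf("p: %g %g %g   q: 1 %g %g\n", p0, p1, p2, q1, q2);
  // roots of the denominator quadratic q2 z^2 + q1 z + 1 = 0
  std::complex<double> disc = std::sqrt(std::complex<double>(q1 * q1 - 4.0 * q2, 0.0));
  std::complex<double> r1 = (-q1 + disc) / (2.0 * q2);
  std::complex<double> r2 = (-q1 - disc) / (2.0 * q2);
  std::printf("denominator roots: %g%+gi, %g%+gi\n",
              r1.real(), r1.imag(), r2.real(), r2.imag());
  // check: the error of a (2,2) Pade approximation should be O(z^5)
  for (double z = 0.1; z > 0.01; z /= 2.0) {
    double R = (p0 + p1 * z + p2 * z * z) / (1.0 + q1 * z + q2 * z * z);
    std::printf("z=%g  error=%e\n", z, std::fabs(R - std::exp(z)));
  }
  return 0;
}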
- due March 5
Consider the differential equation y'(t) = lambda y(t) with y(0) given,
where lambda is complex.
Write down the analytical solution.
How does this solution behave if the real part of lambda is positive,
negative, or zero?
Next, consider Euler's method and the backward Euler method for this
problem.
Under what conditions are each of these methods zero-stable?
Under what conditions are each of these methods absolutely stable?
Which of these methods is A-stable?
Which of these methods is A(0)-stable?
Which of these methods is A_0-stable?
If the real part of lambda is negative, do we want the numerical
method to be absolutely stable?
If the real part of lambda is positive, do we want the numerical
method to be absolutely stable?
- due March 7
There are a number of test problems for initial value problem solvers
at
John Burkardt web page
Problem 6 on this web page is y'(t) = f(y(t)), where y and f are
2-vectors.
In this case, f_0 = 2 y_0 ( 1 - y_1 ) and f_1 = - y_1 ( 1 - y_0 ).
The initial condition is y_0(0) = 1 and y_1(0) = 3.
Program Euler's method and the second-order Adams-Bashforth method for
this problem.
How should you choose the timestep size for each?
Plot y_1(t) versus y_0(t) for t in [0,10] for different timestep
sizes, and discuss how accurate each of these two methods is.
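A sketch of the two integrators for this system; the Adams-Bashforth method is
started with a single Euler step, which preserves its second order, and the
step count below is only a placeholder for the values you compare:

// Sketch: forward Euler and second-order Adams-Bashforth for the 2-vector
// system f_0 = 2 y_0 (1 - y_1), f_1 = -y_1 (1 - y_0), y(0) = (1, 3).
#include <cstdio>

void f(const double y[2], double fy[2]) {
  fy[0] =  2.0 * y[0] * (1.0 - y[1]);
  fy[1] = -y[1] * (1.0 - y[0]);
}

int main() {
  const double T = 10.0;
  const int N = 10000;                  // try several step counts and compare
  const double h = T / N;
  double ye[2] = { 1.0, 3.0 };          // Euler solution
  double ya[2] = { 1.0, 3.0 };          // Adams-Bashforth solution
  double fe[2], fold[2], fnew[2];
  f(ya, fold);
  f(ye, fe);
  for (int i = 0; i < 2; ++i) {         // starting step: one Euler step each
    ye[i] += h * fe[i];
    ya[i] += h * fold[i];
  }
  for (int n = 1; n < N; ++n) {
    f(ye, fe);                          // forward Euler
    for (int i = 0; i < 2; ++i) ye[i] += h * fe[i];
    f(ya, fnew);                        // AB2: y_{n+1} = y_n + h (3 f_n - f_{n-1}) / 2
    for (int i = 0; i < 2; ++i) ya[i] += h * (1.5 * fnew[i] - 0.5 * fold[i]);
    for (int i = 0; i < 2; ++i) fold[i] = fnew[i];
    if (n % 100 == 0)                   // output y_0, y_1 for plotting
      std::printf("%g  Euler: %g %g   AB2: %g %g\n",
                  (n + 1) * h, ye[0], ye[1], ya[0], ya[1]);
  }
  return 0;
}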
- due March 19
- First, choose whether to use
EPSODE (Fortran) or
CVODE (C), which
implement linear multistep methods with variable order and step size.
- Download your choice and get it to compile and execute a test
problem.
- Describe how your code gets started: how does it compute the
solution during the early steps, while it is still building the
table of past values needed for high-order methods?
- Describe how your code selects the timestep size.
- Describe how your code selects the order of the multistep method.
- Solve y'(t)=z(t), z'(t)=1000(1-y(t)^2)z(t)-y(t)
with y(0)=2, z(0)=0 for t between 0 and 3000.
This problem has internal boundary layers, and the initial condition
is nearly on the limit cycle, which has a period of about 1615.5.
Use EPSODE to solve this problem, and examine how the
numerical solution depends on the choice of error tolerance.
You can find some discussion of this problem in Shampine,
"Evaluation of a Test Set for Stiff ODE Solvers",
ACM Transactions on Mathematical Software, vol. 7, no. 4 (1981), pp. 409-420.
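Whichever package you choose, its right-hand-side callback will wrap something
like the plain function below; the callback signature itself depends on the
package and is not shown here:

// Sketch: the right-hand side of the stiff system above (the Van der Pol
// oscillator in first-order form), written as a plain function.
void vanderpol_rhs(double /*t*/, const double y[2], double f[2]) {
  f[0] = y[1];                                        // y' = z
  f[1] = 1000.0 * (1.0 - y[0] * y[0]) * y[1] - y[0];  // z' = 1000 (1 - y^2) z - y
}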
- due Thursday, March 21
Do the exercise on the driven pendulum at the end of the section on
Predictor-Corrector methods.
(I would have preferred the RLC circuit problem after the pendulum
problem, but the homework deadline is too short.)
- due Tuesday, March 26
Program the deferred correction algorithm to solve the homework problem
for March 7.
Use a second-order Runge-Kutta scheme (such as the modified Euler
method, improved Euler method, or the SDIRK scheme) to solve the
ordinary differential equations.
Choose parameters in the deferred correction scheme to achieve order 12,
and describe how you chose those parameters.
Also describe how you chose your timestep for the deferred correction
method.
Perform a mesh refinement study to verify that your method achieves the
desired order.
You may use the code in
deferred_correction.f
and
GUIDeferredCorrection.C
to help you.
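For reference, a sketch of one modified Euler (explicit midpoint) step, the
kind of second-order building block a deferred correction sweep can call; the
right-hand-side signature below is only illustrative:

// Sketch: one explicit midpoint (modified Euler) step of size h.
#include <vector>

typedef void (*Rhs)(double t, const std::vector<double>& y, std::vector<double>& f);

void modified_euler_step(Rhs F, double t, double h, std::vector<double>& y) {
  std::vector<double> k1(y.size()), k2(y.size()), ymid(y.size());
  F(t, y, k1);                                        // stage 1
  for (std::size_t i = 0; i < y.size(); ++i) ymid[i] = y[i] + 0.5 * h * k1[i];
  F(t + 0.5 * h, ymid, k2);                           // stage 2 at the midpoint
  for (std::size_t i = 0; i < y.size(); ++i) y[i] += h * k2[i];
}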
- due Thursday, March 28
Determine the fixed points of period one and two for the second-order
SDIRK scheme applied to the logistic equation, and explain which are
stable.
- due Tuesday, April 2
Program the shooting method for the Bratu problem, using an explicit
second-order initial value problem solver.
Then perform a refinement study to determine how the error in the
initial slope converges with the timestep size for the initial value
problem solver.
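A sketch of the shooting iteration, assuming the Bratu problem in the form
u'' + lambda e^u = 0 with u(0) = u(1) = 0; lambda = 1 and the bisection bracket
below are illustrative choices that select the lower solution branch, and the
class notes may state the problem differently:

// Sketch: shooting with bisection on the initial slope, using an explicit
// second-order (midpoint) integrator for the inner initial value problem.
#include <cmath>
#include <cstdio>

const double lambda = 1.0;

// integrate u'' = -lambda e^u from t=0 to t=1 with u(0)=0, u'(0)=s; return u(1)
double shoot(double s, int N) {
  double h = 1.0 / N, u = 0.0, v = s;
  for (int n = 0; n < N; ++n) {
    double um = u + 0.5 * h * v;                     // midpoint stage
    double vm = v - 0.5 * h * lambda * std::exp(u);
    u += h * vm;
    v -= h * lambda * std::exp(um);
  }
  return u;
}

int main() {
  for (int N = 10; N <= 10000; N *= 10) {            // refinement study in N
    double a = 0.0, b = 2.0;                          // bracket for the slope
    for (int it = 0; it < 60; ++it) {                 // bisection on u(1; s) = 0
      double m = 0.5 * (a + b);
      if (shoot(a, N) * shoot(m, N) <= 0.0) b = m; else a = m;
    }
    std::printf("N=%6d  initial slope s = %.10f\n", N, 0.5 * (a + b));
  }
  return 0;
}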
- due Thursday, April 4
Problem 2, end of section 13.4 (Finite Difference Methods).
- due Tuesday, April 9
Problem 4, section 13.5.2 (Galerkin Methods); the problem involves a
delta-function right-hand side for a two-point boundary value problem.
- due Thursday, April 11
Problem 4, p. 817: beam-bending.