This is the homework website of Harrison Potter, a third-year mathematics graduate student, for MATH 225: Scientific Computing II, taught by Dr. John Trangenstein. It contains everything that I have created for each homework assignment, accessed via hyperlink. Let me know if you see ways in which I can improve my code.
Harrison Potter
1/14/2011
Code written for Math 225: Scientific Computing II
Instructor: John Trangenstein
Assignment Due Wednesday January 19th
Problem 3 in Section 9.2 from his online book
Problem Statement:
3.) Suppose you are to make a table of values of log(x), 1/2 <= x <= 3/2, on an equally spaced mesh of
width 1/(N+1), for some positive integer N. Assume that polynomial interpolation will be applied
to the table, and that the maximum error in interpolation is to be at most 10^(-6).
What value of N should you choose?
To answer this question, write a computer program to
a.) determine the interpolating polynomial p(x) for arbitrary values of the number of interpolation points
b.) evaluate the error in the interpolation at arbitrary points x in [1/2, 3/2]
c.) plot the error log(x)-p(x) as a function of x
d.) determine the maximum error at points x_j = 1/2 + j/[100(N + 1)] for 0 <= j <= 100(N + 1)
e.) plot the logarithm of the maximum error as a function of N.
Outline of Solution:
I write a program in C++ to implement the algorithm given after Example 9.2-8
in Dr. Trangenstein's online textbook, presently page 306, for determining the Newton interpolating polynomial
of a function.
There are N interpolating points at 1/2+j/(N+1) for j in 1,2,3,...,N (endpoints not included).
These are stored in a vector xn.
A vector xpts contains these points, the endpoints, and 99 additional points equally spaced between
each pair of successive previously included points.
The vector xpts is used for plotting and for approximating the L-infinity norm of p(x) - log(x), which gives
a measure of the accuracy of our interpolating polynomial in approximating the logarithm.
The polynomial coefficients are stored in the vector gamma.
Plots are created by writing data to a text file, formatted so that gnuplot can read
the data from the text file and produce a plot.
My ability to perform tasks a and b of the problem statement is demonstrated
by producing the plot required for part c; the data is stored in jNPIc.txt
and the plot in jNPIc.ps.
My ability to perform task d is demonstrated by producing the plot required
for part e; the data is stored in jNPIe.txt and the plot in jNPIe.ps.
This plot is then used to reach a conclusion regarding how large N must
be in order to achieve the desired accuracy.
Conclusion:
The convergence in the L-infinity norm of the interpolating polynomial
as a function of N is exponential up to the point where numerical errors
in the calculations begin to cause a deterioration in the accuracy of
the final result.
An optimal choice for N would be 25, which would result in a maximum error
of about 10^(-11).
If such accuracy is not needed, then a maximum error of just under 10^(-6)
can be obtained with N as small as 13.
ANSWER: Let N=13.
Regarding the plot produced for part c:
Near the boundaries of the interval the error becomes much larger
than it is near the middle, as is typical of interpolation at equally
spaced nodes.
The error plot looks like a polynomial of degree N, whereas the
interpolating polynomial itself has degree N-1.
Harrison Potter
2/17/2011
Code written for Math 225: Scientific Computing II
Instructor: John Trangenstein
Assignment Due Wednesday January 26th
Problem 1 in Section 9.3.5 from his online book
Problem Statement:
1.) Write a program to:
a.) Compute the piecewise Hermite cubic interpolant to
f(x)=1/(x^2+25)
at equidistant nodes on [-1,1].
b.) Plot f and the interpolant for 2 and 3 nodes on [-1,1].
c.) By sampling the error at 100 points between each node,
determine a computer-generated estimate for the interpolation error.
d.) Plot the log of the error versus the log of the mesh width.
Outline of Solution:
The interpolation can be set up using the basis functions w_0(y) and w_1(y)
given in section 9.4.4 of Dr. Trangenstein's online text. He provides the
procedure for evaluating the spline at any interior point. From here it's
just a matter of producing the plots and performing the appropriate tasks.
Key Point:
The derivative terms in the online book were missing a factor, which caused the derivative
of the interpolating piecewise cubic not to match the derivative of the function,
even though the values of the interpolant at the interpolation points
matched just fine. The piecewise Hermite cubic is a very good interpolant: it does
not oscillate wildly as the number of interpolation points grows.
Harrison Potter
5/4/2011
Code written for Math 225: Scientific Computing II
Instructor: John Trangenstein
Assignment Due Monday February 7th
Problem Statement:
1.) Do the following:
a.) Write a program to compute the Legendre polynomials
of order at most n at a given point x.
b.) Write a program to find all of the zeros of the
Legendre polynomials of order at most n.
c.) Make a table of the zeros for all orders between 1 and 10.
HINT:
The zeros of p_k are real, between -1 and 1, and interlace the zeros
of p_{k-1}: the first is between -1 and the first zero of p_{k-1},
the next is between the first 2 zeros of p_{k-1},... and the last
is between the last zero of p_{k-1} and 1.
Harrison Potter
5/9/2011
Code written for Math 225: Scientific Computing II
Instructor: John Trangenstein
Assignment Due Monday March 14th
Problem Statement:
1.) Do the following:
i.) Program an adaptive quadrature routine as in the online textbook,
where the input is a user-defined function f and a relative error
tolerance epsilon.
ii.) Apply this adaptive quadrature routine to a number of different functions,
counting the number of function evaluations in each case and using a relative
error tolerance of epsilon=10^-10.
a.) [1,3] for e^{2x}sin(3x)
b.) [1,3] for e^{3x}sin(2x)
c.) [0,5] for 2xcos(2x)-(x-2)^{2}
d.) [0,5] for 4xcos(2x)-(x-2)^{2}
e.) [0.1,2] for sin(1/x)
f.) [0.1,2] for cos(1/x)
g.) [0,1] for x^{1/2}
h.) [0,1] for (1-x)^{1/2}
i.) [0,1] for (1-x)^{1/4}
Outline of Solution:
The best way I found to compute the integral reliably, with a reliable error bound, is as
follows. Start from the coarsest approximation and work toward finer scales recursively.
The trick is choosing a good stopping condition; I tried several approaches.
I settled upon bounding the error in each iteration by a fixed number epsilon, subject
to the multiplicative factor in the derivation of the stopping condition outlined in
the online textbook. By keeping track of the number of function evaluations used, I
could determine the number of times this error bound was applied. The total error
is then bounded by the value of this fixed error bound, which is what I called
epsilon, times the number of such bounds used in the final result, which is half
the total number of intervals in the final result.
In the end I have a reliable error bound, but one that depends upon the number of intervals
that end up being part of the final result, which is something that cannot be known ahead of
time because of the recursion. If we set epsilon=10^-16, however, we obtain results that
are accurate to within 10^-10 for all of the functions tested, with a relative error typically
well below the desired 10^-10 threshold.
Results:
Table containing results.
Code:
Main C++ code
GNUmakefile