1st Canadian Symposium on Numerical Analysis and Scientific Computing
Monday, June 17
Salle Des Plaines C
Implicit Dealiasing of Convolutions over Distributed Memory
Bowman, John C. (1) and Roberts, Malcolm (2)
(1) University of Alberta, (2) Université d'Aix-Marseille

An efficient algorithm is described for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical to those of the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the data. This decoupling of the data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The technique also allows for the efficient dealiasing of the hyperconvolutions that arise when Fourier transforming cubic and higher powers. We discuss recent advances in the parallel implementation of implicit dealiasing that exploit the possibility of overlapping computation with communication.
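For reference, the conventional explicit zero-padding approach that implicit dealiasing improves upon can be sketched in a few lines of NumPy (this illustrates the baseline technique only, not the authors' algorithm; the function name is ours):

```python
import numpy as np

def dealiased_convolution(f, g):
    """Dealiased 1-D linear convolution via explicit zero padding:
    pad the n coefficients to length 2n, transform, multiply pointwise,
    transform back, and keep only the first n (unaliased) terms."""
    n = len(f)
    m = 2 * n  # padded length >= 2n - 1 prevents wrap-around (aliasing)
    F = np.fft.fft(f, m)  # fft(a, m) zero-pads a to length m
    G = np.fft.fft(g, m)
    return np.fft.ifft(F * G)[:n]

# Check against the direct convolution sum s_k = sum_{p+q=k} f_p g_q.
rng = np.random.default_rng(0)
f = rng.standard_normal(8) + 1j * rng.standard_normal(8)
g = rng.standard_normal(8) + 1j * rng.standard_normal(8)
direct = np.array([sum(f[p] * g[k - p] for p in range(k + 1))
                   for k in range(8)])
assert np.allclose(dealiased_convolution(f, g), direct)
```

The doubled work buffer here must be contiguous with the data; the point of implicit dealiasing is to obtain the same unaliased result while decoupling the work memory from the data.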

A Paradox of Numerical Stability and Ill-Conditioning
Trummer, Manfred and Reimer, Sarah E.
Simon Fraser University

Higher-order methods for solving differential equations are, in general, highly sensitive to perturbations in the data. We investigate a radial basis function method that yields accurate results despite computing with ill-conditioned (nearly singular) matrices. We provide evidence that, in general, perturbations are not amplified nearly as strongly as the condition numbers of the matrices suggest, and we explain why stability is maintained.

Extending Preconditioned GMRES to Nonlinear Optimization
De Sterck, Hans
Department of Applied Mathematics, University of Waterloo

Preconditioned GMRES is a powerful iterative method for linear systems. In this talk, I will show how the concept of preconditioned GMRES can be extended to the general class of nonlinear optimization problems, using genuinely nonlinear preconditioners. I will present a nonlinear GMRES (N-GMRES) optimization method, which combines preliminary iterates generated by a stand-alone simple optimization method (the nonlinear preconditioner) to produce accelerated iterates in a generalized Krylov space. The nonlinear acceleration process, which is closely related to existing acceleration methods for nonlinear systems, including so-called Anderson acceleration, is combined with a line search in every step to obtain a general nonlinear optimization method for which global convergence can be proved when steepest-descent preconditioning is used. Numerical tests show that N-GMRES is competitive with established nonlinear optimization methods, and outperforms them for a difficult canonical tensor approximation problem when an advanced nonlinear preconditioner is used. This suggests that the real power of N-GMRES may lie in its ability to use powerful problem-dependent nonlinear preconditioners. The extension of these ideas to nonlinear preconditioning for the nonlinear CG optimization method is also discussed, and results are presented showing the effectiveness of the approach.
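The flavor of the acceleration step can be conveyed by the closely related Anderson acceleration mentioned in the abstract: a steepest-descent step plays the role of the nonlinear preconditioner, and a small least-squares problem over the stored iterates produces the accelerated iterate. This is a minimal sketch under our own simplifications (fixed step, no line search, illustrative names), not the N-GMRES method itself:

```python
import numpy as np

def anderson_accelerated_descent(grad, x0, step=0.1, m=5,
                                 iters=200, tol=1e-10):
    """Anderson acceleration of the fixed-point map q(x) = x - step*grad(x)
    (one steepest-descent step acting as the 'preconditioner')."""
    q = lambda v: v - step * grad(v)
    X = [x0]            # stored iterates
    F = [q(x0) - x0]    # stored fixed-point residuals
    x = q(x0)
    for _ in range(iters):
        X.append(x)
        F.append(q(x) - x)
        X, F = X[-(m + 1):], F[-(m + 1):]   # sliding window of size m+1
        if np.linalg.norm(F[-1]) < tol:
            break
        # Least-squares combination of residual differences ...
        dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
        dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
        gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
        # ... gives the accelerated (extrapolated) iterate.
        x = X[-1] + F[-1] - (dX + dF) @ gamma
    return x

# Minimize the convex quadratic 0.5 x'Ax - b'x, whose gradient is Ax - b.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.ones(5)
xmin = anderson_accelerated_descent(lambda v: A @ v - b, np.zeros(5))
assert np.linalg.norm(A @ xmin - b) < 1e-6
```

N-GMRES adds a line search at every step (which is what makes a global convergence proof possible with steepest-descent preconditioning) and admits much more powerful problem-dependent preconditioners than the fixed-step descent used here.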

A New Family of Matrices for Computation
Stenger, Frank and Guevara Vasquez, Fernando
University of Utah, Salt Lake City

We start with a formula for interpolating a function $f$ on an interval $(a,b)$, \[ p(x) = \sum_{k=1}^n b_k(x) f(x_k),\] which could be the usual formula for polynomial interpolation, for trigonometric polynomial interpolation, for spline interpolation, for rational function interpolation, for interpolation by Sinc methods, etc., where $a < x_1 < \cdots < x_n < b$. We assume that this interpolation results from using basis functions that are orthogonal over $(a,b)$ with respect to a weight function $w$ that is positive a.e. on $(a,b)$. For example, in the case of Chebyshev polynomials, take the interval $(-1,1)$ and the weight function $(1-x^2)^{-1/2}$, so that the $x_j$ are the zeros of the Chebyshev polynomial $T_n(x)$. We investigate the properties and uses of the matrices $B$ with entries $b_{jk} = \int_a^{x_j} w(x)\, b_k(x)\, dx$ and $C$ with entries $c_{jk} = \int_{x_j}^b w(x)\, b_k(x)\, dx$. These matrices enable new and simple methods of arbitrary accuracy for approximate integration, differentiation, solving differential equations, inverting Laplace transforms, etc.
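The Chebyshev case named in the abstract can be worked out numerically as an illustration (a sketch under our own choices: Lagrange basis at the zeros of $T_n$, trapezoid quadrature after the substitution $x = \cos\theta$, which removes the endpoint singularity of the weight; the helper names are ours):

```python
import numpy as np

n = 8
theta = (2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)
x = np.cos(theta)          # zeros of T_n, in decreasing order

def lagrange(k, t):
    """Lagrange basis polynomial b_k at the nodes x, evaluated at t."""
    p = np.ones_like(t)
    for i in range(n):
        if i != k:
            p = p * (t - x[i]) / (x[k] - x[i])
    return p

def trap(y, t):
    """Composite trapezoid rule for samples y at nodes t."""
    return np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2

# b_{jk} = int_{-1}^{x_j} w(x) b_k(x) dx with w(x) = (1 - x^2)^{-1/2};
# substituting x = cos(t) turns this into the smooth integral
# int_{theta_j}^{pi} b_k(cos t) dt.
B = np.zeros((n, n))
for jrow in range(n):
    t = np.linspace(theta[jrow], np.pi, 20001)
    for k in range(n):
        B[jrow, k] = trap(lagrange(k, np.cos(t)), t)

# (B f)_j approximates int_{-1}^{x_j} w(x) f(x) dx, and is exact (up to
# quadrature error) for polynomials f of degree < n.  Check f(x) = x^2,
# whose weighted indefinite integral is (pi - theta)/2 - sin(2 theta)/4:
approx = B @ x**2
exact = (np.pi - theta) / 2 - np.sin(2 * theta) / 4
assert np.max(np.abs(approx - exact)) < 1e-4
```

Applying $B$ to a vector of samples $f(x_k)$ thus produces weighted indefinite integrals of $f$ at all the nodes at once, which is the mechanism behind the approximate-integration and Laplace-transform-inversion applications mentioned above.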