Travel Directions
Gene Golub SIAM Summer School 2010  
International Summer School on Numerical Linear Algebra  
Hotel Sierra Silvana, Selva di Fasano, Brindisi (Italy), June 7-18, 2010  




Minimizing communication in numerical linear algebra
James Demmel, University of California at Berkeley, USA.

There are two main costs in any algorithm: arithmetic and communication, where communication means moving data, either between levels of a memory hierarchy (like cache and main memory) on a sequential computer, or between processors over a network. Arithmetic is much faster than communication, and the speed difference is growing exponentially with time, so designing fast algorithms means minimizing communication. We review recent work both in establishing lower bounds on the amount of communication needed for many linear algebra operations (dense and sparse, sequential and parallel) and in designing algorithms that attain these bounds, many of which perform asymptotically less communication than their conventional counterparts.
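As a concrete illustration of the idea (added here for this write-up, not taken from the abstract), the classical blocked matrix multiplication reduces the number of words moved between fast and slow memory by working on b-by-b submatrices; choosing b on the order of the square root of the fast-memory size attains the known communication lower bound. A minimal NumPy sketch:

```python
import numpy as np

def blocked_matmul(A, B, b=64):
    """Compute A @ B one b-by-b block at a time. Each block triple
    (i, j, k) does O(b^3) arithmetic on O(b^2) data, cutting the
    words moved between slow and fast memory from O(n^3) for the
    naive triple loop to O(n^3 / b). NumPy slicing clips at the
    edges, so n need not be a multiple of b."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C
```

The function name and block size are illustrative; real communication-avoiding libraries tune the blocking to the actual cache sizes.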




Nonlinear eigenvalue problems: analysis and numerical solution

Volker Mehrmann, Technische Universität Berlin, Germany.


Nonlinear eigenvalue problems arise in a huge number of applications, ranging from classical mechanics (e.g. the vibration of structures or vehicles) and acoustics to the simulation of quantum dots. This course discusses several major applications and the basic theory of nonlinear eigenvalue problems, including linearization techniques for polynomial eigenvalue problems, which have recently been revisited in the particular context of preserving given structures. We then discuss the most important techniques for the numerical solution of polynomial and general nonlinear eigenvalue problems, as well as the associated conditioning and perturbation theory.
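To make the linearization idea concrete, here is a minimal sketch (an illustration added for this write-up, not the course's own code) of the first companion linearization of the quadratic eigenvalue problem (lam^2 M + lam C + K) x = 0, which converts it into a 2n-by-2n generalized linear eigenproblem; the helper name quad_eig is made up for this example:

```python
import numpy as np
from scipy.linalg import eig

def quad_eig(M, C, K):
    """Eigenvalues of (lam^2 M + lam C + K) x = 0 via the first
    companion linearization: with z = [x; lam*x], solve
        A z = lam B z,  A = [[0, I], [-K, -C]],  B = [[I, 0], [0, M]].
    Row 1 enforces lam*x = lam*x; row 2 enforces
    -K x - C (lam x) = lam M (lam x), i.e. the quadratic problem."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    return eig(A, B)  # 2n eigenvalues of the quadratic problem
```

For instance, with M = I, C = 0, K = -I (i.e. lam^2 x - x = 0) the pencil's eigenvalues are +1 and -1, as expected. Structure-preserving linearizations, one of the course topics, refine this construction so that symmetries of (M, C, K) survive in (A, B).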





From Matrix to Tensor: The Transition to Computational Multilinear Algebra

Charles Van Loan, Cornell University, Ithaca, New York, USA.


High-dimensional modeling is becoming ubiquitous across the sciences and engineering, creating a demand for informative tensor-based computation. A tensor can be regarded as a multi-subscripted matrix. What does it mean to say that an n1 x n2 x n3 tensor has low rank, and how might that rank be estimated? Can we reshape a length-2^100 vector into a tensor that has a sparse representation? Does the singular value decomposition of a flattened tensor tell us anything about the original data set? All of these questions lead to challenging problems in numerical linear algebra that will be covered in this short course. Our goal is to portray tensor computation as the next big thing in scientific computing, where reasoning about algorithms has already progressed from the level of scalars to the level of matrices to the level of block matrices.
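To illustrate what "flattening" means, here is a minimal NumPy sketch (added for this write-up, not from the abstract) of the mode-k unfolding of a tensor; the helper name unfold is hypothetical. Each unfolding is an ordinary matrix, so its SVD and rank are well defined, and the tuple of unfolding ranks is the tensor's multilinear rank:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: make the mode-k index the row index and
    flatten all remaining indices into the columns, so a
    d1 x d2 x ... tensor becomes a d_k x (prod of the rest) matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# A 2 x 3 x 4 example: T[i, j, k] = 12 i + 4 j + k
T = np.arange(24.0).reshape(2, 3, 4)
ranks = tuple(np.linalg.matrix_rank(unfold(T, k)) for k in range(3))
```

Because the entries of this particular T are affine in each index separately, every unfolding has rank 2, even though the matrix shapes differ (2 x 12, 3 x 8, and 4 x 6); the singular values of these unfoldings are exactly what an SVD-based analysis of a flattened tensor would examine.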



Linear Algebra and Optimization

Margaret H. Wright, Courant Institute, New York University, USA.


For more than 60 years, starting with the simplex method for linear programming, the heart of many computational optimization methods has consisted of one or more linear algebraic subproblems. Very often these are linear systems, symmetric and unsymmetric, ranging from small to large and from dense to sparse. In addition, some optimization techniques depend on linear least squares, the calculation or estimation of eigenvalues/eigenvectors and singular values/vectors, and the updating of matrix factorizations. This course will examine highlights of the two-way connections between linear algebra and optimization, including recent applications in large-scale simulation and theoretical computer science.
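As one small example of such a subproblem (an illustration added for this write-up, not part of the abstract), a dense linear least-squares problem of the kind that arises inside many optimization methods is typically solved via a QR factorization rather than the normal equations, for numerical stability; a minimal NumPy sketch:

```python
import numpy as np

def ls_qr(A, b):
    """Solve min ||A x - b||_2 for full-column-rank A via the thin QR
    factorization A = Q R. This avoids forming A^T A, whose condition
    number is the square of A's (the weakness of the normal equations)."""
    Q, R = np.linalg.qr(A)        # Q: m-by-n orthonormal, R: n-by-n upper triangular
    return np.linalg.solve(R, Q.T @ b)
```

The function name is made up for this sketch; in large-scale or sparse settings the same subproblem would instead be attacked with sparse factorizations or iterative methods, which is exactly the kind of trade-off the course addresses.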