## Keynote 1: Erin Claire Carson

**Monday, June 17, 9.00-10.00**

Title: **Balancing Inexactness in Large-Scale Matrix Computations**

On today's supercomputers, achieving even close to peak performance is incredibly difficult, if not impossible, for many applications. Techniques designed to improve the performance of matrix computations – making computations less expensive by reorganizing an algorithm, making intentional approximations, and using lower precision – all introduce what we can generally call "inexactness". The questions to ask are then:

1. With all these various sources of inexactness involved, does a given algorithm still get close enough to the right answer?

2. Given a user constraint on required accuracy, how can we best exploit and balance different types of inexactness to improve performance?

Studying the combination of different sources of inexactness can thus reveal not only limitations, but also new opportunities for developing algorithms for matrix computations that are both fast and provably accurate. We present a few recent examples of this approach, in which mixed precision computation is combined with other sources of inexactness.
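As one classical illustration of combining lower precision with cheap corrections, mixed-precision iterative refinement factorizes a matrix once in single precision (the expensive O(n³) step) and then recovers double-precision accuracy through inexpensive residual updates. The sketch below is a generic textbook scheme on an artificially well-conditioned matrix, not a specific algorithm from the talk:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well conditioned by construction
b = rng.standard_normal(n)

# Factorize once in single precision -- the dominant O(n^3) cost.
lu, piv = lu_factor(A.astype(np.float32))

# Iterative refinement: residuals and solution updates in double precision.
x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
for _ in range(5):
    r = b - A @ x                                  # double-precision residual
    d = lu_solve((lu, piv), r.astype(np.float32))  # cheap low-precision solve
    x = x + d.astype(np.float64)

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # near double-precision level
```

Whether such a scheme converges, and to what accuracy, depends on the conditioning of A and on the interplay of the precisions used, which is exactly the kind of balancing question raised above.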

## Keynote 2: Alex Townsend

**Monday, June 17, 13.30-14.30**

Title: **Rank-revealers must use near-local maximum volume pivoting**

Attend any numerical linear algebra conference these days and you will likely find a large fraction of talks about matrix compression, singular value estimation, and column subset selection. In this talk, we will study algorithms called rank-revealers that reveal a matrix's rank structure, providing theoretical guarantees on the quality of low-rank approximants and condition number bounds on bases for the column space and nullspace. We show that near-local maximum volume pivoting is both necessary and sufficient for rank-revealers based on Gaussian elimination (GE) and QR, making it a crucial and practical pivoting strategy. This insight elevates Gu and Eisenstat's rank-revealing QR to an archetypal rank-revealer, and we design a metric to measure the success of any pivoting strategy. It also partially explains the popularity of column-pivoted QR and GE with complete pivoting as rank-revealers.
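For readers who want to experiment, column-pivoted QR (a greedy largest-norm pivoting strategy, one of the practical rank-revealers mentioned above) is available in SciPy. The sketch below builds a numerically rank-r matrix and checks that the diagonal of R exposes the rank; the test matrix and tolerance are illustrative choices, not taken from the talk:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
m, n, r = 100, 80, 10
# Exactly rank-r matrix plus tiny noise.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A += 1e-10 * rng.standard_normal((m, n))

# Column-pivoted QR: at each step, pivot the column of largest
# remaining norm to the front.
Q, R, piv = qr(A, mode='economic', pivoting=True)

d = np.abs(np.diag(R))
rank = int(np.sum(d > 1e-8 * d[0]))
print(rank)  # the sharp drop after the r-th diagonal entry reveals the rank
```

For adversarial matrices, greedy pivoting alone can fail to reveal the rank, which is where the stronger near-local maximum volume condition of the talk comes in.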

## Keynote 3: Iveta Hnětynková

**Tuesday, June 18, 9.00-10.00**

Title: **Multilinear approximation problems: Variants, solvability analysis and extraction of core data**

In many areas, there is a need to solve multidimensional approximation problems AX ≈ B of various forms. Such problems suffer from the presence of errors and irrelevant or redundant data in B and/or A. Additional difficulties arise because individual observations (columns of B) can be correlated with different subsets of columns of A. Least squares methods (such as total or scaled least squares) are widely used here; generally, a minimum-norm data correction is sought that ensures the existence of an approximate solution X. However, such a correction may not always exist.
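As a concrete single-observation illustration, the classical total least squares solution of Ax ≈ b can be read off from the SVD of the augmented matrix [A | b] (cf. [2]). The minimal sketch below uses synthetic data; note that when the last component of the chosen singular vector vanishes, the formula breaks down, which is the nongeneric situation where the minimum-norm correction discussed above does not exist:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 3
x_true = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((m, n))
b = A @ x_true + 1e-3 * rng.standard_normal(m)
A += 1e-3 * rng.standard_normal((m, n))  # errors in A as well as in b

# TLS via the SVD of [A | b]: the solution comes from the right singular
# vector belonging to the smallest singular value.
Z = np.hstack([A, b[:, None]])
_, _, Vt = np.linalg.svd(Z)
v = Vt[-1]                 # last right singular vector
x_tls = -v[:n] / v[n]      # undefined if v[n] == 0: the nongeneric case

print(x_tls)
```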

Irrelevant and redundant data can be removed by the core reduction introduced in [8] and [9] for single-observation problems. This yields a minimally dimensioned subproblem, called the core problem, with better solvability properties. Recently, the core reduction was extended to multiple-observation problems [3], [4], and to bilinear, tensor, and other highly structured problems [5], [6]. The reduction can be direct (using the SVD or the Tucker decomposition) or iterative, based on the block generalization of the Golub–Kahan bidiagonalization [1]. The recent paper [7] gives a unifying universal Krylov subspace approach yielding core data with a band model matrix and an orthogonal observation matrix.
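A minimal sketch of the single-observation Golub–Kahan bidiagonalization [1], which underlies the iterative approach, is given below. It omits the reorthogonalization needed in finite-precision computations, and the stopping criterion that would expose the core problem dimensions; the function name and test sizes are illustrative:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization started from b.

    Returns U (m x (k+1)) and V (n x k) with orthonormal columns and a
    (k+1) x k lower bidiagonal B such that A @ V = U @ B (in exact
    arithmetic; no reorthogonalization is performed here)."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for i in range(k):
        w = A.T @ U[:, i] - (beta[i] * V[:, i - 1] if i > 0 else 0.0)
        alpha[i] = np.linalg.norm(w)
        V[:, i] = w / alpha[i]
        w = A @ V[:, i] - alpha[i] * U[:, i]
        beta[i + 1] = np.linalg.norm(w)
        U[:, i + 1] = w / beta[i + 1]
    # Assemble B: alpha on the diagonal, beta[1:] on the subdiagonal.
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alpha
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    return U, B, V

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
U, B, V = golub_kahan(A, b, 5)
```

In the core-reduction setting, running the recurrence until a zero alpha or beta appears isolates the leading bidiagonal block, which carries exactly the core data of the problem.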

In this talk, we summarize various formulations of multilinear approximation problems and analyze their solvability. Then we turn to core data reduction. Both the direct and the iterative approaches will be explained in detail, and the properties of the resulting minimal core problem will be studied. We also give some notes on difficulties that appear in finite-precision arithmetic (FPA) computations.

[1] G. H. Golub, W. Kahan, Calculating the singular values and pseudo-inverse of a matrix. SIAM J. Numer. Anal. Ser. B 2 (1965), pp. 205–224.

[2] G. H. Golub, C. F. Van Loan. An analysis of the total least squares problem. SIAM J. Numer. Anal., 17(6) (1980), pp. 883–893.

[3] I. Hnětynková, M. Plešinger, Z. Strakoš. The core problem within linear approximation problem AX ≈ B with multiple right-hand sides. SIAM J. Matrix Anal. and Appl., 34(3) (2013), pp. 917–931.

[4] I. Hnětynková, M. Plešinger, Z. Strakoš. Band generalization of the Golub–Kahan bidiagonalization, generalized Jacobi matrices, and the core problem. SIAM J. Matrix Anal. and Appl., 36(2) (2015), pp. 417–434.

[5] I. Hnětynková, M. Plešinger, J. Žáková. TLS formulation and core reduction for problems with structured right-hand sides. Linear Algebra and its Appl., 555 (2018), pp. 241–265.

[6] I. Hnětynková, M. Plešinger, J. Žáková. On TLS formulation and core reduction for data fitting with generalized models. Linear Algebra and its Appl., 577 (2019), pp. 1–20.

[7] I. Hnětynková, M. Plešinger, J. Žáková. Krylov subspace approach to core problems within multilinear approximation problems: A unifying framework. SIAM J. Matrix Anal. and Appl., 44 (2023), pp. 53–79.

[8] C. C. Paige, Z. Strakoš. Scaled total least squares fundamentals. Numerische Mathematik, 91 (2002), pp. 117–146.

[9] C. C. Paige, Z. Strakoš. Core problem in linear algebraic systems. SIAM J. Matrix Anal. and Appl., 27(3) (2006), pp. 861–875.