Low-rank matrices (and tensors) appear often in applications, and this can
be exploited in several ways to treat large-scale problems that would otherwise
be infeasible. It has long been known that in some applications, such
as the solution of the Sylvester equation AX + XB + C = 0, working with
a low-rank C often leads to a solution X which is full-rank, but can be efficiently
approximated by a low-rank matrix. This can be used, among other things, in
solving PDEs defined by separable operators, such as the 2D Laplace operator
on rectangular domains.
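As a small numerical sketch (not part of the abstract), the decay phenomenon can be observed directly: with A and B chosen as 1D discrete Laplacians (an illustrative choice) and a rank-1 right-hand side C, the singular values of the Sylvester solution X decay rapidly, so X admits an accurate low-rank approximation even though it is full-rank.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Illustrative setup: A, B are 1D discrete Laplacians (tridiagonal),
# a common model problem; C is rank-1.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = A.copy()
u = np.random.default_rng(0).standard_normal((n, 1))
C = u @ u.T                       # rank-1 right-hand side

# SciPy's solve_sylvester solves AX + XB = Q, so pass Q = -C
# to solve AX + XB + C = 0.
X = solve_sylvester(A, B, -C)

# The singular values of X decay rapidly, even though X is full-rank.
s = np.linalg.svd(X, compute_uv=False)
print(s[:10] / s[0])
```

The rate of decay is governed by rational (Zolotarev-type) approximation problems on the spectra of A and -B, which is the connection reviewed below.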
We briefly review some of these results, and their connection with certain
rational approximation problems. We show that these structures appear in a
multitude of interesting cases. For instance, the same properties can be found
when the coefficients of a linear matrix equation are modified by low-rank
updates, and the theory can be extended to more general matrices with
low-rank off-diagonal blocks.
Time permitting, we discuss some recent developments that allow us to prove
the existence of low-rank structure in the action of functions of matrices that
have a Kronecker sum structure, such as x = f(I⊗A + B^T⊗I)v, for a particularly
structured vector v. An efficient algorithm for the approximation of the vector
x, reshaped in matrix form, is given. This finds applications in extending
the ideas used for the Laplace operator and 2D PDEs to more general nonlocal
operators, such as the fractional Laplacian.
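As a minimal illustration (my own sketch, not the algorithm of the talk), the simplest instance of this setting is f(z) = 1/z: evaluating x = f(I⊗A + B^T⊗I)v with v = vec(C) for a low-rank C is exactly the Sylvester equation AX + XB = C in vectorized form, so the matrix reshaping of x inherits the low-rank structure discussed above.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
n = 50
# Illustrative coefficients: 1D discrete Laplacians.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = A.copy()
u, w = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
C = u @ w.T                        # rank-1 "structured vector" source
v = C.flatten(order="F")           # v = vec(C), column-major vec

# Kronecker-sum matrix: (I⊗A + B^T⊗I) vec(X) = vec(AX + XB).
M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))

# f(z) = 1/z applied to the Kronecker sum, acting on v.
x_dense = np.linalg.solve(M, v)

# The same x, computed by solving the n-by-n Sylvester equation instead
# of the n^2-by-n^2 linear system.
X = solve_sylvester(A, B, C)       # AX + XB = C
assert np.allclose(x_dense, X.flatten(order="F"))
```

For more general f (such as fractional powers arising from the fractional Laplacian), the reshaped x is no longer given by a single Sylvester solve, but, as the abstract indicates, it can still be approximated efficiently in low-rank form.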