Research
Sparsitute is organized as an integrated effort focusing on three pillars:
sparse matrices, sparse networks, and sparse tensors.
[Figure: a power network graph (top) and its associated matrix structure (bottom).]
Sparsitute treats sparse matrices in a broad sense. They can come from discretized PDEs for modeling and simulation, which have structural sparsity; from data analysis, machine learning, and AI, such as kernel matrices and covariance matrices, which have low-rank data sparsity; or from highly irregularly structured networks, such as transportation and biological networks. A commonality is that all of them can be represented with asymptotically fewer than N^2 nonzero elements.
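As a minimal illustration of this commonality (using scipy.sparse purely for convenience; it is not part of Sparsitute's software), the sketch below stores the tridiagonal matrix of a 1-D discretized PDE in compressed sparse row (CSR) format, where memory scales with the number of nonzeros rather than with N^2:

```python
import numpy as np
import scipy.sparse as sp

# Build a sparse tridiagonal matrix (typical of a 1-D discretized PDE).
N = 10_000
main = 2.0 * np.ones(N)
off = -1.0 * np.ones(N - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

# CSR stores only the nonzeros: values, column indices, and row pointers.
# Storage is O(nnz + N), far below the N*N = 10^8 entries of a dense array.
print(A.nnz)                                                # 29998 nonzeros
print(A.data.nbytes + A.indices.nbytes + A.indptr.nbytes)   # a few hundred KB
```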
The goals of the sparse matrix pillar are to:
- develop unified matrix basis representations and the associated kernels and primitives to support a variety of high-level algebraic operations;
- develop new algebraic solvers and preconditioners, particularly targeting various numerical optimization problems (a solver sketch follows this list);
- develop sparse linear algebra algorithms to speed up training in AI/ML applications; and
- perform communication lower-bound analysis for the above sparse operations and develop new communication-efficient algorithms.
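As a hedged sketch of the solver-plus-preconditioner pairing this pillar targets, the example below solves a sparse 2-D Laplacian system with GMRES accelerated by an incomplete LU preconditioner; the specific choices (scipy's spilu and gmres) are illustrative assumptions, not Sparsitute's algorithms:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse system from a 2-D Laplacian five-point stencil on an n-by-n grid.
n = 100
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization used as a preconditioner M ~ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

# Preconditioned GMRES typically converges in far fewer iterations
# than the unpreconditioned iteration on the same system.
x, info = spla.gmres(A, b, M=M)
print(info)  # 0 indicates successful convergence
```

The preconditioner trades a one-time approximate factorization for a large reduction in iteration count, the kind of cost balance that solver and preconditioner research weighs.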
Tensors model multi-way relationships, such as interactions among sets of three or more entities. When these relationships are rare or rarely observed, the tensor that represents the relationships is sparse. In some applications, the tensor represents data, and in other applications, the tensor represents a multi-linear operator. The goals of the sparse tensor pillar are to
- develop new algorithms for decomposition and completion of tensors;
- pursue new algorithms and software for solving tensor-structured systems of equations; and
- optimize a set of key computational kernels that underlie scalable algorithms for sparse tensors (one such kernel is sketched below).
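For example, the matricized-tensor-times-Khatri-Rao-product (MTTKRP) dominates the cost of sparse CP decomposition; choosing it here is our assumption of a representative kernel, not a statement of the pillar's plan. A minimal NumPy sketch over a third-order tensor in COO form:

```python
import numpy as np

def mttkrp_mode0(indices, values, B, C):
    """Mode-0 MTTKRP for a sparse third-order tensor stored in COO form.

    indices : (nnz, 3) integer array of (i, j, k) coordinates
    values  : (nnz,) nonzero values
    B, C    : factor matrices of shapes (J, R) and (K, R)
    Returns M of shape (I, R) with M[i, r] = sum over nonzeros (i, j, k)
    of x_ijk * B[j, r] * C[k, r].
    """
    i, j, k = indices.T
    contrib = values[:, None] * B[j] * C[k]   # one length-R row per nonzero
    M = np.zeros((i.max() + 1, B.shape[1]))
    np.add.at(M, i, contrib)                  # scatter-add rows with repeated i
    return M

# Tiny example: a 3x3x3 tensor with four nonzeros and rank-2 factors.
idx = np.array([[0, 1, 2], [1, 0, 0], [2, 2, 1], [0, 0, 0]])
val = np.array([1.0, 2.0, -1.0, 0.5])
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 2))
C = rng.standard_normal((3, 2))
print(mttkrp_mode0(idx, val, B, C))
```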
Modeling connected and sparse data often involves graphs, hypergraphs, and simplicial complexes; we use the term “networks” to refer to this family of models for studying more complicated systems. The goal of this pillar is to advance our understanding of what happens when these tools are used:
- where problems do not reduce to matrices and tensors over the real or complex fields, such as in topological data analysis;
- where such reductions lead to inefficient runtimes or limit the potential for parallelization;
- where the meaning of the data, e.g., temporal data, requires custom or specialized approaches; or
- where model flexibility allows us to induce sparsity in the models themselves, as in machine learning (illustrated in the sketch below).
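To make the last point concrete, a textbook way to induce sparsity in a learned model is l1 (lasso) regularization, solved below with iterative soft-thresholding (ISTA). This is a generic illustration, not a Sparsitute method; all names and parameters are invented for the example:

```python
import numpy as np

def ista_lasso(A, b, lam, steps=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with ISTA.

    The l1 penalty drives many coefficients exactly to zero,
    so the fitted model is sparse.
    """
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)          # gradient of the smooth term
        z = x - g / L                  # plain gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Recover a 5-sparse signal in 100 dimensions from 40 noisy measurements.
rng = np.random.default_rng(1)
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
A = rng.standard_normal((40, 100))
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista_lasso(A, b, lam=0.1)
print(np.count_nonzero(np.abs(x_hat) > 1e-6))  # only a handful of nonzeros
```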