Universität Wien

250085 VO Tensor methods for data science and scientific computing (2022W)

6.00 ECTS (4.00 SWS), SPL 25 - Mathematik
ON-SITE

Registration/Deregistration

Note: The time of your registration within the enrolment period has no effect on the allocation of places (no "first come, first served").

Details

max. 25 participants
Language: English


Dates (iCal) - the next date is marked with an N

  • Tuesday 04.10. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 05.10. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 11.10. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 12.10. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 18.10. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 19.10. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 25.10. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 08.11. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 09.11. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 15.11. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 16.11. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 22.11. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 23.11. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 29.11. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 30.11. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 06.12. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 07.12. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 13.12. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 14.12. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 10.01. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 11.01. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 17.01. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 18.01. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 24.01. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Wednesday 25.01. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor
  • Tuesday 31.01. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2nd floor

Information

Aims, contents and method of the course

This course will cover the basics of low-rank tensor decompositions, a modern computational tool for large-scale problems. Possible applications, which will be discussed in the course, arise in such areas as data science, quantitative neuroscience, spectroscopy, psychometrics, arithmetic complexity and data compression; however, some of the most illustrative applications belong to the field of scientific computing. The course will first cover the canonical polyadic, Tucker, block-term and tensor-train decompositions from a linear-algebraic perspective and then focus on the use of low-rank tensor decompositions in computational mathematics. In the second part, the course will focus on the tensor-train (TT) decomposition, originally developed under the name of matrix product states (MPS) in computational quantum physics. This tensor decomposition appears naturally as a representation of functions obtained by low-rank refinement in the construction of finite-element approximations and will be presented in this way in the course. In particular, in the context of second-order linear elliptic problems, the low-rank approximation of functions, depending on their regularity, will be analyzed, and state-of-the-art methods for preconditioning and solving the optimality equations (linear systems) will be covered, including the construction, implementation and numerical analysis of such methods.

***

This course spotlights the intersection of two areas of modern applied mathematics:
* low-rank approximation and analysis of abstract data represented by multi-dimensional arrays, and
* adaptive numerical methods for solving PDE problems.
For these two areas, however disjoint they may seem, the idea of exactly representing or approximating «data» in a suitable low-dimensional subspace of a large (possibly infinite-dimensional) space is equally natural. The notions of matrix rank and of low-rank matrix approximation, presented in basic courses on linear algebra, are central to one of many possible expressions of this idea.
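As a concrete illustration of this idea in the matrix case (a minimal sketch, not part of the course materials; all sizes and the rank are arbitrary choices): by the Eckart-Young theorem, the best rank-r approximation of a matrix in the Frobenius norm is obtained by truncating its singular value decomposition.

```python
import numpy as np

# A random matrix to approximate; sizes are arbitrary for this illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))

# Truncated SVD: keep the r dominant singular triplets.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 10
A_r = U[:, :r] * s[:r] @ Vt[:r, :]   # best rank-r approximation (Frobenius norm)

# The approximation error equals the norm of the discarded singular values.
err = np.linalg.norm(A - A_r)
assert np.isclose(err, np.linalg.norm(s[r:]))
```

Storing the truncated factors costs (100 + 80) · 10 numbers instead of 100 · 80, which is the basic economy that tensor decompositions extend to higher dimensions.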

In psychometrics, signal processing, image processing and (vaguely defined) data mining, low-rank tensor decompositions have been studied as a way of formally generalizing the notion of rank from matrices to higher-dimensional arrays (tensors). Several such generalizations have been proposed, including the canonical polyadic (CP) and Tucker decompositions and the tensor-SVD, with the primary motivation of analyzing, interpreting and compressing datasets. In this context, data are often thought of as parametrizations of images, video, social networks or collections of interconnected texts; on the other hand, data representing functions (which often occur in computational mathematics) are remarkable in that they admit precise analysis.
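To make the CP format concrete, here is a small hypothetical sketch (the sizes and rank are arbitrary choices, not from the course): a third-order tensor of CP rank at most R is a sum of R rank-one terms built from three factor matrices, and each of its unfoldings then has matrix rank at most R.

```python
import numpy as np

# Factor matrices of a CP representation T = sum_r a_r (x) b_r (x) c_r.
rng = np.random.default_rng(1)
R, I, J, K = 3, 4, 5, 6
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Assemble the full tensor as a sum of R outer products.
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Every unfolding of a rank-R CP tensor has matrix rank at most R.
T1 = T.reshape(I, J * K)
assert np.linalg.matrix_rank(T1) <= R
```

Storing the factors costs (I + J + K) · R numbers instead of I · J · K for the full array, which is the compression motivating these decompositions.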

The tensor-train (TT) and the more general hierarchical Tucker decompositions were developed in the numerical-mathematics community, more recently and with particular attention to PDE problems. In fact, the same and very similar representations had long been used for the numerical simulation of many-body quantum systems by computational chemists and physicists under the names of «matrix product states» (MPS) and «multilayer multi-configuration time-dependent Hartree». These low-rank tensor decompositions are based on subspace approximation, which can be performed adaptively and iteratively, in a multilevel fashion. In the broader context of PDE problems, this leads to numerical methods that are formally based on generic discretizations but effectively operate on adaptive, data-driven discretizations constructed «online», in the course of computation. In several settings, such methods achieve the accuracy of sophisticated problem-specific methods.
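The subspace approximation underlying the TT/MPS format can be sketched with the classical TT-SVD procedure, which computes the cores by sequential truncated SVDs of unfoldings. The following minimal NumPy implementation is an illustration under an ad hoc truncation tolerance, not the course's reference code.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Sketch of TT-SVD: sequential truncated SVDs of unfoldings yield the
    tensor-train (TT, a.k.a. MPS) cores G_1, ..., G_d of a d-way tensor."""
    shape, d = T.shape, T.ndim
    cores, r_prev = [], 1
    M = T.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps)))            # truncation rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        # Carry the remainder to the next mode.
        M = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, shape[-1], 1))
    return cores

# Decompose a small random tensor and contract the cores back together.
rng = np.random.default_rng(2)
T = rng.standard_normal((3, 4, 5))
cores = tt_svd(T)
Rec = cores[0]
for G in cores[1:]:
    Rec = np.tensordot(Rec, G, axes=1)   # contract rank index with next core
Rec = Rec.reshape(T.shape)               # drop leading/trailing rank-1 axes
```

With a tolerance this tight the reconstruction is exact up to rounding; replacing `eps` by a larger threshold turns the same loop into a quasi-optimal low-rank approximation.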

***

The goal of the course is to introduce students to the foundations of modern low-rank tensor methods.
The course also aims to provide students with ample opportunities to start their own research.

Type of assessment and permitted materials

Oral examination with no aids («closed book»). Bonus points may be awarded for active participation and for work on optional projects and assignments.

Minimum requirements and assessment criteria

Examination topics

The theory and practice of the techniques covered in the course, as presented in the course.


Association in the course directory

MAMV

Last change: Mon 17.04.2023 11:49