Universität Wien
Note: The course offerings are not yet complete and will be updated continually until the beginning of the semester.

250081 VU Tensor Methods for Data Science and Scientific Computing (2024W)

7.00 ECTS (4.00 SWS), SPL 25 - Mathematik
Continuous-assessment course

Registration/Deregistration

Note: The time of your registration within the registration period has no effect on the allocation of places (no "first come, first served").

Details

Max. 25 participants
Language: English

Lecturers

Dates (iCal) - the next date is marked with N

The course is organized in sessions of two types.

(i) LECTURE SESSIONS (typically three academic hours a week)
will cover mostly theoretical material.
The lectures will consist of a comprehensive, chalkboard-style presentation of the theoretical material.

(ii) EXERCISE SESSIONS (typically one academic hour a week)
will revisit the methods and techniques covered in (i),
focusing on their practical aspects and implementation
as well as on homework assignments.
Relevant demonstration code will be made available to registered students via Moodle.

  • Thursday 03.10. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 07.10. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 10.10. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 14.10. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 17.10. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 21.10. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 24.10. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 28.10. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 31.10. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 04.11. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 07.11. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 11.11. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 14.11. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 18.11. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 21.11. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 25.11. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 28.11. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 02.12. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 05.12. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 09.12. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 12.12. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 16.12. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 09.01. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 13.01. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 16.01. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 23.01. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Monday 27.01. 11:30 - 13:00 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor
  • Thursday 30.01. 09:45 - 11:15 Seminar Room 9 Oskar-Morgenstern-Platz 1 2nd floor

Information

Aims, contents and method of the course

This course is a rigorous introduction to several key techniques for the low-rank approximation of tensors (multidimensional arrays), aimed at providing students with basic knowledge and ample opportunity for starting their own research in the area. Possible applications, which will be discussed in the course, come from areas such as data science and machine learning, quantitative neuroscience, spectroscopy, psychometrics, arithmetic complexity and data compression. Some of the most illustrative applications, however, belong to the field of scientific computing.

The course will first cover the canonical polyadic, Tucker and tensor-train decompositions of multidimensional arrays from a linear-algebraic perspective and then focus on the use of low-rank tensor decompositions in computational mathematics. In the second part, the course will focus on the tensor-train (TT) decomposition, originally developed under the name of matrix product states (MPS) in computational quantum physics. This decomposition arises naturally, as a representation of functions, from low-rank refinement in the construction of finite-element approximations. This is crucial in the context of PDE problems: the low-rank approximability of functions will be analyzed in dependence on their regularity, and state-of-the-art methods for preconditioning and solving the resulting optimality equations (linear systems) will be covered, including the construction, implementation and numerical analysis of such methods. Homework assignments will involve theoretical and implementation tasks. For implementation tasks, each student may use any programming language or environment; the lecturer will present solutions as Jupyter notebooks in Julia.
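As a flavor of the material (an illustrative sketch, not course material; the course's own demonstrations are in Julia, and the function names below are invented for this example), the TT decomposition mentioned above can be computed by successive truncated SVDs of unfoldings, a procedure commonly known as TT-SVD; a minimal NumPy version:

```python
import numpy as np

def tt_svd(a, tol=1e-12):
    """TT-SVD: decompose a d-dimensional array into tensor-train cores.

    Core k has shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1.
    """
    dims = a.shape
    cores = []
    r_prev = 1
    c = a.copy()
    for k in range(len(dims) - 1):
        c = c.reshape(r_prev * dims[k], -1)         # unfold: processed indices vs. the rest
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))     # truncate small singular values
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        c = s[:r, None] * vt[:r]                    # carry the remainder to the next step
        r_prev = r
    cores.append(c.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full array (for verification)."""
    t = cores[0]
    for core in cores[1:]:
        t = np.tensordot(t, core, axes=([-1], [0]))
    return t.squeeze(axis=(0, -1))
```

For an exactly low-rank input, e.g. a rank-one array built as an outer product, the computed TT ranks are all one and the reconstruction is exact up to rounding.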

***
The course spotlights the interplay of two areas of modern applied mathematics:
* low-rank approximation and analysis of abstract data represented by multidimensional arrays and
* adaptive numerical methods for solving PDE problems.

In psychometrics, signal processing, image processing and data mining, low-rank tensor decompositions have been studied as a way of formally generalizing the notion of rank from matrices to higher-dimensional arrays (tensors). Several such generalizations have been proposed, including the canonical polyadic (CP) and Tucker decompositions and the tensor-SVD, with the primary motivation of analyzing, interpreting and compressing datasets. In this context, data are often thought of as parametrizations of images, video, social networks or collections of interconnected texts; data representing functions, on the other hand, which often occur in computational mathematics, are remarkable in that they admit precise analysis.
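To make the Tucker decomposition mentioned above concrete, here is a minimal sketch of its classical orthogonal variant, the higher-order SVD (HOSVD), in NumPy (the function names are invented for this illustration; the course will treat the topic rigorously):

```python
import numpy as np

def hosvd(a):
    """Higher-order SVD (HOSVD): a Tucker decomposition with orthonormal factors."""
    factors = []
    for k in range(a.ndim):
        unfolding = np.moveaxis(a, k, 0).reshape(a.shape[k], -1)  # mode-k unfolding
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u)
    # Core tensor: contract each mode of `a` with the transpose of its factor.
    g = a
    for k, u in enumerate(factors):
        g = np.moveaxis(np.tensordot(u.T, np.moveaxis(g, k, 0), axes=1), 0, k)
    return g, factors

def tucker_to_full(g, factors):
    """Contract the core with the factor matrices to recover the full array."""
    t = g
    for k, u in enumerate(factors):
        t = np.moveaxis(np.tensordot(u, np.moveaxis(t, k, 0), axes=1), 0, k)
    return t
```

Truncating the columns of the factor matrices yields the compressed representations used in data analysis; without truncation the decomposition is exact.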

The tensor-train (TT) and the more general hierarchical Tucker decompositions were developed more recently, in the numerical-mathematics community and with particular attention to PDE problems. In fact, the very same and closely related representations had long been used for the numerical simulation of many-body quantum systems by computational chemists and physicists under the names of «matrix product states» (MPS) and «multilayer multi-configuration time-dependent Hartree». These low-rank tensor decompositions are based on subspace approximation, which can be performed adaptively and iteratively, in a multilevel fashion. In the broader context of PDE problems, this leads to numerical methods that are formally based on generic discretizations but effectively operate on adaptive, data-driven discretizations constructed «online», in the course of the computation. In several settings, such methods achieve the accuracy of sophisticated problem-specific methods.
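The claim that data representing functions admit remarkably accurate low-rank representations can be checked numerically in a simple case: sampling exp(x) on a dyadic grid and folding the samples into a 2 × 2 × … × 2 array (a "quantized" reshaping of the kind used with TT in the PDE context) yields unfoldings of rank one, because exp factorizes over the binary digits of the grid index. A small self-contained check (the variable names here are illustrative):

```python
import numpy as np

# Sample f(x) = exp(x) at 2**d equispaced points and fold into a 2 x 2 x ... x 2 array.
d = 10
x = np.linspace(0.0, 1.0, 2**d)
a = np.exp(x).reshape((2,) * d)

# Rank of every unfolding (first k binary indices vs. the rest): since
# exp(i*h) = prod_k exp(h * 2**(d-1-k) * b_k) over the bits b_k of i,
# each unfolding matrix is an outer product and has rank 1.
ranks = [np.linalg.matrix_rank(a.reshape(2**k, -1)) for k in range(1, d)]
print(ranks)
```

A generic (e.g. random) array of the same shape would instead exhibit maximal unfolding ranks; the gap between the two is exactly what low-rank tensor methods exploit.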

Type of assessment and permitted aids

Homework assignments and either (i) an oral examination with no aids («closed book») or (ii) an individual project.

Minimum requirements and assessment criteria

Examination topics

The theory and practice of the techniques covered in the course, as presented in the course.

Literature


Classification in the course catalogue

MAMV

Last change: Sun 29.09.2024 13:46