250085 VO Tensor methods for data science and scientific computing (2022W)
Labels
ON-SITE
Registration/Deregistration
Note: The time of your registration within the registration period has no effect on the allocation of places (no first come, first served).
Details
max. 25 participants
Language: English
Examination dates
- Friday 27.01.2023
- Wednesday 01.02.2023
- Monday 06.02.2023
- Wednesday 08.03.2023
- Thursday 09.03.2023
- Monday 17.04.2023
Lecturers
Classes
- Tuesday 04.10. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 05.10. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 11.10. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 12.10. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 18.10. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 19.10. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 25.10. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 08.11. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 09.11. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 15.11. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 16.11. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 22.11. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 23.11. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 29.11. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 30.11. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 06.12. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 07.12. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 13.12. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 14.12. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 10.01. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 11.01. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 17.01. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 18.01. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 24.01. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Wednesday 25.01. 11:30 - 13:00 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
- Tuesday 31.01. 15:00 - 16:30 Seminarraum 7, Oskar-Morgenstern-Platz 1, 2. Stock
Information
Aims, contents and method of the course
Assessment and permitted materials
Oral examination with no aids («closed book»). Bonus points may be awarded for active participation and for work on optional projects and assignments.
Minimum requirements and assessment criteria
Examination topics
The theory and practice of the techniques covered in the course, as presented in the lectures.
Reading list
Association in the course directory
MAMV
Last modified: Mon 17.04.2023 11:49
The course is devoted to two areas:
- low-rank approximation and analysis of abstract data represented by multi-dimensional arrays, and
- adaptive numerical methods for solving PDE problems.
For these two areas, seemingly disjoint though they are, the idea of exactly representing or approximating «data» in a suitable low-dimensional subspace of a large (possibly infinite-dimensional) space is equally natural. The notions of matrix rank and of low-rank matrix approximation, presented in basic courses on linear algebra, are central to one of many possible expressions of this idea.

In psychometrics, signal processing, image processing and the (vaguely defined) field of data mining, low-rank tensor decompositions have been studied as a way of formally generalizing the notion of rank from matrices to higher-dimensional arrays (tensors). Several such generalizations have been proposed, including the canonical polyadic (CP) and Tucker decompositions and the tensor-SVD, with the primary motivation of analyzing, interpreting and compressing datasets. In this context, data are often thought of as parametrizations of images, video, social networks or collections of interconnected texts; on the other hand, data representing functions (which often occur in computational mathematics) are remarkable in that they admit precise analysis.

The tensor-train (TT) and the more general hierarchical Tucker decompositions were developed more recently, in the numerical-mathematics community and with particular attention to PDE problems. In fact, exactly the same and very similar representations had long been used for the numerical simulation of many-body quantum systems by computational chemists and physicists, under the names of «matrix-product states» (MPS) and «multilayer multi-configuration time-dependent Hartree». These low-rank tensor decompositions are based on subspace approximation, which can be performed adaptively and iteratively, in a multilevel fashion.
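To make the TT format concrete, here is a minimal NumPy sketch (an illustration only, not part of the course materials) of the basic TT-SVD scheme: successive truncated SVDs of matrix unfoldings produce the three-dimensional TT cores, and contracting the cores recovers the full array.

```python
import numpy as np

def tt_svd(a, eps=1e-10):
    """Minimal TT-SVD sketch: decompose an ndarray into tensor-train
    cores G_k of shape (r_{k-1}, n_k, r_k) via successive truncated SVDs.
    For illustration only; not an optimized implementation."""
    d = a.ndim
    cores = []
    r = 1
    m = a.reshape(1, -1)
    for k in range(d - 1):
        m = m.reshape(r * a.shape[k], -1)          # unfold: (r * n_k) x (rest)
        u, s, vt = np.linalg.svd(m, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))   # truncation rank
        cores.append(u[:, :rk].reshape(r, a.shape[k], rk))
        m = s[:rk, None] * vt[:rk]                 # carry the remainder forward
        r = rk
    cores.append(m.reshape(r, a.shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full array."""
    full = cores[0].reshape(-1, cores[0].shape[-1])
    for c in cores[1:]:
        full = full @ c.reshape(c.shape[0], -1)    # contract shared rank index
        full = full.reshape(-1, c.shape[-1])
    return full.reshape([c.shape[1] for c in cores])

# A tensor of low TT rank: t[i, j, k] = i + j + k has all TT ranks equal to 2.
i, j, k = np.meshgrid(np.arange(8.), np.arange(8.), np.arange(8.), indexing="ij")
t = i + j + k
cores = tt_svd(t)
assert np.allclose(tt_to_full(cores), t)
print([c.shape for c in cores])  # small cores, e.g. (1, 8, 2), (2, 8, 2), (2, 8, 1)
```

The point of the format is visible in the core shapes: an 8x8x8 tensor with 512 entries is stored exactly in cores with far fewer parameters, and the ranks are discovered adaptively by the truncated SVDs.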
In the broader context of PDE problems, this leads to numerical methods that are formally based on generic discretizations but effectively operate on adaptive, data-driven discretizations constructed «online», in the course of computation. In several settings, such methods achieve the accuracy of sophisticated problem-specific methods.

The goal of the course is to introduce students to the foundations of modern low-rank tensor methods.
The course also aims to give students ample opportunity to start their own research.
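The compressibility of function-generated data mentioned above can be seen in a few lines (again a sketch, not course material): samples of a smooth function on a grid form a matrix whose singular values decay rapidly, so a truncated SVD, which by the Eckart-Young theorem is the best low-rank approximation in the Frobenius norm, compresses it with high accuracy.

```python
import numpy as np

# Samples of the smooth function f(x, y) = 1 / (1 + x + y) on a uniform grid.
# Such function-generated matrices typically have rapidly decaying singular
# values, so low-rank compression is very accurate.
n = 200
x = np.linspace(0.0, 1.0, n)
a = 1.0 / (1.0 + x[:, None] + x[None, :])

u, s, vt = np.linalg.svd(a, full_matrices=False)
r = 10
a_r = (u[:, :r] * s[:r]) @ vt[:r]   # truncated SVD: best rank-r approximation

rel_err = np.linalg.norm(a - a_r) / np.linalg.norm(a)
storage = (r * (2 * n + 1)) / (n * n)   # parameters stored vs. full matrix
print(f"rank {r}: relative error {rel_err:.1e}, storage ratio {storage:.1%}")
```

Storing the rank-10 factors takes r(2n + 1) = 4010 numbers instead of n^2 = 40000, at high accuracy; the tensor decompositions treated in the course extend this principle to arrays of many dimensions.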