Dr. Thomas Mach

Research Associate

Contact
Room: 2.29.2.05
Phone: 0331 977-230182
...

Office Hours

Please make an appointment by e-mail: mach@uni...

Research interests

  • Numerical linear algebra for large or structured matrices, in particular iterative methods for eigenvalue problems and linear systems
  • Krylov subspace methods
  • Data analysis
  • Inverse and ill-posed problems
  • Adaptive cross approximation

 

Publications @ Potsdam


Can one hear the depth of the water?

Authors: M.A. Freitag, P. Kriz, T. Mach, J. M. Nicolaus (2023)

We discuss discrete-time dynamical systems depending on a parameter μ. Assuming that the system matrix A(μ) is given, but the parameter μ is unknown, we infer the most likely parameter μₘ ≈ μ from an observed trajectory x of the dynamical system. We use parametric eigenpairs (vᵢ(μ), λᵢ(μ)) of the system matrix A(μ) computed with Newton's method based on a Chebyshev expansion. We then represent x in the eigenvector basis defined by the vᵢ(μ) and compare the decay of the components with predictions based on the λᵢ(μ). The resulting estimates for μ are combined using a kernel density estimator to find the most likely value for μₘ and a corresponding uncertainty quantification.

Journal: Proceedings in Applied Mathematics and Mechanics
Pages: e202300122
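
To give a concrete feel for the inference step described in the abstract above, here is a minimal Python sketch of the general idea, not the authors' code: it assumes a hypothetical diagonal A(μ) with eigenvalues known in closed form, inverts the observed componentwise decay ratios on a parameter grid, and combines the resulting estimates with scipy's gaussian_kde. The model, noise level, and grid are illustrative choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical toy system: A(mu) is diagonal, so the eigenvalues are known in
# closed form and the eigenvector basis is the standard basis.
def eigenvalues(mu):
    return np.array([0.9 - 0.1 * mu, 0.7 + 0.05 * mu, 0.5 * np.exp(-mu)])

mu_true = 0.4
lam_true = eigenvalues(mu_true)

# One observed trajectory x_{k+1} = A(mu) x_k with a little measurement noise.
steps = 6
traj = np.array([lam_true**k for k in range(steps + 1)])
traj += 1e-4 * rng.standard_normal(traj.shape)

# Each componentwise decay ratio approximates lambda_i(mu); invert it on a grid
# of candidate parameters to get one estimate of mu per observed ratio.
mu_grid = np.linspace(0.0, 1.0, 4001)
lam_grid = eigenvalues(mu_grid)              # shape (3, len(mu_grid))
estimates = []
for i in range(3):
    ratios = traj[1:, i] / traj[:-1, i]
    for r in ratios:
        estimates.append(mu_grid[np.argmin(np.abs(lam_grid[i] - r))])

# Combine the estimates with a kernel density estimator; the mode is the most
# likely parameter, and the spread indicates the uncertainty.
kde = gaussian_kde(estimates)
mu_hat = mu_grid[np.argmax(kde(mu_grid))]
print(f"true mu = {mu_true}, estimated mu = {mu_hat:.3f}")
```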


Tikhonov regularization in general form with Chebfun

Authors: A. Alqahtani, T. Mach, L. Reichel (2023)

Linear ill-posed problems often are analyzed in function spaces using tools from functional analysis, while their numerical solution typically is computed by first discretizing the problem and then applying tools from finite-dimensional linear algebra. The Chebfun package makes it possible to solve linear ill-posed problems without explicit discretization. This work discusses the use of Tikhonov regularization with a fairly general linear regularization operator within the Chebfun framework.

 

Journal: PAMM
Pages: e202200041, 6 pages
Volume: 22(1)
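
As a rough illustration of Tikhonov regularization in general form (on a plain matrix discretization rather than in the paper's Chebfun setting), the following sketch solves min ||Ax - b||² + λ²||Lx||² as a stacked least-squares problem with a first-difference operator L; the test problem and noise level are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical ill-conditioned test problem: a discretized smoothing kernel.
t = np.linspace(0, 1, n)
A = np.exp(-50.0 * (t[:, None] - t[None, :])**2) / n
x_true = np.sin(np.pi * t) + 0.5 * np.sin(3 * np.pi * t)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# General-form regularization operator: a first-difference matrix, which
# penalizes non-smooth solutions instead of solutions with large norm.
L = np.diff(np.eye(n), axis=0)               # shape (n-1, n)

def tikhonov(A, b, L, lam):
    # Solve the stacked least-squares problem [A; lam*L] x ~ [b; 0].
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    return np.linalg.lstsq(K, rhs, rcond=None)[0]

for lam in [1e-6, 1e-3, 1e-1]:
    x_lam = tikhonov(A, b, L, lam)
    rel_err = np.linalg.norm(x_lam - x_true) / np.linalg.norm(x_true)
    print(f"lambda = {lam:.0e}, relative error = {rel_err:.3f}")
```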


Solution of Ill-posed Problems with Chebfun

Authors: A. Alqahtani, T. Mach, L. Reichel (2023)

The analysis of linear ill-posed problems often is carried out in function spaces using tools from functional analysis. However, the numerical solution of these problems typically is computed by first discretizing the problem and then applying tools from finite-dimensional linear algebra. The present paper explores the feasibility of applying the Chebfun package to solve ill-posed problems with a regularize-first approach numerically. This allows a user to work with functions instead of vectors and with integral operators instead of matrices. The solution process therefore is much closer to the analysis of ill-posed problems than standard linear algebra-based solution methods. Furthermore, the difficult process of explicitly choosing a suitable discretization is not required.

 

Journal: Numerical Algorithms
Publisher: Springer
Pages: 2341–2364
Volume: 92
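
A small sketch of why a regularize-first approach is needed for such problems, using an ordinary quadrature discretization and truncated SVD instead of the paper's Chebfun machinery; the kernel, right-hand side, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
s = np.linspace(0, 1, n)

# Smooth kernel k(s, t) = exp(-(s - t)^2); its discretization has rapidly
# decaying singular values, which makes the problem ill-posed.
K = np.exp(-(s[:, None] - s[None, :])**2) / n
f_true = s * (1 - s)
g = K @ f_true + 1e-5 * rng.standard_normal(n)

U, sigma, Vt = np.linalg.svd(K)

# Naive inversion amplifies the noise in components with tiny singular values.
f_naive = Vt.T @ ((U.T @ g) / sigma)

# Regularize first: keep only the k largest singular values (truncated SVD).
k = 10
f_tsvd = Vt[:k].T @ ((U[:, :k].T @ g) / sigma[:k])

for name, f in [("naive", f_naive), ("TSVD", f_tsvd)]:
    print(name, np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```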


Solving the Parametric Eigenvalue Problem by Taylor Series and Chebyshev Expansion

Authors: T. Mach, M.A. Freitag (2023)

We discuss two approaches to solving the parametric (or stochastic) eigenvalue problem A(μ)v(μ) = λ(μ)v(μ). One of them uses a Taylor expansion and the other a Chebyshev expansion. The parametric eigenvalue problem assumes that the matrix A depends on a parameter μ, where μ might be a random variable. Consequently, the eigenvalues and eigenvectors are also functions of μ. We compute a Taylor approximation of these functions about μ₀ by iteratively computing the Taylor coefficients. The complexity of this approach is O(n³) for all eigenpairs, if the derivatives of A(μ) at μ₀ are given. The Chebyshev expansion works similarly. We first find an initial approximation iteratively, which we then refine with Newton's method. This second method is more expensive but provides a good approximation over the whole interval of the expansion instead of just around a single point.
 
We present numerical experiments confirming the complexity and demonstrating that the approaches are capable of tracking eigenvalues at intersection points.  Further experiments shed light on the limitations of the Taylor expansion approach with respect to the distance from the expansion point μ₀.

Journal: arXiv
Volume: 2302.03661
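
The following sketch shows only the simplest instance of the Taylor idea, not the paper's algorithm: a first-order Taylor approximation of a simple eigenvalue of a symmetric A(μ) = A₀ + μA₁, using the standard derivative formula λ'(μ₀) = v₀ᵀA₁v₀ for a normalized eigenvector v₀. The matrices are random test data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
A0 = rng.standard_normal((n, n)); A0 = (A0 + A0.T) / 2
A1 = rng.standard_normal((n, n)); A1 = (A1 + A1.T) / 2

def A(mu):
    # Assumed affine parameter dependence A(mu) = A0 + mu * A1.
    return A0 + mu * A1

mu0 = 0.0
w, V = np.linalg.eigh(A(mu0))
lam0, v0 = w[-1], V[:, -1]          # largest eigenvalue and its eigenvector
dlam = v0 @ A1 @ v0                 # first-order eigenvalue derivative at mu0

for mu in [0.05, 0.1, 0.2]:
    lam_exact = np.linalg.eigvalsh(A(mu))[-1]
    lam_taylor = lam0 + mu * dlam
    print(f"mu = {mu}: exact {lam_exact:.6f}, first-order Taylor {lam_taylor:.6f}")
```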


Adaptive Cross Approximation for Tikhonov Regularization in General Form

Authors: T. Mach, L. Reichel, M. Van Barel (2023)

Many problems in science and engineering give rise to linear integral equations of the first kind with a smooth kernel. Discretization of the integral operator yields a matrix, whose singular values cluster at the origin. We describe the approximation of such matrices by adaptive cross approximation, which avoids forming the entire matrix. The choice of the number of steps of adaptive cross approximation is discussed. The discretized right-hand side represents data that commonly are contaminated by measurement error. Solution of the linear system of equations so obtained is not meaningful because the matrix determined by adaptive cross approximation is rank-deficient. We remedy this difficulty by using Tikhonov regularization and discuss how a fairly general regularization matrix can be used. Computed examples illustrate that the use of a regularization matrix different from the identity can improve the quality of the computed approximate solutions significantly.

Journal: Numerical Algorithms
Publisher: Springer
Pages: 815–830
Volume: 92
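
For orientation, here is a bare-bones adaptive cross approximation with partial pivoting on a smooth-kernel matrix, accessing only individual entries; it is a generic textbook variant, not the algorithm analyzed in the paper, and the kernel is an assumed example.

```python
import numpy as np

n = 400
s = np.linspace(0, 1, n)

def entry(i, j):
    # Hypothetical smooth kernel; only individual entries are ever evaluated.
    return np.exp(-(s[i] - s[j])**2)

def row(i):
    return np.array([entry(i, j) for j in range(n)])

def col(j):
    return np.array([entry(i, j) for i in range(n)])

def aca(n, max_rank, tol=1e-10):
    # Build A ~ sum_k u_k v_k^T from a few rows and columns of the residual.
    us, vs = [], []
    i, used_rows = 0, set()
    for _ in range(max_rank):
        used_rows.add(i)
        r = row(i) - sum(u[i] * v for u, v in zip(us, vs))   # residual row i
        j = int(np.argmax(np.abs(r)))
        if abs(r[j]) < tol:                                  # pivot too small
            break
        v = r / r[j]
        u = col(j) - sum(v_[j] * u_ for u_, v_ in zip(us, vs))  # residual col j
        us.append(u)
        vs.append(v)
        # Next pivot row: largest entry of u among rows not used yet.
        candidates = [k for k in range(n) if k not in used_rows]
        i = max(candidates, key=lambda k: abs(u[k]))
    return np.array(us).T, np.array(vs)                      # A ~ U @ V

U, V = aca(n, max_rank=20)
A_full = np.exp(-(s[:, None] - s[None, :])**2)
print("rank:", U.shape[1],
      "relative error:", np.linalg.norm(A_full - U @ V) / np.linalg.norm(A_full))
```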


Hyperbolic Embedding for Finding Syntax in BERT (short paper)

Authors: T. Auyespek, T. Mach, Zh. Assylbekov (2022)

Recent advances in natural language processing have improved our understanding of what kind of linguistic knowledge is encoded in modern word representations. For example, Hewitt and Manning (2019) developed methods for testing the ability to extract syntax trees from a language model architecture: they project word vectors into a Euclidean subspace in such a way that the corresponding squared Euclidean distance approximates the tree distance between words in the syntax tree. This work proposes a method for assessing whether embedding word representations in hyperbolic space can better reflect the graph structure of syntax trees. We show that the tree distance between words in a syntax tree can be approximated well by the hyperbolic distance between corresponding word vectors.

Publisher: CEUR Workshop Proceedings
Book title: AIxIA 2021 Discussion Papers
Pages: 58–64
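
The hyperbolic distance used in such comparisons is the Poincaré-ball metric; the following sketch just implements that distance for hypothetical 2-D embeddings to show how points near the boundary are pushed far apart, which is what makes tree-like structure representable with low distortion. It is not the paper's trained probe.

```python
import numpy as np

def poincare_distance(u, v):
    # d(u, v) = arccosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2) (1 - ||v||^2)))
    uu = np.dot(u, u)
    vv = np.dot(v, v)
    duv = np.dot(u - v, u - v)
    return np.arccosh(1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv)))

# Hypothetical embeddings inside the unit disk: a root near the origin and two
# leaves near the boundary. Hyperbolic distances grow quickly near the boundary
# relative to the Euclidean ones.
root = np.array([0.0, 0.0])
child_a = np.array([0.8, 0.0])
child_b = np.array([0.79, 0.12])
print("root to leaf:", poincare_distance(root, child_a))
print("leaf to leaf:", poincare_distance(child_a, child_b))
```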


New matrix function approximations and quadrature rules based on the Arnoldi process

Authors: N. Eshghi, T. Mach, L. Reichel (2021)

The Arnoldi process can be applied to inexpensively approximate matrix functions of the form f(A)v and matrix functionals of the form v∗(f(A))∗g(A)v, where A is a large square non-Hermitian matrix, v is a vector, and the superscript ∗ denotes transposition and complex conjugation. Here f and g are analytic functions that are defined in suitable regions in the complex plane. This paper reviews available approximation methods and describes new ones that provide higher accuracy for essentially the same computational effort by exploiting available, but generally not used, moment information.
Numerical experiments show that in some cases the proposed modifications of the Arnoldi decomposition can improve the accuracy of v∗(f(A))∗g(A)v by about as much as performing an additional step of the Arnoldi process.

Journal: Journal of Computational and Applied Mathematics
Publisher: Elsevier
Pages: 113442
Volume: 391
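
As background, the following sketch implements the standard Arnoldi approximation f(A)v ≈ ‖v‖ V_m f(H_m) e₁ (here with f = exp), which is the baseline such methods build on; the test matrix, dimensions, and function are arbitrary choices for illustration, not the paper's refined approximations.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
n, m = 300, 25
A = rng.standard_normal((n, n)) / np.sqrt(n)     # non-Hermitian test matrix
v = rng.standard_normal(n)

def arnoldi(A, v, m):
    # Orthonormal basis V of the Krylov subspace K_m(A, v) and the upper
    # Hessenberg matrix H with A V_m = V_{m+1} H_{m+1,m}.
    V = np.zeros((len(v), m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

V, H = arnoldi(A, v, m)
approx = np.linalg.norm(v) * V[:, :m] @ expm(H[:m, :m])[:, 0]   # f(A) v estimate
exact = expm(A) @ v
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```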

A full list of all my publications can be found on my personal webpage.