MATH4UQ Seminar


The MATH4UQ research seminar features talks by internal and external colleagues and staff as well as by guests of the chair. Everyone interested is cordially welcome. If you would like to be notified automatically about upcoming research seminars, you can subscribe to our MATH4UQ seminar mailing list. Recordings of several past talks are also available on our MATH4UQ YouTube channel.

22.11.2022, Tuesday, 15:00 (CET)

  • Speaker: Prof. Raul Tempone, RWTH Aachen University and KAUST
  • Title: A simple approach to proving the existence, uniqueness, and strong and weak convergence rates for a broad class of McKean-Vlasov equations
  • Abstract: By employing a system of interacting stochastic particles as an approximation of the McKean–Vlasov equation and utilizing classical stochastic analysis tools, namely Itô's formula and the Kolmogorov–Chentsov continuity theorem, we prove the existence and uniqueness of strong solutions for a broad class of McKean–Vlasov equations as a limit of the conditional expectation of exchangeable particles. Considering an increasing number of particles in the approximating stochastic particle system, we also prove the L^p strong convergence rate and derive the weak convergence rates using the Kolmogorov backward equation and variations of the stochastic particle system. The convergence rates are verified by numerical experiments, which also indicate that the assumptions made here and in the literature can be relaxed.
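As a toy illustration of the particle approximation described in the abstract (not the speaker's actual construction), the following sketch simulates an interacting particle system for a simple mean-reverting McKean–Vlasov equation dX_t = (E[X_t] − X_t) dt + dW_t, replacing the mean-field term E[X_t] with the empirical ensemble mean. The drift, time horizon, and all parameters are illustrative assumptions:

```python
import numpy as np

def simulate_particle_system(n_particles=500, n_steps=100, T=1.0, seed=0):
    """Euler-Maruyama simulation of an interacting particle system
    approximating the toy McKean-Vlasov SDE
        dX_t = (E[X_t] - X_t) dt + dW_t,
    with the mean-field term replaced by the empirical mean."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = rng.standard_normal(n_particles)  # initial particles ~ N(0, 1)
    for _ in range(n_steps):
        empirical_mean = x.mean()         # empirical stand-in for E[X_t]
        drift = empirical_mean - x        # mean-reverting mean-field drift
        x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

particles = simulate_particle_system()
print(particles.mean(), particles.std())
```

Increasing `n_particles` reduces the statistical error of the empirical mean, which is the mechanism behind the strong convergence rates discussed in the talk.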

29.11.2022, Tuesday, 15:00 (CET)

  • Speaker: Dr. Truong-Vinh Hoang, Chair of Mathematics for Uncertainty Quantification, RWTH Aachen University
  • Title: A likelihood-free nonlinear filtering approach using a machine-learning-based approximation of the conditional expectation
  • Abstract: We discuss the machine-learning-based ensemble conditional mean filter (ML-EnCMF), developed for nonlinear data assimilation and based on the orthogonal projection of the conditional mean. The updated mean of the filter matches that of the posterior, and we show that the filter's updated covariance coincides with the expected conditional covariance. Implementing the EnCMF requires computing the conditional mean; a likelihood-based estimator is prone to significant errors for small ensemble sizes, causing filter divergence. We develop a systematic methodology for integrating machine learning into the EnCMF using the orthogonal-projection property of the conditional expectation. First, we approximate the conditional mean by a combination of an artificial neural network (ANN) and a linear function obtained from the ensemble Kalman filter (EnKF), enabling the ML-EnCMF to inherit the EnKF's advantages. Second, we apply a suitable variance-reduction technique to reduce statistical errors when estimating the loss function. Lastly, we propose a model-selection procedure for selecting the applied filter element-wise. We demonstrate the performance of the ML-EnCMF on the Lorenz-63 and Lorenz-96 systems and show that it outperforms the EnKF and the likelihood-based EnCMF.
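The linear (EnKF) component of the conditional-mean map can be sketched as follows; the ANN correction, variance reduction, and model selection from the talk are omitted, and the one-dimensional Gaussian toy problem is an illustrative assumption, not the speaker's test case:

```python
import numpy as np

def linear_conditional_mean_update(x_ens, y_ens, y_obs):
    """EnKF-style linear approximation of the conditional-mean map.
    x_ens: (n, dx) state ensemble; y_ens: (n, dy) predicted observations;
    y_obs: (dy,) actual observation. Returns the updated ensemble
    x_i + K (y_obs - y_i), whose mean approximates E[X | Y = y_obs]."""
    x_mean, y_mean = x_ens.mean(axis=0), y_ens.mean(axis=0)
    dx, dy = x_ens - x_mean, y_ens - y_mean
    n = x_ens.shape[0]
    c_xy = dx.T @ dy / (n - 1)         # cross-covariance Cov(X, Y)
    c_yy = dy.T @ dy / (n - 1)         # observation covariance Cov(Y)
    gain = c_xy @ np.linalg.inv(c_yy)  # Kalman gain K = C_xy C_yy^{-1}
    return x_ens + (y_obs - y_ens) @ gain.T

rng = np.random.default_rng(1)
x = rng.standard_normal((2000, 1))            # prior: X ~ N(0, 1)
y = x + 0.5 * rng.standard_normal((2000, 1))  # Y = X + noise, noise std 0.5
x_post = linear_conditional_mean_update(x, y, np.array([1.0]))
print(x_post.mean())  # approx. the exact Gaussian posterior mean 0.8
```

In this linear-Gaussian setting the linear map is already exact up to sampling error; the ANN term in the ML-EnCMF corrects the nonlinear residual that such a map cannot capture.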


06.12.2022, Tuesday, 15:00 (CET)

  • Speaker: Prof. Michael Feischl, TU Wien (Institute for Analysis and Scientific Computing)
  • Title: A quasi-Monte Carlo data compression algorithm for machine learning
  • Abstract: We present an algorithm to reduce large data sets using so-called digital nets, which are well-distributed point sets in the unit cube. The algorithm efficiently scans the data and computes certain data-dependent weights. Those weights are used to approximately represent the data, without making any assumptions on the distribution of the data points. Under smoothness assumptions on the model, we then show that this can be used to reduce the computational effort needed to find good parameters in machine learning problems that aim to minimize standard loss functions. While the principal idea of the approximation might also work with other point sets, the particular structural properties of digital nets can be exploited to make the computation of the necessary weights extremely fast.
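As a minimal one-dimensional illustration of the idea (using the base-2 van der Corput sequence, the simplest digital net, rather than the higher-dimensional nets and fast weight computation of the talk), the following sketch computes data-dependent weights by attributing each sample to its nearest net point. The attribution rule and all sizes are illustrative assumptions:

```python
import numpy as np

def van_der_corput(n):
    """First n points of the base-2 van der Corput sequence,
    a one-dimensional digital net (base-2 radical inverse)."""
    pts = np.empty(n)
    for i in range(n):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom /= 2.0
            x += (k & 1) * denom  # reflect binary digits about the point
            k >>= 1
        pts[i] = x
    return pts

def compress(data, m):
    """Represent `data` (values in [0, 1]) by m net points with
    data-dependent weights: each sample is attributed to its nearest
    net point; the weight is the attributed fraction of the data."""
    net = np.sort(van_der_corput(m))
    idx = np.abs(data[:, None] - net[None, :]).argmin(axis=1)
    weights = np.bincount(idx, minlength=m) / len(data)
    return net, weights

rng = np.random.default_rng(0)
data = rng.random(10_000)
net, w = compress(data, 64)
# The weighted sum over 64 net points approximates the 10,000-sample mean
print(w @ net, data.mean())
```

A loss evaluated on the 64 weighted net points then stands in for the full-data loss, which is the computational saving the abstract describes.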