MATH4UQ Seminar
The MATH4UQ research seminar features talks by internal and external colleagues and staff as well as by guests of the chair. Everyone interested is welcome. If you would like to be informed automatically about upcoming research seminars, you can subscribe to our MATH4UQ seminar mailing list. Recordings of various past talks are also available on our MATH4UQ YouTube channel.
21.11.2023, Tuesday, 12:00 (CET)
 Speaker: Prof. David Pardo, University of the Basque Country
 Title: Multimodal Variational Autoencoder for Inverse Problems in Geophysics.
 Abstract:
Estimating subsurface properties from geophysical measurements is a common challenge in inverse problems. In numerous geophysical applications, there is no unique solution; instead, there are multiple plausible solutions. In this context, Bayesian methods are designed to solve geophysical inverse problems and to quantify their associated uncertainties.
In this presentation, based on [1], we describe a novel approach to solve geophysical inverse problems with uncertainty quantification (UQ): a multimodal variational autoencoder (MVAE) model that employs a mixture of truncated Gaussian densities to provide multiple solutions to the inverse problem. These solutions contain information about their probability of occurrence and their UQ.
The MVAE model consists of two main components: an encoder and a decoder. The encoder employs a neural network to generate a mixture of truncated Gaussian densities, representing the distribution of the inverse problem solution. Conversely, the decoder computes the numerical solution to the forward problem based on geophysical principles.
MVAE allows for a more comprehensive exploration of potential solutions to geophysical inverse problems, accommodating their inherent UQ and providing a richer understanding of the solution space.
The objectives of the talk are to (a) describe the need for UQ in geophysical exploration methods, (b) introduce neural networks as a family of methods that opens new opportunities toward achieving UQ, (c) explain MVAE as an example of a method that provides UQ, and, most importantly, (d) open a collaboration space to develop new practical UQ methods in geophysics based on neural networks.
References:
[1] Rodriguez, O., Taylor, J. M., & Pardo, D. (2023). Multimodal variational autoencoder for inverse problems in geophysics: application to a 1D magnetotelluric problem. Geophysical Journal International, 235(3), 2598–2613.
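As an illustration of the density family above, the following minimal sketch (hypothetical, not the authors' code) shows how one might sample from a mixture of truncated Gaussians once an encoder has produced mixture weights, means, and standard deviations; the truncation enforces physical bounds on the inverted property:

```python
import numpy as np
from scipy.stats import truncnorm

def sample_truncated_gaussian_mixture(weights, means, stds, lower, upper, n, rng=None):
    """Draw n samples from a mixture of truncated Gaussians on [lower, upper].

    Hypothetical illustration of the MVAE's output density family: each
    component is a Gaussian truncated to the physical bounds of the inverted
    subsurface property. In the full MVAE, weights/means/stds would come from
    the encoder network rather than being hand-picked.
    """
    rng = np.random.default_rng(rng)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # Pick a mixture component for each sample, then draw from that component.
    comps = rng.choice(len(weights), size=n, p=weights)
    samples = np.empty(n)
    for k in range(len(weights)):
        idx = comps == k
        a = (lower - means[k]) / stds[k]  # standardized truncation bounds
        b = (upper - means[k]) / stds[k]
        samples[idx] = truncnorm.rvs(a, b, loc=means[k], scale=stds[k],
                                     size=idx.sum(), random_state=rng)
    return samples
```

Each mixture mode corresponds to one of the multiple plausible solutions of the inverse problem, with its weight indicating the probability of occurrence.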
28.11.2023, Tuesday, 12:00 (CET)
 Speaker: Prof. Sebastian Reich, University of Potsdam
 Title: An application of the EnKF to optimal control.
 Abstract:
Stochastic optimal control problems are typically phrased in terms of the associated Hamilton-Jacobi-Bellman equation. Solving such partial differential equations remains challenging. In this talk, an alternative approach involving forward and backward mean-field evolution equations will be considered. In its simplest incarnation, these equations become equivalent to the formulations used in score-based generative modelling. General optimal control problems lead to more complex forward-backward mean-field equations, which require further approximations; for these we will employ variants of the ensemble Kalman filter.
05.12.2023, Tuesday, 13:30 (CET)
 Speaker: Prof. Björn Sprungk, TU Bergakademie Freiberg
 Title: Metropolis-adjusted interacting particle sampling.
 Abstract:
In recent years, several interacting particle samplers have been proposed in order to sample approximately from complicated target distributions, such as posterior measures occurring in Bayesian inverse problems. These interacting particle samplers use an ensemble of interacting particles moving in the product state space according to coupled stochastic differential equations. In practice, we have to apply numerical time stepping to simulate these systems, such as the Euler-Maruyama scheme. However, the time discretization affects the invariance of the particle system with respect to the target distribution and, thus, introduces a bias. In order to correct for this, we study the application of a Metropolization step similar to the Metropolis-adjusted Langevin algorithm. We discuss ensemble and particle-wise Metropolization and state the basic convergence of the resulting ensemble Markov chain to the product target distribution. We show the benefits of this correction in numerical examples for common interacting particle samplers such as the affine invariant interacting Langevin dynamics (ALDI), consensus-based sampling, and stochastic Stein variational gradient descent.
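The particle-wise Metropolization can be sketched in its simplest form: a Metropolis-adjusted Euler-Maruyama (i.e., MALA) step applied independently to each particle. This toy version omits the interaction terms of samplers like ALDI and only illustrates how the accept/reject correction restores invariance with respect to the target:

```python
import numpy as np

def mala_step(x, log_target, grad_log_target, h, rng):
    """One particle-wise Metropolis-adjusted Euler-Maruyama (MALA) step.

    x: (n_particles, d) ensemble. The Euler-Maruyama proposal for overdamped
    Langevin dynamics is accepted/rejected per particle, which removes the
    time-discretization bias discussed in the abstract.
    """
    n, d = x.shape
    prop = x + h * grad_log_target(x) + np.sqrt(2 * h) * rng.standard_normal((n, d))

    def log_q(to, frm):
        # Log density (up to a constant) of the Gaussian EM proposal to | frm.
        diff = to - frm - h * grad_log_target(frm)
        return -np.sum(diff**2, axis=1) / (4 * h)

    log_alpha = (log_target(prop) - log_target(x)
                 + log_q(x, prop) - log_q(prop, x))
    accept = np.log(rng.uniform(size=n)) < log_alpha
    return np.where(accept[:, None], prop, x), accept
```

Without the accept/reject step, the chain targets a perturbed distribution whose bias grows with the step size h; with it, the target is exactly invariant.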
13.02.2024, Tuesday, 12:00 (CET)
 Speaker: Dr. Robert Kunsch, RWTH Aachen
 Title: Monte Carlo quadrature with optimal confidence
 Abstract:
We study the numerical integration of smooth functions using finitely many function evaluations within randomized algorithms, aiming for the smallest possible error guarantees with high probability (the so-called confidence level). There are different strategies for constructing robust estimators, for example, taking the median of several repeated realizations of a basic method, where the number of required repetitions depends on the desired confidence level. For Sobolev classes of continuous functions, however, we can find linear integration methods that have optimal error bounds for all confidence levels. Numerical experiments show that the tails of the error distribution are significantly smaller compared to previously known methods.
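The repetition-plus-median strategy mentioned above can be sketched in a few lines; this is a generic illustration of the median trick, not the optimal linear methods constructed in the talk:

```python
import numpy as np

def median_of_means_quadrature(f, n_per_group, n_groups, rng=None):
    """Robust Monte Carlo estimate of the integral of f over [0, 1].

    Repeats a plain MC average n_groups times and takes the median: the
    failure probability then decays exponentially in the number of
    repetitions, boosting the confidence level of the estimate.
    """
    rng = np.random.default_rng(rng)
    estimates = [f(rng.uniform(size=n_per_group)).mean() for _ in range(n_groups)]
    return float(np.median(estimates))
```

A single outlier group no longer ruins the estimate, which is exactly the tail-thinning effect contrasted with linear methods in the talk.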

Short Bio:
From 2009 until 2017, I studied mathematics at the Friedrich Schiller University Jena. My doctoral studies from 2014 until 2017 were supervised by Erich Novak, who introduced me to Monte Carlo methods in the framework of information-based complexity; the dissertation was titled "High-dimensional function approximation: Breaking the curse with Monte Carlo methods". After two years as a postdoc at Osnabrück University, I came to RWTH Aachen in 2019 and have been a member of the Chair of "Mathematics of Information Processing" since then. In summer 2023, I spent a semester at TU Chemnitz as a substitute professor, teaching courses on Monte Carlo methods and inverse problems. My research interest lies in the search for optimal (randomized) methods in different algorithmic settings where we only have access to partial information about problem instances, for example, finitely many function values in approximation or integration.
14.11.2023, Tuesday, 12:00 (CET)
 Speaker: Xinzhu Liang, University of Manchester
 Title: A randomized multi-index Monte Carlo method
 Abstract:
We consider the problem of estimating expectations with respect to a target distribution with an unknown normalizing constant, where even the unnormalized target needs to be approximated at finite resolution. Under such an assumption, we extend a recently introduced multi-index Sequential Monte Carlo (SMC) ratio estimator, which provably enjoys the complexity improvements of multi-index Monte Carlo (MIMC) and the efficiency of SMC for inference. The present work leverages a randomization strategy to remove bias entirely, which simplifies estimation substantially, particularly in the MIMC context, where the choice of index set is otherwise important. We provide theoretical results showing that, under appropriate assumptions, the proposed method achieves the same canonical complexity of $O(\mathrm{MSE}^{-1})$ as the original method, but without discretization bias. It is illustrated on examples of Bayesian inverse and spatial statistics problems.
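The bias-removal randomization is easiest to see in the single-index setting. The following sketch is a simplified Rhee-Glynn-style single-term estimator, not the multi-index SMC estimator of the talk; `delta` is a hypothetical sampler of level corrections:

```python
import numpy as np

def unbiased_singleterm_estimator(delta, p, n, rng=None):
    """Single-term randomized multilevel estimator (Rhee-Glynn-style sketch).

    delta(l, rng) returns one sample of the level-l correction
    Y_l = P_l - P_{l-1}; drawing a random level L ~ p and reweighting by
    1/p[L] removes the discretization bias entirely, the same randomization
    idea the talk applies in the multi-index SMC setting.
    """
    rng = np.random.default_rng(rng)
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    levels = rng.choice(len(p), size=n, p=p)
    draws = np.array([delta(l, rng) / p[l] for l in levels])
    return draws.mean()
```

The expected value is the telescoping sum over all levels, so no fixed finest level (and hence no bias) remains; only the choice of p trades off variance against cost.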
07.11.2023, Tuesday, 12:00 (CET)
 Speaker: Prof. Jonas Latz, University of Manchester
 Title: Subsampling in Continuous Time: from Optimisation to Sampling.
 Abstract:
The Stochastic Gradient Langevin Dynamics (SGLD) are popularly used to approximate Bayesian posterior distributions in statistical learning procedures with large-scale data. As opposed to many usual Markov chain Monte Carlo (MCMC) algorithms, SGLD is not stationary with respect to the posterior distribution; two sources of error appear: the first error is introduced by an Euler-Maruyama discretisation of a Langevin diffusion process, the second error comes from the data subsampling that enables its use in large-scale data settings. In this work, we consider an idealised version of SGLD to analyse the method's pure subsampling error, which we then see as a best-case error for diffusion-based subsampling MCMC methods. Based on a recent continuous-time formulation of the Stochastic Gradient Descent algorithm, we introduce and study the Stochastic Gradient Langevin Diffusion (SGLDiff). This dynamical system is a continuous-time Markov process that follows the Langevin diffusion corresponding to a data subset and switches this data subset after exponential waiting times. We show that the Wasserstein distance between the posterior and the limiting distribution of SGLDiff is bounded above by a fractional power of the mean waiting time. Importantly, this fractional power does not depend on the dimension of the state space. We bring our results into context with other analyses of SGLD.
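A minimal SGLD implementation makes both error sources explicit; this is a generic textbook sketch, not the idealized SGLDiff process analyzed in the talk:

```python
import numpy as np

def sgld(grad_log_prior, grad_log_lik, data, theta0, step, batch, n_iter, rng=None):
    """Stochastic Gradient Langevin Dynamics with minibatch subsampling.

    grad_log_lik(theta, x_batch) returns the likelihood gradient summed over
    the batch; it is rescaled by N/|batch| to give an unbiased estimate of
    the full-data gradient. Both error sources from the abstract are visible:
    the Euler-Maruyama step `step` and the subsampling via `batch`.
    """
    rng = np.random.default_rng(rng)
    theta = np.array(theta0, dtype=float)
    N = len(data)
    traj = []
    for _ in range(n_iter):
        idx = rng.choice(N, size=batch, replace=False)
        grad = grad_log_prior(theta) + (N / batch) * grad_log_lik(theta, data[idx])
        theta = theta + 0.5 * step * grad + np.sqrt(step) * rng.standard_normal(theta.shape)
        traj.append(theta.copy())
    return np.array(traj)
```

Even as step tends to zero, the subsampling error persists; isolating that residual error is precisely what the idealized SGLDiff analysis targets.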
24.10.2023, Tuesday, 12:00 (CEST)
 Speaker: Sophia Wiechert, RWTH Aachen University
 Title: Generic Importance Sampling via Optimal Control for Stochastic Reaction Networks.
 Abstract:
Stochastic reaction networks (SRNs) are a class of continuous-time, discrete-state Markov processes describing the random interaction of d species through reactions. SRNs are commonly used for modeling diverse phenomena such as (bio)chemical reactions, epidemics, risk theory, queuing, and supply chain networks. In this context, we propose two alternative importance sampling (IS) approaches to improve the efficiency of Monte Carlo (MC) estimators for various statistical quantities. Our special interest lies in estimating rare-event probabilities in high-dimensional SRNs, in which $d \gg 1$. The challenge in IS is to choose an appropriate change of probability measure to achieve a substantial variance reduction. We propose an automated approach to obtain a highly efficient path-dependent measure change based on an original connection between finding optimal IS parameters and solving a variance minimization problem via a stochastic optimal control formulation. We pursue two alternative approaches to mitigate the curse of dimensionality when solving the resulting backward Hamilton-Jacobi-Bellman (HJB) equation. In the first approach [1], we propose a learning-based method to approximate the value function using an ansatz function (e.g., a neural network), where the parameters are determined via a stochastic optimization algorithm. As an alternative, we present in [2] a dimension reduction method based on mapping the problem to a significantly lower-dimensional space via the Markovian projection (MP) idea. The output of this model reduction technique is a much lower-dimensional SRN that preserves the marginal distribution of the original high-dimensional problem at all time steps. By solving the HJB equation for a projected lower-dimensional process, we get projected IS parameters, which can be mapped back to the original $d$-dimensional SRN.
Our analysis and numerical experiments verify that both proposed IS approaches (learning-based and MP-HJB) substantially reduce the MC estimator's variance, resulting in a lower computational complexity in the rare-event regime than standard MC estimators.
[1] Ben Hammouda, C., Ben Rached, N., Tempone, R., & Wiechert, S. (2023). Learning-based importance sampling via stochastic optimal control for stochastic reaction networks. Statistics and Computing, 33(3), 58.
[2] Ben Hammouda, C., Ben Rached, N., Tempone, R., & Wiechert, S. (2023). Automated Importance Sampling via Optimal Control for Stochastic Reaction Networks: A Markovian Projection-based Approach. arXiv preprint arXiv:2306.02660.
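The basic IS principle, changing the sampling measure and reweighting by the likelihood ratio, can be shown on a scalar Gaussian toy problem; this sketch is far simpler than the optimal-control-based, path-dependent measure changes of [1] and [2]:

```python
import numpy as np
from scipy.stats import norm

def is_rare_event_prob(threshold, n, rng=None):
    """Importance sampling with a mean-shifted (exponentially tilted) Gaussian.

    Toy analogue of the measure change discussed in the abstract: sampling
    from N(threshold, 1) instead of N(0, 1) and reweighting by the likelihood
    ratio slashes the relative variance of the rare-event estimator
    P(X > threshold) for X ~ N(0, 1).
    """
    rng = np.random.default_rng(rng)
    y = rng.normal(loc=threshold, scale=1.0, size=n)     # proposal samples
    weights = norm.pdf(y) / norm.pdf(y, loc=threshold)   # likelihood ratio
    return float(np.mean((y > threshold) * weights))
```

Under the original measure almost no sample hits the rare event; under the tilted measure roughly half do, and the weights keep the estimator unbiased.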
17.10.2023, Tuesday, 12:00 (CEST)
 Speaker: Arved Bartuska, RWTH Aachen University
 Title: Efficient nested integration estimators for expected information gain.
 Abstract:
Nested integration problems consist of one outer and one inner integral separated by a nonlinear function. Estimating these integrals poses significant computational challenges. We employ the randomized quasi-Monte Carlo method to estimate both the outer and inner integrals, yielding a double-loop quasi-Monte Carlo estimator. Furthermore, we derive asymptotic error bounds to obtain the optimal number of outer and inner samples, depending on a specified error tolerance. This estimator is then applied to find the expected information gain (EIG) of an experiment, allowing for efficient data collection.
Mathematical models of experiments often contain nuisance uncertainty, i.e., uncertainty about parameters that are not of interest to the experimenter directly. If these uncertainties are to be quantified rigorously, a second inner integral is added to the nested structure of the EIG. We utilize the Laplace approximation to derive two novel and efficient estimators that consider nuisance uncertainty and demonstrate the effectiveness of these estimators via three numerical examples.
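The double-loop structure can be sketched with plain Monte Carlo in place of the randomized quasi-Monte Carlo points used in the talk; `sample_prior`, `sample_data`, and `loglik` are hypothetical user-supplied model components:

```python
import numpy as np

def dlmc_eig(sample_prior, sample_data, loglik, n_outer, n_inner, rng=None):
    """Double-loop Monte Carlo estimator of the expected information gain.

    Outer loop: draw (theta, y) from the joint distribution. Inner loop:
    fresh prior samples estimate the evidence p(y) inside the logarithm.
    EIG = E_y[ log p(y | theta) - log p(y) ].
    """
    rng = np.random.default_rng(rng)
    eig = 0.0
    for _ in range(n_outer):
        theta = sample_prior(1, rng)
        y = sample_data(theta, rng)
        inner = sample_prior(n_inner, rng)
        # log p(y) estimated via log-mean-exp of the inner log-likelihoods
        log_evidence = np.logaddexp.reduce(loglik(y, inner)) - np.log(n_inner)
        eig += loglik(y, theta)[0] - log_evidence
    return eig / n_outer
```

The nonlinearity (the logarithm) is what makes the inner sample size matter: it introduces a bias of order one over the number of inner samples, which the error bounds in the talk balance against the outer sampling error.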
10.10.2023, Tuesday, 12:00 (CEST)
 Speaker: Prof. Ahmed Kebaier, University of Evry Paris-Saclay
 Title: The interpolated drift implicit Euler scheme Multilevel Monte Carlo method for pricing barrier options and applications to the CIR and CEV models.
 Abstract:
Recently, Giles et al. [14] proved that the efficiency of the Multilevel Monte Carlo (MLMC) method for evaluating Down-and-Out barrier options for a diffusion process $(X_t)_{t\in[0,T]}$ with globally Lipschitz coefficients can be improved by combining a Brownian bridge technique and a conditional Monte Carlo method, provided that the running minimum $\inf_{t\in[0,T]} X_t$ has a bounded density in the vicinity of the barrier.
In the present work, thanks to the Lamperti transformation technique and using a Brownian interpolation of the drift implicit Euler scheme of Alfonsi [2], we show that the efficiency of the MLMC method can also be improved for the evaluation of barrier options for models with non-Lipschitz diffusion coefficients under certain moment constraints. We study two example models: the Cox-Ingersoll-Ross (CIR) and the Constant Elasticity of Variance (CEV) processes, for which we show that the conditions of our theoretical framework are satisfied under certain restrictions on the model parameters. In particular, we develop semi-explicit formulas for the densities of the running minimum and running maximum of both the CIR and CEV processes, which are of independent interest. Finally, numerical tests are performed to illustrate our results.
[14] M. B. Giles, K. Debrabant, and A. Rössler. 'Analysis of multilevel Monte Carlo path simulation using the Milstein discretisation.' Discrete Contin. Dyn. Syst. Ser. B, 24(8):3881–3903, 2019.
[2] A. Alfonsi. 'Strong order one convergence of a drift implicit Euler scheme: Application to the CIR process.' Statistics & Probability Letters, 83(2):602–607, 2013.
09.05.2023, Tuesday, 15:00 (CEST)
 Speaker: Prof. Michael Herty, IGPM, RWTH Aachen University
 Title: UQ for hyperbolic problems
 Abstract:
We are interested in quantifying uncertainties that appear in nonlinear hyperbolic partial differential equations arising in a variety of applications, from fluid flow to traffic modeling. A common approach to treating the stochastic components of the solution is to use generalized polynomial chaos (gPC) expansions. This method was successfully applied in particular to general elliptic and parabolic PDEs as well as linear hyperbolic stochastic equations. More recently, gPC methods have been successfully applied to particular hyperbolic PDEs using the explicit form of the nonlinearity or the particularity of the studied system structure, as, e.g., in the p-system. While such models arise in many applications, e.g., in atmospheric flows, fluid flows under uncertain gas compositions, and shallow water flows, a general gPC theory with corresponding numerical methods is still at large. Typical analytical and numerical challenges that appear for the gPC-expanded systems are loss of hyperbolicity and loss of positivity of solutions (like gas density or water depth). Any of those effects might trigger severe instabilities within classical finite-volume or discontinuous Galerkin methods. We will discuss properties and conditions to guarantee stability and present numerical results on selected examples.
02.05.2023, Tuesday, 15:00 (CEST)
 Speaker: Dr. Burcu Aydogan, RWTH Aachen University
 Title: Optimal investment strategies under relative performance in jump-diffusion markets
 Abstract:
We work on a portfolio management problem for a representative agent and a group of people forming a market, under relative performance concerns in a continuous-time setting. Herein, we define two wealth dynamics: the agent's and the market's wealth. The wealth dynamics evolve in jump-diffusion markets. In our setting, we measure the performances of the market and of the individual agent, with the agent's preferences linked to the market performance. Therefore, we have two classical Merton problems: determining what the market does, and the agent's optimal strategy relative to the market performance. Furthermore, our framework assumes that the agent's utility performance does not affect the market, while the market affects the agent's utility. We explore the optimal investment strategies for both the agent and the market.
This is joint work with Mogens Steffensen at the University of Copenhagen.
18.04.2023, Tuesday, 15:00 (CEST)
 Speaker: Prof. Ian Hugh Sloan, University of New South Wales (UNSW Australia)
 Title: High dimensional approximation – avoiding the curse of dimensionality, doubling the proven convergence rate.
 Abstract:
High dimensional approximation problems commonly arise from parametric PDE problems in which the parametric input depends on very many independent univariate random variables. Typically (as in the method of “generalized polynomial chaos”, or GPC) the dependence on these variables is modelled by multivariate polynomials, leading to exponentially increasing difficulty and cost (expressed as the “curse of dimensionality”) as the dimension increases. For this reason, sparsity of coefficients is a major focus in implementations of GPC.
In this lecture we develop a different approach to one version of GPC. In this method there is no need for sparsification, and no curse of dimensionality. The method, proposed in a 2022 paper with Frances Kuo, Vesa Kaarnioja, Yoshihito Kazashi and Fabio Nobile, uses kernel interpolation with periodic kernels, with the kernels located at lattice points, as advocated long ago by Hickernell and colleagues.
The lattice points and the kernels depend on parameters called “weights”. In the 2022 paper the recommended weights were “SPOD” weights, leading to a cost growing as the square of the number of lattice points. A newer 2023 paper with Kuo and Kaarnioja introduced “serendipitous” weights, for which the cost grows only linearly with both dimension and number of lattice points, allowing practical computations in as many as 1,000 dimensions.
The rate of convergence proved in the above papers was of the order $n^{-\alpha/2}$ for interpolation using the reproducing kernel of a space with mixed smoothness of order $\alpha$. A new result with Frances Kuo doubles the proven convergence rate to $n^{-\alpha}$.
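A one-dimensional sketch conveys the kernel-interpolation idea; the talk's setting is high-dimensional with rank-1 lattice point sets and weighted kernels, whereas here the "lattice" degenerates to equispaced points and the kernel is a truncated Korobov-type reproducing kernel:

```python
import numpy as np

def periodic_kernel(x, y, alpha=2, kmax=50):
    """Truncated reproducing kernel of a 1D Korobov space of smoothness alpha."""
    k = np.arange(1, kmax + 1)
    diff = np.subtract.outer(x, y)
    return 1.0 + 2.0 * np.sum(
        k ** (-2.0 * alpha) * np.cos(2 * np.pi * np.multiply.outer(diff, k)), axis=-1)

def lattice_kernel_interpolant(f, n, alpha=2):
    """Kernel interpolation of a 1-periodic f at the n lattice points {i/n}.

    Solve K c = f(points), then evaluate s(x) = sum_j c_j K(x, x_j). In the
    high-dimensional method of the talk, the points form a rank-1 lattice and
    the kernel carries the weights discussed in the abstract.
    """
    pts = np.arange(n) / n
    K = periodic_kernel(pts, pts, alpha)
    coef = np.linalg.solve(K, f(pts))
    return lambda x: periodic_kernel(np.atleast_1d(x), pts, alpha) @ coef
```

The lattice structure makes the kernel matrix circulant, which is what allows the fast (FFT-based) computations that make the high-dimensional method practical.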
11.04.2023, Tuesday, 15:00 (CEST)
 Speaker: Yang Liu, King Abdullah University of Science and Technology
 Title: Goal-oriented adaptive finite element multilevel Monte Carlo with convergence rates
 Abstract:
We propose our Adaptive Multilevel Monte Carlo (AMLMC) method [Beck, Joakim, et al. "Goal-oriented adaptive finite element multilevel Monte Carlo with convergence rates." Computer Methods in Applied Mechanics and Engineering (2022)] to solve an elliptic partial differential equation (PDE) with lognormal random input data, where the PDE model is subject to geometry-induced singularities.
The previous work [Moon, K.-S., et al. "Convergence rates for an adaptive dual weighted residual finite element algorithm." BIT Numerical Mathematics 46.2 (2006)] developed convergence rates for a goal-oriented adaptive algorithm based on isoparametric d-linear quadrilateral finite element approximations and the dual weighted residual error representation in the deterministic setting. This algorithm refines the mesh based on the error contribution to the quantity of interest (QoI).
This work aims to combine MLMC and the adaptive finite element solver. Contrary to standard multilevel Monte Carlo methods, where each sample is computed using a discretization-based numerical method whose resolution is linked to the level, our AMLMC algorithm uses a sequence of tolerances as the levels. Specifically, for a given realization of the input coefficient and a given accuracy level, AMLMC constructs its approximate sample using the first mesh, in the sequence of deterministic, non-uniform meshes generated by the above-mentioned adaptive algorithm, that satisfies the sample-dependent bias constraint.
28.03.2023, Tuesday, 15:00 (CEST)
 Speaker: Dr. André Carlon, KAUST
 Title: Bayesian quasi-Newton method for stochastic optimization
 Abstract:
Stochastic optimization problems arise in many fields, like data science, reliability engineering, and finance. The stochastic gradient descent (SGD) method is a cheap approach to solving such problems, relying on noisy gradient estimates to converge to local optima. However, in the case of μ-convex, L-smooth problems, the convergence is deeply affected by the condition number of the problem, L/μ. Here, we propose a Bayesian approach to find a suitable matrix to precondition gradient estimates in stochastic optimization, reducing the effect of large condition numbers. We show that maximizing the posterior distribution to find a suitable preconditioning matrix is a constrained deterministic strongly convex problem that can be solved efficiently using the Newton-CG method with a path-following approach. Numerical results on stochastic problems with large condition numbers show that our Bayesian quasi-Newton preconditioner improves the convergence of SGD.
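The effect of a preconditioning matrix on SGD can be sketched generically; the Bayesian estimation of P via posterior maximization is the talk's contribution and is not reproduced here, so P is simply supplied by the user:

```python
import numpy as np

def preconditioned_sgd(grad_est, x0, P, step, n_iter, rng=None):
    """SGD with a fixed preconditioning matrix P (hypothetical stand-in for
    the Bayesian-estimated preconditioner of the talk).

    Update: x <- x - step * P @ g, with g a noisy gradient estimate. A good P
    approximates the inverse Hessian, taming a large condition number L/mu.
    """
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x = x - step * P @ grad_est(x, rng)
    return x
```

With P equal to the inverse Hessian of a quadratic objective, every coordinate contracts at the same rate, so the step size no longer has to be throttled by the stiffest direction.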
21.03.2023, Tuesday, 15:00 (CET)
 Speaker: Prof. Antonis Papapantoleon, Delft Institute of Applied Mathematics and Institute of Applied and Computational Mathematics, FORTH
 Title: A splitting deep Ritz method for multi-asset option pricing in Lévy models
 Abstract:
Solving high-dimensional differential equations is still a challenging field for researchers. In recent years, many works have been presented that provide approximations by training neural networks using loss functions based on the differential operator of the equation at hand, as well as its initial/terminal and boundary conditions. In this work, we use a machine learning approach for pricing European (basket) options written on a set of correlated underlyings whose dynamics undergo random jumps. We approximate the solution of the corresponding partial integro-differential equation using a variant of the deep Ritz method that splits the differential operator into symmetric and asymmetric parts. The method is driven by a modified version of the neural network introduced in the deep Galerkin method. The structure of the proposed neural network ensures the asymptotic behavior of the solution for large values of the underlyings. Moreover, it leads the outputs of the network to be consistent with the previously known qualitative properties of the solution. We present results on the Merton jump-diffusion model.
14.03.2023, Tuesday, 15:00 (CET)
 Speaker: Prof. Dr. Markus Bachmayr, RWTH Aachen University
 Title: Optimality of adaptive stochastic Galerkin methods for affine-parametric elliptic PDEs
 Abstract:
We consider the computational complexity of approximating elliptic PDEs with random coefficients by sparse product polynomial expansions. Except for special cases (for instance, when the spatial discretization limits the achievable overall convergence rate), previous approaches for a posteriori selection of polynomial terms and corresponding spatial discretizations do not guarantee optimal complexity in the sense of computational costs scaling linearly in the number of degrees of freedom. We show that one can achieve optimality of an adaptive Galerkin scheme for discretizations by spline wavelets in the spatial variable when a multiscale representation of the affinely parameterized random coefficients is used. This is joint work with Igor Voulis.
M. Bachmayr and I. Voulis, An adaptive stochastic Galerkin method based on multilevel expansions of random fields: Convergence and optimality, ESAIM: M2AN 56(6), pp. 1955–1992, 2022. Preprint: arXiv:2109.09136.
07.03.2023, Tuesday, 15:00 (CET)
 Speaker: Prof. dr. ir. C.W. (Kees) Oosterlee, Utrecht University
 Title: On the application of machine learning to enhance algorithms in computational finance
 Abstract: In this presentation, we will first give a brief overview of our experiences with the use of artificial neural networks (ANNs) in finance. We will give examples of supervised, unsupervised, and reinforcement learning.
After this, we will outline the use of neural networks for the calibration of a financial asset price model in the context of financial option pricing. To provide an efficient calibration framework, a data-driven approach is proposed to learn the solutions of financial models and to reduce the corresponding computation time significantly.
Specifically, fitting the model parameters is formulated as training hidden neurons within a machine-learning framework.
The rapid online computation of ANNs, combined with a flexible optimization method (i.e., Differential Evolution), provides fast calibration without getting stuck in local minima.
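The calibration loop itself can be sketched with SciPy's `differential_evolution`; in the proposed framework the expensive pricing model would be replaced by a trained ANN surrogate, whereas this toy example calls a cheap model function directly:

```python
import numpy as np
from scipy.optimize import differential_evolution

def calibrate(model, x, y_obs, bounds, seed=0):
    """Fit model parameters to observations with Differential Evolution.

    model(x, *params) plays the role of the (ANN-accelerated) pricer; the
    global search of Differential Evolution avoids getting stuck in local
    minima of the squared calibration error.
    """
    loss = lambda p: np.sum((model(x, *p) - y_obs) ** 2)  # squared pricing error
    result = differential_evolution(loss, bounds, seed=seed, tol=1e-8)
    return result.x
```

The speed-up in the talk comes from the model evaluation inside `loss` being an ANN forward pass instead of a full numerical pricer, so the many evaluations required by the population-based search become cheap.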
21.02.2023, Tuesday, 15:00 (CET)
 Speaker: Shyam Mohan Subbiah Pillai, RWTH Aachen University
 Title: Importance sampling for McKean-Vlasov stochastic differential equations
 Abstract: We are interested in Monte Carlo (MC) methods for estimating probabilities of rare events associated with solutions to the McKean-Vlasov stochastic differential equation (MV-SDE). MV-SDEs arise in the mean-field limit of stochastic interacting particle systems, which have many applications in pedestrian dynamics, collective animal behaviour, and financial mathematics. Importance sampling (IS) is used to reduce the high relative variance of MC estimators of rare-event probabilities. The optimal change of measure is methodically derived from variance minimization, yielding a high-dimensional partial differential control equation, which is cumbersome to solve. This problem is circumvented by using a decoupling approach, resulting in a lower-dimensional control PDE. The decoupling approach necessitates the use of a double-loop Monte Carlo (DLMC) estimator. We further combine IS with a novel multilevel DLMC estimator, which not only reduces the complexity from $O(\mathrm{TOL}^{-4})$ to $O(\mathrm{TOL}^{-3})$ but also drastically reduces the associated constant, enabling computationally feasible estimation of rare-event probabilities.
14.02.2023, Tuesday, 15:00 (CET)
 Speaker: Christian Bayer, Weierstrass Institute for Applied Analysis and Stochastics
 Title: Motivated by the challenges related to the calibration of financial models, we consider the problem of solving numerically a singular McKean-Vlasov equation
 Abstract:
$$
d S_t = \sigma(t, S_t)\, S_t\, \frac{\sqrt{v_t}}{\sqrt{\mathbb{E}[v_t \mid S_t]}}\, dW_t,
$$
where $W$ is a Brownian motion and $v$ is an adapted diffusion process. This equation can be considered as a singular local stochastic volatility model.
Whilst such models are quite popular among practitioners, their well-posedness has unfortunately not been fully understood yet and, in general, is possibly not guaranteed at all.
We develop a novel regularization approach based on the reproducing kernel Hilbert space (RKHS) technique and show that the regularized model is well-posed. Furthermore, we prove propagation of chaos. We demonstrate numerically that a thus regularized model is able to perfectly replicate option prices given by typical local volatility models. Our results are also applicable to more general McKean-Vlasov equations. (Joint work with Denis Belomestny, Oleg Butkovsky, and John Schoenmakers.)
07.02.2023, Tuesday, 15:00 (CET)
 Speaker: Prof. Per-Christian Hansen, Technical University of Denmark
 Title: Edge-Preserving Computed Tomography (CT) with Uncertain View Angles.
 Abstract: In computed tomography, data consist of measurements of the attenuation of X-rays passing through an object. The goal is to reconstruct an image of the linear attenuation coefficient of the object's interior. For each position of the X-ray source, characterized by its angle with respect to a fixed coordinate system, one measures a set of data referred to as a view. A common assumption is that these view angles are known, but in some applications they are known with imprecision.
We present a Bayesian inference approach to solving the joint inverse problem for the image and the view angles, while also providing uncertainty estimates. For the image, we impose a Laplace difference prior enabling the representation of sharp edges in the image; this prior has connections to total variation regularization. For the view angles, we use a von Mises prior, which is a 2π-periodic continuous probability distribution.
Numerical results show that our algorithm can jointly identify the image and the view angles, while also providing uncertainty estimates of both. We demonstrate our method with simulations of a 2D X-ray computed tomography problem using fan-beam configurations. This is joint work with N. A. B. Riis and Y. Dong, Technical University of Denmark; F. Uribe, LUT University, Finland; and J. M. Bardsley, University of Montana.
06.12.2022, Tuesday, 15:00 (CET)
 Speaker: Prof. Michael Feischl, TU Wien (Institute for Analysis and Scientific Computing)
 Title: A quasi-Monte Carlo data compression algorithm for machine learning.
 Abstract: We present an algorithm to reduce large data sets using so-called digital nets, which are well-distributed point sets in the unit cube. The algorithm efficiently scans the data and computes certain data-dependent weights. Those weights are used to approximately represent the data, without making any assumptions on the distribution of the data points. Under smoothness assumptions on the model, we then show that this can be used to reduce the computational effort needed to find good parameters in machine learning problems that aim to minimize standard loss functions. While the principal idea of the approximation might also work with other point sets, the particular structural properties of digital nets can be exploited to make the computation of the necessary weights extremely fast.
29.11.2022, Tuesday, 15:00 (CET)
 Speaker: Dr. Truong-Vinh Hoang, Chair of Mathematics for Uncertainty Quantification, RWTH Aachen University
 Title: A likelihood-free nonlinear filtering approach using the machine-learning-based approximation of conditional expectation.
 Abstract: We discuss the machine-learning-based ensemble conditional mean filter (ML-EnCMF), developed for nonlinear data assimilation and based on the orthogonal projection of the conditional mean. The updated mean of the filter matches that of the posterior. Moreover, we show that the filter's updated covariance coincides with the expected conditional covariance. Implementing the EnCMF requires computing the conditional mean. A likelihood-based estimator is prone to significant errors for small ensemble sizes, causing filter divergence. We develop a systematic methodology for integrating machine learning into the EnCMF using the orthogonal projection property of the conditional expectation. First, we use a combination of an artificial neural network (ANN) and a linear function, obtained based on the ensemble Kalman filter (EnKF), to approximate the conditional mean, enabling the ML-EnCMF to inherit the EnKF's advantages. Secondly, we apply a suitable variance reduction technique to reduce statistical errors when estimating the loss function. Lastly, we propose a model selection procedure for selecting the applied filter element-wise. We demonstrate the ML-EnCMF's performance using the Lorenz-63 and Lorenz-96 systems and show that the ML-EnCMF outperforms the EnKF and the likelihood-based EnCMF.
22.11.2022, Tuesday, 15:00 (CET)
 Speaker: Prof. Raul Tempone, RWTH Aachen University and KAUST
 Title: A simple approach to proving the existence, uniqueness, and strong and weak convergence rates for a broad class of McKean-Vlasov equations
 Abstract: By employing a system of interacting stochastic particles as an approximation of the McKean–Vlasov equation and utilizing classical stochastic analysis tools, namely Itô's formula and the Kolmogorov–Chentsov continuity theorem, we prove the existence and uniqueness of strong solutions for a broad class of McKean–Vlasov equations as a limit of the conditional expectation of exchangeable particles. Considering an increasing number of particles in the approximating stochastic particle system, we also prove the $L^p$ strong convergence rate and derive the weak convergence rates using the Kolmogorov backward equation and variations of the stochastic particle system. Our convergence rates were verified by numerical experiments, which also indicate that the assumptions made here and in the literature can be relaxed.
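The particle approximation underlying this construction can be sketched on a toy McKean-Vlasov equation whose drift depends on the law only through its mean; the empirical mean over exchangeable particles stands in for $\mathbb{E}[X_t]$:

```python
import numpy as np

def mckean_vlasov_particles(x0, sigma, T, n_steps, n_particles, rng=None):
    """Euler-Maruyama simulation of the interacting particle approximation of
    the toy McKean-Vlasov SDE dX_t = (E[X_t] - X_t) dt + sigma dW_t.

    The law-dependence E[X_t] is replaced by the empirical mean over the
    exchangeable particles; this is the approximation whose strong and weak
    convergence rates (in the number of particles) the talk analyzes.
    """
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = np.full(n_particles, float(x0))
    for _ in range(n_steps):
        # Empirical mean x.mean() approximates the mean-field term E[X_t].
        x = x + (x.mean() - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x
```

For this toy drift the mean-field mean stays at x0 while each particle fluctuates around it like an Ornstein-Uhlenbeck process, so the stationary spread (variance near sigma²/2) is easy to check against the simulation.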