This list contains all my publications. Whenever possible I've added links to the publication, source code, or other supporting material.
The full list can also be downloaded as a BibLaTeX file or as a PDF.
Abstract: Finding optimal data for inpainting is a key problem for image compression with partial differential equations (PDEs). Not only the location of important pixels but also their values should optimise the compression quality. The position of such important data is usually encoded in a binary mask. The corresponding pixel values are real valued and yield prohibitively high storage costs in the context of data compression. Therefore, quantisation strategies for the pixel value domain are mandatory to obtain high compression ratios. While existing methods to quantise the data for PDE-based compression show good quality, unfortunately, they are too slow for many applications. We discuss several strategies, based on data clustering models from machine learning, to speed up the quantisation step. Laurent Hoeltgen, Michael Breuß, “Efficient Co-domain Quantisation for PDE-based Image Compression”, in Proceedings of the Conference Algoritmy 2016, A. Handlovičová, D. Ševčovič (eds.), pp. 194--203, 2016, Publishing House of Slovak University of Technology in Bratislava, available online: http://www.iam.fmph.uniba.sk/amuc/ojs/index.php/algoritmy/article/view/408

Abstract: Some recent methods for lossy signal and image compression store only a few selected pixels and fill in the missing structures by inpainting with a partial differential equation (PDE). Suitable operators include the Laplacian, the biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The quality of such approaches depends substantially on the selection of the data that is kept. Optimising this data in the domain and codomain gives rise to challenging mathematical problems that shall be addressed in our work. In the 1D case, we prove results that provide insights into the difficulty of this problem, and we give evidence that a splitting into spatial and tonal (i.e. function value) optimisation hardly deteriorates the results.
In the 2D setting, we present generic algorithms that achieve a high reconstruction quality even if the specified data is very sparse. To optimise the spatial data, we use a probabilistic sparsification, followed by a nonlocal pixel exchange that avoids getting trapped in bad local optima. After this spatial optimisation we perform a tonal optimisation that modifies the function values in order to reduce the global reconstruction error. For homogeneous diffusion inpainting, this comes down to a least squares problem for which we prove that it has a unique solution. We demonstrate that it can be found efficiently with a gradient descent approach that is accelerated with fast explicit diffusion (FED) cycles. Our framework allows us to specify the desired density of the inpainting mask a priori. Moreover, it is more generic than other data optimisation approaches for the sparse inpainting problem, since it can also be extended to nonlinear inpainting operators such as EED. This is exploited to achieve reconstructions with state-of-the-art quality. Laurent Hoeltgen, Markus Mainberger, Sebastian Hoffmann, Joachim Weickert, Ching Tang, Simon Setzer, Daniel Johannsen, Frank Neumann, Benjamin Doerr, “Optimising Spatial and Tonal Data for PDE-based Inpainting”, in Variational Methods, Maitine Bergounioux, Gabriel Peyré, Christoph Schnörr, Jean-Baptiste Caillau, Thomas Haberkorn (eds.), pp. 35--83, 2016, De Gruyter, 10.1515/9783110430394-002

Abstract: Finding optimal data for inpainting is a key problem in the context of partial differential equation-based image compression. We present a new model for optimising the data used for the reconstruction by the underlying homogeneous diffusion process. Our approach is based on an optimal control framework with a strictly convex cost functional containing an $L_1$ term to enforce sparsity of the data and non-convex constraints. We propose a numerical approach that solves a series of convex optimisation problems with linear constraints.
Our numerical examples show that it outperforms existing methods with respect to quality and computation time. Laurent Hoeltgen, Simon Setzer, Joachim Weickert, “An Optimal Control Approach to Find Sparse Data for Laplace Interpolation”, in Energy Minimization Methods in Computer Vision and Pattern Recognition, Anders Heyden, Frederik Kahl, Carl Olsson, Magnus Oskarsson, Xue-Cheng Tai (eds.), pp. 151--164, 2013, Springer Berlin Heidelberg, 10.1007/978-3-642-40395-8_12

Abstract: Finding optimal data for inpainting is a key problem for image compression with partial differential equations. Not only the location of important pixels but also their values should be optimal to maximise the quality gain. The position of important data is usually encoded in a binary mask. Recent studies have shown that allowing non-binary masks may lead to tremendous speedups but comes at the expense of higher storage costs and yields prohibitive memory requirements for the design of competitive image compression codecs. We show that a recently suggested heuristic to eliminate the additional storage costs of the non-binary mask has a strong theoretical foundation in finite dimension. Binary and non-binary masks are equivalent in the sense that they can both give the same reconstruction error if the binary mask is supplemented with optimal data which does not increase the memory footprint. Further, we suggest two fast numerical schemes to obtain this optimised data. This provides a significant building block in the conception of efficient data compression schemes with partial differential equations. Laurent Hoeltgen, Joachim Weickert, “Why does non-binary mask optimisation work for diffusion-based image compression?”, in Energy Minimization Methods in Computer Vision and Pattern Recognition, X.-C. Tai, E. Bae, T. Chan, S. Leung, M. Lysaker (eds.), pp.
85--98, 2015, Springer International Publishing, 10.1007/978-3-319-14612-6_7

Abstract: Partial differential equations are well suited for dealing with image reconstruction tasks such as inpainting. One of the most successful mathematical frameworks for image reconstruction relies on variations of the Laplace equation with different boundary conditions. In this work we analyse these formulations and discuss the existence and uniqueness of solutions of corresponding boundary value problems, as well as their regularity from an analytic point of view. Our work not only sheds light on useful aspects of the well-posedness of several standard problem formulations in image reconstruction but also aggregates them in a common framework. In addition, the performed analysis guides us to specify two new formulations of the classic image reconstruction problem that may give rise to new developments in image reconstruction. Laurent Hoeltgen, Isaac Harris, Michael Breuß, Andreas Kleefeld, “Analytic Existence and Uniqueness Results for PDE-Based Image Reconstruction with the Laplacian”, in Lecture Notes in Computer Science, pp. 66--79, 2017, Springer International Publishing, 10.1007/978-3-319-58771-4_6

Abstract: The main task in three dimensional shape matching is to retrieve correspondences between two similar three dimensional objects. To this end, a suitable point descriptor which is invariant under isometric transformations is required. A commonly used descriptor class relies on the spectral decomposition of the Laplace-Beltrami operator. Important examples are the heat kernel signature and the more recent wave kernel signature. In previous works, the evaluation of the descriptor is based on eigenfunction expansions. A significant practical aspect is that computing a complete expansion is very time and memory consuming. Thus, additional strategies are usually introduced that make it possible to employ only part of the full expansion.
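As a side note for readers unfamiliar with such truncated expansions, the idea can be sketched on a toy graph Laplacian. This is purely illustrative: the papers above work with discretised Laplace-Beltrami operators of 3D shapes, and the function name below is invented.

```python
import numpy as np

def truncated_hks(L, times, num_eigs):
    """Heat kernel signature from a truncated eigen-expansion.

    HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2, keeping only the
    num_eigs smallest eigenpairs of the symmetric Laplacian L. This mirrors
    the truncation of the full expansion mentioned above, here on a toy
    graph Laplacian instead of a discretised Laplace-Beltrami operator.
    """
    eigvals, eigvecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    lam = eigvals[:num_eigs]
    phi = eigvecs[:, :num_eigs]
    # One signature value per vertex (row) and diffusion time (column).
    return np.stack([(np.exp(-lam * t) * phi ** 2).sum(axis=1) for t in times],
                    axis=1)

# Tiny example: path graph on five vertices, Laplacian L = D - A.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A
sig = truncated_hks(L, times=[0.1, 1.0, 10.0], num_eigs=3)
```

Because the squared eigenfunctions are reflection-symmetric on a path graph, mirror vertices receive identical signatures even under truncation, which is exactly the isometry invariance these descriptors aim for.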
In this paper we explore an alternative solution strategy. We discretise the underlying partial differential equations (PDEs) not only in space as in the mentioned approaches, but we also tackle the temporal parts by using time integration methods. Thus we do not perform eigenfunction expansions and avoid the use of additional strategies and corresponding parameters. We study here the PDEs behind the heat and wave kernel signature, respectively. Our shape matching experiments show that our approach may lead to quality improvements for finding correct correspondences in comparison to the eigenfunction expansion methods. Robert Dachsel, Michael Breuß, Laurent Hoeltgen, “Shape Matching by Time Integration of Partial Differential Equations”, in Lecture Notes in Computer Science, pp. 669--680, 2017, Springer International Publishing, 10.1007/978-3-319-58771-4_53

Abstract: Morphological levelings represent a useful tool for the decomposition of an image into cartoon and texture components. Moreover, they can be used to construct a morphological scale space. However, the classic construction of levelings is limited to grey scale images, since an ordering of pixel values is required. In this paper we propose an extension of morphological levelings to colour images. To this end, we consider the formulation of colour images as matrix fields and explore techniques based on the Loewner order for formulating morphological levelings in this setting. Using the matrix-valued colours we study realisations of levelings relying on both the completely discrete construction and the formulation using a partial differential equation. Experimental results confirm the potential of our matrix-based approaches for analysing texture in colour images and for extending the range of applications of levelings in a convenient way to colour image processing.
Michael Breuß, Laurent Hoeltgen, Andreas Kleefeld, “Matrix-Valued Levelings for Colour Images”, in Lecture Notes in Computer Science, pp. 296--308, 2017, Springer International Publishing, 10.1007/978-3-319-57240-6_24

Abstract: The aim of this paper is to deal with Poisson noise in images arising in electron microscopy. We consider here especially images featuring sharp edges and many relatively large smooth regions together with smaller strongly anisotropic structures. To deal with the denoising task, we propose a variational method combining a data fidelity term that takes into account the Poisson noise model with an anisotropic regulariser in the spirit of anisotropic diffusion. In order to explore the flexibility of the variational approach, an extension using an additional total variation regulariser is also studied. The arising optimisation problems can be tackled by efficient recent algorithms. Our experimental results confirm the high quality obtained by our approach. Georg Radow, Michael Breuß, Laurent Hoeltgen, Thomas Fischer, “Optimised Anisotropic Poisson Denoising”, in Image Analysis, pp. 502--514, 2017, Springer International Publishing, 10.1007/978-3-319-59126-1_42

Abstract: A major task in non-rigid shape analysis is to retrieve correspondences between two almost isometric 3D objects. Important tools for this task are geometric feature descriptors. Ideally, a feature descriptor should be invariant under isometric transformations and robust to small elastic deformations. A successful class of feature descriptors employs the spectral decomposition of the Laplace-Beltrami operator. Important examples are the heat kernel signature using the heat equation and the more recent wave kernel signature applying the Schrödinger equation from quantum mechanics. In this work we propose a novel feature descriptor which is based on the classic wave equation that describes, e.g., sound wave propagation.
We explore this new model by discretising the underlying partial differential equation. Thereby we consider two different time integration methods. In a detailed evaluation on a standard shape data set we demonstrate that our approach may yield significant improvements over state-of-the-art methods for finding correct shape correspondences. Robert Dachsel, Michael Breuß, Laurent Hoeltgen, “The Classic Wave Equation Can Do Shape Correspondence”, in Computer Analysis of Images and Patterns, pp. 264--275, 2017, Springer International Publishing, 10.1007/978-3-319-64689-3_22

Abstract: Estimating the shape and appearance of a three dimensional object from flat images is a challenging research topic that is still actively pursued. Among the various techniques available, Photometric Stereo is known to provide very accurate local shape recovery, in terms of surface normals. In this work, we propose to minimise non-convex variational models for Photometric Stereo that recover the depth information directly. We suggest an approach based on a novel optimisation scheme for non-convex cost functions. Experiments show that our strategy achieves more accurate results than competing approaches. Laurent Hoeltgen, Yvain Quéau, Michael Breuß, Georg Radow, “Optimised photometric stereo via non-convex variational minimisation”, in Proceedings of the British Machine Vision Conference 2016, pp. 36.1--36.12, 2016, British Machine Vision Association Press, available online: http://www.bmva.org/bmvc/2016/papers/paper036/index.html, 10.5244/C.30.36

Abstract: The main task in three dimensional non-rigid shape correspondence is to retrieve similarities between two or more similar three dimensional objects. A useful way to tackle this problem is to construct a simplified shape representation, called a feature descriptor, which is invariant under deformable transformations.
A successful class of such feature descriptors is based on physical phenomena, concretely the heat equation for the heat kernel signature and the Schrödinger equation for the wave kernel signature. Both approaches employ the spectral decomposition of the Laplace-Beltrami operator, meaning that solutions of the corresponding equations are expressed by a series expansion in terms of eigenfunctions. The feature descriptor is then computed from those solutions. In this paper we explore the influence of the number of eigenfunctions used on shape correspondence applications, as this is a crucial point with respect to accuracy and overall computational efficiency of the method. Our experimental study is performed on a standard shape data set. Robert Dachsel, Michael Breuß, Laurent Hoeltgen, “A Study of Spectral Expansion for Shape Correspondence”, in Proceedings of the OAGM Workshop 2018, pp. 73--79, 2018, Verlag der Technischen Universität Graz, 10.3217/978-3-85125-603-1-15

Abstract: The Euler-Lagrange framework and splitting based methods are among the most popular approaches to solve variational optic flow problems. These methods are commonly embedded in a coarse-to-fine strategy to be able to handle large displacements. While the use of a denoising filter in between the warping steps is an important tool for splitting based approaches, such a practice is rather uncommon for the Euler-Lagrange method. The question arises why there is this surprising difference between optic flow methods. In previous works it has also been stated that the use of such a filtering leads to a modification of the underlying energy functional; thus, there seems to be a difference in the energies that are actually minimised depending on the chosen algorithmic approach. The goal of this paper is to address these fundamental issues. In a detailed numerical study we show in which way a filtering affects the evolution of the energy for the above mentioned frameworks.
Doing so, we not only give many new insights on the use of filtering steps, we also bridge an important methodical gap between the two commonly used implementation approaches. Laurent Hoeltgen, Simon Setzer, Michael Breuß, “Intermediate Flow Field Filtering in Energy Based Optic Flow Computations”, in Energy Minimization Methods in Computer Vision and Pattern Recognition, Y. Boykov, F. Kahl, V. Lempitsky, F. Schmidt (eds.), pp. 315--328, 2011, Springer Berlin Heidelberg, 10.1007/978-3-642-23094-3_23

Abstract: For inpainting with linear partial differential equations (PDEs) such as homogeneous or biharmonic diffusion, sophisticated data optimisation strategies have been found recently. These allow high-quality reconstructions from sparse known data. While they have been explicitly developed with compression in mind, they have not entered actual codecs so far: storing these optimised data efficiently is a nontrivial task. Since this step is essential for any competitive codec, we propose two new compression frameworks for linear PDEs: efficient storage of pixel locations obtained from an optimal control approach, and a stochastic strategy for a locally adaptive, tree-based grid. Surprisingly, our experiments show that homogeneous diffusion inpainting can surpass its often favoured biharmonic counterpart in compression. Last but not least, we demonstrate that our linear approach is able to beat both JPEG2000 and the nonlinear state-of-the-art in PDE-based image compression. Pascal Peter, Sebastian Hoffmann, Frank Nedwed, Laurent Hoeltgen, Joachim Weickert, “From Optimised Inpainting with Linear PDEs Towards Competitive Image Compression Codecs”, in Image and Video Technology, T. Bräunl, B. McCane, M. Rivers, X. Yu (eds.), pp. 63--74, 2016, Springer International Publishing Switzerland, 10.1007/978-3-319-29451-3

Abstract: Osher and his colleagues introduced Bregman iterations in image processing in 2005.
This technique is known to yield excellent results for denoising/deblurring and compressed sensing tasks but it has so far been rarely used for other image processing problems. Some of the assets of the Bregman framework are its high flexibility and the existence of a thorough convergence theory. In this thesis we adapt the split Bregman iteration, originally developed by Goldstein and Osher, to the optical flow problem. The versatility of the Bregman framework allows us to present a general approach to solve variational formulations with modern data terms incorporating higher order constancy assumptions as well as discontinuity preserving smoothness terms such as the popular total variation regulariser. Several models will be analysed, and for each one a detailed algorithm based on the split Bregman iteration will be presented. Finally, we will analyse the theoretical properties of the Bregman iteration. The most interesting questions such as convergence and error estimation will be treated in detail, thus providing a solid mathematical basis for further research. Laurent Hoeltgen, Bregman Iteration for Optical Flow, Master Thesis, 2010, Saarland University, Faculty 6.1 - Mathematics, Saarbrücken, Germany

Abstract: This work analyses several approaches for determining optimal sparse data sets for image reconstructions by means of linear homogeneous diffusion. Two optimisation strategies for finding optimal data locations are presented. The first one stands out for its simplicity and is based on results from spline interpolation theory. However, this approach can only be applied to one dimensional strictly convex and differentiable functions. Due to these restrictions we derive an alternative approach which uses findings from optimal control theory. This new algorithm can be applied to arbitrary signals. Both approaches are analysed for their convergence behaviour. Further, we discuss the problem of selecting good data values for fixed data positions.
This problem can be analysed as a least squares problem. An important relationship between the optimal data locations and the data values is derived and we present efficient numerical schemes to obtain these values. Finally, we present an image compression approach based on the findings from this work. Experiments show that it is possible to outperform popular compression algorithms. Laurent Hoeltgen, Optimal interpolation data for image reconstructions, PhD Thesis, 2014, Saarland University, Faculty 6.1 - Mathematics, Saarbrücken, Germany

Abstract: Bregman iterations are known to yield excellent results for denoising, deblurring and compressed sensing tasks, but so far this technique has rarely been used for other image processing problems. In this paper we give a thorough description of the Bregman iteration, thereby unifying results of different authors within a common framework. Then we show how to adapt the split Bregman iteration, originally developed by Goldstein and Osher for image restoration purposes, to optical flow, which is a fundamental correspondence problem in computer vision. We consider some classic and modern optical flow models and present detailed algorithms that exhibit the benefits of the Bregman iteration. By making use of the results of the Bregman framework, we address the issues of convergence and error estimation for the algorithms. Numerical examples complement the theoretical part. Laurent Hoeltgen, Michael Breuß, “Bregman Iteration for Correspondence Problems: A Study of Optical Flow”, 2015

Abstract: Some recent methods for lossy signal and image compression store only a few selected pixels and fill in the missing structures by inpainting with a partial differential equation (PDE). Suitable operators include the Laplacian, the biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The quality of such approaches depends substantially on the selection of the data that is kept.
Optimising this data in the domain and codomain gives rise to challenging mathematical problems that shall be addressed in our work. In the 1D case, we prove results that provide insights into the difficulty of this problem, and we give evidence that a splitting into spatial and tonal (i.e. function value) optimisation hardly deteriorates the results. In the 2D setting, we present generic algorithms that achieve a high reconstruction quality even if the specified data is very sparse. To optimise the spatial data, we use a probabilistic sparsification, followed by a nonlocal pixel exchange that avoids getting trapped in bad local optima. After this spatial optimisation we perform a tonal optimisation that modifies the function values in order to reduce the global reconstruction error. For homogeneous diffusion inpainting, this comes down to a least squares problem for which we prove that it has a unique solution. We demonstrate that it can be found efficiently with a gradient descent approach that is accelerated with fast explicit diffusion (FED) cycles. Our framework allows us to specify the desired density of the inpainting mask a priori. Moreover, it is more generic than other data optimisation approaches for the sparse inpainting problem, since it can also be extended to nonlinear inpainting operators such as EED. This is exploited to achieve reconstructions with state-of-the-art quality. We also give an extensive literature survey on PDE-based image compression methods. Laurent Hoeltgen, Markus Mainberger, Sebastian Hoffmann, Joachim Weickert, Ching Tang, Simon Setzer, Daniel Johannsen, Frank Neumann, Benjamin Doerr, “Optimising Spatial and Tonal Data for PDE-based Inpainting”, 2015

Abstract: Partial differential equations (PDEs) are able to reconstruct images accurately from a small fraction of their image points.
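To give non-experts a feel for what reconstruction from a few known points means, here is a minimal 1-D homogeneous diffusion inpainting sketch. It is a toy illustration only, not any of the codecs described on this page, and the function name is invented.

```python
import numpy as np

def diffusion_inpaint_1d(signal, mask, num_iters=5000):
    """Reconstruct a 1-D signal from the samples where mask is True.

    Unknown samples are filled by homogeneous diffusion: the Jacobi
    update u_i <- (u_{i-1} + u_{i+1}) / 2 for the discrete Laplace
    equation, with the known samples held fixed. In 1-D this converges
    to linear interpolation between the known samples.
    """
    u = np.where(mask, signal, 0.0).astype(float)
    for _ in range(num_iters):
        avg = np.empty_like(u)
        avg[1:-1] = 0.5 * (u[:-2] + u[2:])
        avg[0], avg[-1] = u[1], u[-2]      # simple reflecting ends
        u = np.where(mask, signal, avg)    # known samples stay fixed
    return u

# Keep only the two endpoint samples and let diffusion fill the gap.
signal = np.array([1.0, 0.0, 0.0, 0.0, 5.0])
mask = np.array([True, False, False, False, True])
recon = diffusion_inpaint_1d(signal, mask)  # converges to [1, 2, 3, 4, 5]
```

In 2D the same principle applies with the five-point Laplacian stencil, and the art, as the abstracts above describe, lies in choosing which few pixels to keep.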
The inpainting capabilities of sophisticated anisotropic PDEs allow compression codecs with suboptimal data selection approaches to compete with transform-based methods like JPEG2000. For simple linear PDEs, optimal known data can be found with powerful optimisation strategies. However, the potential of these linear methods for compression has not yet been determined. As a remedy, we present a compression framework with homogeneous, biharmonic, and edge-enhancing diffusion (EED) that supports different strategies for data selection and storage: on the one hand, we find exact masks with optimal control or stochastic sparsification and store them with a combination of PAQ and block coding. On the other hand, we propose a new probabilistic strategy for the selection of suboptimal known data that can be efficiently stored with binary trees and entropy coding. This new framework enables a detailed analysis of the strengths and weaknesses of the three PDEs. Our investigation leads to surprising results: at low compression rates, the simple harmonic diffusion can surpass its more sophisticated PDE-based competitors and even JPEG2000. For high compression rates, we find that EED yields the best result due to its robust inpainting performance under suboptimal conditions. Pascal Peter, Sebastian Hoffmann, Frank Nedwed, Laurent Hoeltgen, Joachim Weickert, “Evaluating the True Potential of Diffusion Based Inpainting in a Compression Context”, 2016, available online: http://www.mia.uni-saarland.de/Publications/peter-pp373.pdf

Abstract: Partial differential equations (PDEs) are able to reconstruct images accurately from a small fraction of their image points. The inpainting capabilities of sophisticated anisotropic PDEs allow compression codecs with suboptimal data selection approaches to compete with transform-based methods like JPEG2000. For simple linear PDEs, optimal known data can be found with powerful optimisation strategies.
However, the potential of these linear methods for compression has not yet been determined. As a remedy, we present a compression framework with homogeneous, biharmonic, and edge-enhancing diffusion (EED) that supports different strategies for data selection and storage: on the one hand, we find exact masks with optimal control or stochastic sparsification and store them with a combination of PAQ and block coding. On the other hand, we propose a new probabilistic strategy for the selection of suboptimal known data that can be efficiently stored with binary trees and entropy coding. This new framework enables a detailed analysis of the strengths and weaknesses of the three PDEs. Our investigation leads to surprising results: at low compression rates, the simple harmonic diffusion can surpass its more sophisticated PDE-based competitors and even JPEG2000. For high compression rates, we find that EED yields the best result due to its robust inpainting performance under suboptimal conditions. Pascal Peter, Sebastian Hoffmann, Frank Nedwed, Laurent Hoeltgen, Joachim Weickert, “Evaluating the true potential of diffusion-based inpainting in a compression context”, in Signal Processing: Image Communication, 46, pp. 40--53, 2016, 10.1016/j.image.2016.05.002

Abstract: We present a strategy for the recovery of a sparse solution of a common problem in acoustic engineering, which is the reconstruction of sound source levels and locations applying microphone array measurements. The considered task bears similarities to the basis pursuit formalism but also relies on additional model assumptions that are challenging from a mathematical point of view. Our approach reformulates the original task as a convex optimisation model. The sought solution shall be a matrix with a certain desired structure. We enforce this structure through additional constraints.
By combining popular splitting algorithms and matrix differential theory in a novel framework we obtain a numerically efficient strategy. Besides a thorough theoretical consideration we also provide an experimental setup that certifies the usability of our strategy. Finally, we also address practical issues, such as the handling of inaccuracies in the measurement and corruption of the given data. We provide a post-processing step that is capable of yielding an almost perfect solution in such circumstances. Laurent Hoeltgen, Michael Breuß, Gert Herold, Ennes Sarradj, “Sparse $\ell_1$ Regularisation of Matrix Valued Models for Acoustic Source Characterisation”, 2016

Abstract: This paper presents a novel approach to distinguish driving styles with respect to their energy efficiency. A distinct property of our method is that it relies exclusively on Global Positioning System (GPS) logs of drivers. This setting is highly relevant in practice as these data can easily be acquired. Relying on positional data alone means that all derived features will be correlated, so we strive to find a single quantity that allows us to perform the driving style analysis. To this end we consider a robust variation of the so-called jerk of a movement. We show that our feature choice outperforms other more commonly used jerk-based formulations and we discuss the handling of noisy, inconsistent, and incomplete data as this is a notorious problem when dealing with real-world GPS logs. Our solving strategy relies on an agglomerative hierarchical clustering combined with an L-term heuristic to determine the relevant number of clusters. It can easily be implemented and runs fast, even on very large, real-world data sets. Experiments show that our approach is robust against noise and able to discern different driving styles.
Michael Breuß, Laurent Hoeltgen, Ali Boroujerdi, Ashkan Yarahmadi, “Highly Robust Clustering of GPS Driver Data for Energy Efficient Driving Style Modelling”, 2016

Abstract: We present a strategy for the recovery of a sparse solution of a common problem in acoustic engineering, which is the reconstruction of sound source levels and locations applying microphone array measurements. The considered task bears similarities to the basis pursuit formalism but also relies on additional model assumptions that are challenging from a mathematical point of view. Our approach reformulates the original task as a convex optimisation model. The sought solution shall be a matrix with a certain desired structure. We enforce this structure through additional constraints. By combining popular splitting algorithms and matrix differential theory in a novel framework we obtain a numerically efficient strategy. Besides a thorough theoretical consideration we also provide an experimental setup that certifies the usability of our strategy. Finally, we also address practical issues, such as the handling of inaccuracies in the measurement and corruption of the given data. We provide a post-processing step that is capable of yielding an almost perfect solution in such circumstances. Laurent Hoeltgen, Michael Breuß, Gert Herold, Ennes Sarradj, “Sparse $\ell_1$ Regularisation of Matrix Valued Models for Acoustic Source Characterisation”, in Optimization and Engineering, 19, pp. 39--70, 2017, 10.1007/s11081-017-9357-2

Abstract: Optimal known pixel data for inpainting in compression codecs based on partial differential equations is real-valued and thereby expensive to store. Thus, quantisation is required for efficient encoding. In this paper, we interpret the quantisation step as a clustering problem.
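The clustering view of quantisation can be sketched with a plain k-means (Lloyd) iteration on grey values. This is a hedged toy example, not the authors' actual pipeline, and the function name is invented.

```python
import numpy as np

def quantise_grey_values(values, k=4, num_iters=50, seed=0):
    """Toy k-means (Lloyd iteration) on a 1-D set of grey values.

    Clusters the stored pixel values into k representatives and replaces
    every value by its cluster centre, mimicking a codomain quantisation
    step. Purely illustrative: a real codec would also account for the
    spatial influence of each mask pixel.
    """
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialise the centroids with k samples drawn from the data.
    centroids = rng.choice(values, size=k, replace=False)
    for _ in range(num_iters):
        # Assign each value to its nearest centroid.
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        # Move every non-empty cluster's centroid to the cluster mean.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return centroids[labels]

grey = np.array([0.10, 0.11, 0.50, 0.52, 0.90, 0.92])
quantised = quantise_grey_values(grey, k=3)   # at most 3 distinct values remain
```

After such a step only the k representatives and one cluster index per pixel need to be stored, which is where the coding-cost trade-off discussed in these papers arises.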
Due to the global impact of each known pixel and correlations between spatial and tonal data, we investigate the central question of which kind of feature vectors should be used for clustering with popular strategies such as k-means. Our findings show that the number of colours can be reduced significantly without impacting the reconstruction quality. Surprisingly, these benefits are negated by an increased coding cost in compression applications. Laurent Hoeltgen, Pascal Peter, Michael Breuß, “Clustering-Based Quantisation for PDE-Based Image Compression”, 2017

Abstract: Optimal known pixel data for inpainting in compression codecs based on partial differential equations is real-valued and thereby expensive to store. Thus, quantisation is required for efficient encoding. In this paper, we interpret the quantisation step as a clustering problem. Due to the global impact of each known pixel and correlations between spatial and tonal data, we investigate the central question of which kind of feature vectors should be used for clustering with popular strategies such as k-means. Our findings show that the number of colours can be reduced significantly without impacting the reconstruction quality. Surprisingly, these benefits are negated by an increased coding cost in compression applications. Laurent Hoeltgen, Pascal Peter, Michael Breuß, “Clustering-based quantisation for PDE-based image compression”, in Signal, Image and Video Processing, 12, pp. 411--419, 2018, 10.1007/s11760-017-1173-9

Abstract: Laplace interpolation is a popular approach in image inpainting using partial differential equations. The classic approach considers the Laplace equation with mixed boundary conditions. Recently a more general formulation has been proposed, where the differential operator consists of a point-wise convex combination of the Laplacian and the known image data. We provide the first detailed analysis of existence and uniqueness of solutions for the arising mixed boundary value problem.
Our approach considers the corresponding weak formulation and aims at using the Lax-Milgram theorem to assert the existence of a solution. To this end, we have to resort to weighted Sobolev spaces. Our analysis shows that solutions do not exist unconditionally: the weights need some regularity and must fulfil certain growth conditions. The results from this work complement findings which were previously only available for a discrete setup.

Laurent Hoeltgen, Andreas Kleefeld, Isaac Harris, Michael Breuß, “Theoretical Foundation of the Weighted Laplace Inpainting Problem”, 2018

Abstract: Estimating the shape and appearance of a three-dimensional object from a given set of images is a classic research topic that is still actively pursued. Among the various techniques available, photometric stereo is distinguished by the assumption that the underlying input images are taken from the same point of view but under different lighting conditions. The most common techniques are conceptually close to the classic photometric stereo problem, meaning that the modelling encompasses a linearisation step and that the shape information is computed in terms of surface normals. In this work, instead of linearising, we aim to stick to the original formulation of the photometric stereo problem, and we propose to minimise a much more natural objective function, namely the reprojection error in terms of depth. Minimising the resulting non-trivial variational model for photometric stereo allows us to recover the depth of the photographed scene directly. As a solving strategy, we follow an approach based on a recently published optimisation scheme for non-convex and non-smooth cost functions. The main contributions of our paper are of a theoretical nature. A technical novelty in our framework is the use of matrix differential calculus.
We supplement our approach with a detailed convergence analysis of the resulting optimisation algorithm and discuss possibilities to ease the computational complexity. Based on an experimental evaluation, we discuss important properties of the method. Overall, our strategy achieves more accurate results than other approaches that rely on the classic photometric stereo assumptions. The experiments also highlight some practical aspects of the underlying optimisation algorithm that may be of interest in a more general context.

Georg Radow, Laurent Hoeltgen, Yvain Quéau, Michael Breuß, “Optimisation of photometric stereo methods by non-convex variational minimisation”, 2017

Abstract: Partial differential equations have recently been used for image compression purposes. One of the most successful frameworks solves the Laplace equation using a weighting scheme to determine the importance of individual pixels. We provide a physical interpretation of this approach in terms of the Helmholtz equation which explains its superiority. For better reconstruction quality, we subsequently formulate an optimisation task for the corresponding finite difference discretisation to maximise the influence of the physical traits of the Helmholtz equation. Our findings show that sharper contrasts and lower errors in the reconstruction are possible.

Laurent Hoeltgen, “Understanding image inpainting with the help of the Helmholtz equation”, in Mathematical Sciences, 11, pp. 73--77, 2017, 10.1007/s40096-017-0207-3

Abstract: Estimating the shape and appearance of a three-dimensional object from a given set of images is a classic research topic that is still actively pursued. Among the various techniques available, photometric stereo is distinguished by the assumption that the underlying input images are taken from the same point of view but under different lighting conditions.
The most common techniques are conceptually close to the classic photometric stereo problem, meaning that the modelling encompasses a linearisation step and that the shape information is computed in terms of surface normals. In this work, instead of linearising, we aim to stick to the original formulation of the photometric stereo problem, and we propose to minimise a much more natural objective function, namely the reprojection error in terms of depth. Minimising the resulting non-trivial variational model for photometric stereo allows us to recover the depth of the photographed scene directly. As a solving strategy, we follow an approach based on a recently published optimisation scheme for non-convex and non-smooth cost functions. The main contributions of our paper are of a theoretical nature. A technical novelty in our framework is the use of matrix differential calculus. We supplement our approach with a detailed convergence analysis of the resulting optimisation algorithm and discuss possibilities to ease the computational complexity. Based on an experimental evaluation, we discuss important properties of the method. Overall, our strategy achieves more accurate results than other approaches that rely on the classic photometric stereo assumptions. The experiments also highlight some practical aspects of the underlying optimisation algorithm that may be of interest in a more general context.

Georg Radow, Laurent Hoeltgen, Yvain Quéau, Michael Breuß, “Optimisation of Photometric Stereo Methods by Non-convex Variational Minimisation”, in Journal of Mathematical Imaging and Vision, 61, pp. 84--105, 2019, 10.1007/s10851-018-0828-7

Abstract: Laplace interpolation is a popular approach in image inpainting using partial differential equations. The classic approach considers the Laplace equation with mixed boundary conditions.
Recently, a more general formulation has been proposed, where the differential operator consists of a point-wise convex combination of the Laplacian and the known image data. We provide the first detailed analysis of the existence and uniqueness of solutions for the arising mixed boundary value problem. Our approach considers the corresponding weak formulation and aims at using the Lax-Milgram theorem to assert the existence of a solution. To this end, we have to resort to weighted Sobolev spaces. Our analysis shows that solutions do not exist unconditionally: the weights need some regularity and must fulfil certain growth conditions. The results from this work complement findings which were previously only available for a discrete setup.

Laurent Hoeltgen, Andreas Kleefeld, Isaac Harris, Michael Breuß, “Theoretical Foundation of the Weighted Laplace Inpainting Problem”, in Applications of Mathematics, 64, pp. 281--300, 2019, 10.21136/AM.2019.0206-18

Abstract: Lossy image compression methods based on partial differential equations have received much attention in recent years. They may yield high-quality results but rely on the computationally expensive task of finding optimal data. For a possible extension to video compression, the data selection is a crucial issue. In this context, one could either analyse the video sequence as a whole or perform a frame-by-frame optimisation strategy. Both approaches are prohibitive in terms of memory and run time. In this work, we propose to restrict the expensive computation of optimal data to a single frame and to approximate the optimal reconstruction data for the remaining frames by prolongating it by means of an optic flow field. We achieve a notable decrease in the computational complexity. As a proof of concept, we evaluate the proposed approach for multiple sequences with different characteristics. We show that the method preserves a reasonable quality in the reconstruction and is very robust against errors in the flow field.
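The prolongation step described in the abstract above, transporting the stored mask pixels of one frame to the next along a dense optic flow field, can be sketched in a few lines. The following is a minimal illustration under my own assumptions, not the authors' implementation; the name `prolongate_mask` and the `(dy, dx)` flow layout are hypothetical:

```python
import numpy as np

def prolongate_mask(mask, flow):
    """Advect a binary inpainting mask along a dense optic flow field.

    mask: (H, W) boolean array marking the stored pixels of one frame.
    flow: (H, W, 2) array of per-pixel displacements (dy, dx) towards
          the next frame. Target positions are rounded to the nearest
          pixel; points that leave the image domain are discarded.
    """
    h, w = mask.shape
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    ny = np.rint(ys + flow[ys, xs, 0]).astype(int)
    nx = np.rint(xs + flow[ys, xs, 1]).astype(int)
    keep = (ny >= 0) & (ny < h) & (nx >= 0) & (nx < w)
    out[ny[keep], nx[keep]] = True
    return out

# A uniform shift of one pixel to the right moves every stored pixel along.
m = np.zeros((4, 4), dtype=bool)
m[1, 1] = True
f = np.zeros((4, 4, 2))
f[..., 1] = 1.0  # dx = 1 everywhere
print(prolongate_mask(m, f)[1, 2])  # True
```

Rounding to the nearest pixel keeps the prolongated mask binary; collisions (two mask points landing on the same pixel) simply merge, which slightly shrinks the mask over long sequences.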
Laurent Hoeltgen, Michael Breuß, Georg Radow, “Towards PDE-Based Video Compression with Optimal Masks and Optic Flow”, in Lecture Notes in Computer Science, pp. 79--91, 2019, Springer International Publishing, 10.1007/978-3-030-22368-7_7

Abstract: We present the basic definitions and properties of finite fields. We prove that there exist finite fields with $p$ elements whenever $p$ is a prime number. From this we then conclude that if a field is finite, it must have $p^n$ elements, where $p$ is again a prime number and $n$ a natural number. Subsequently, we show how to determine all subfields. Finally, the presented theory is illustrated with several examples.

Laurent Hoeltgen, Konstruktion und Struktur endlicher Körper, Bachelor's Thesis, 2008, University of Luxembourg

Abstract: Lossy image compression methods based on partial differential equations have received much attention in recent years. They may yield high-quality results but rely on the computationally expensive task of finding an optimal selection of data. For a possible extension to video compression, this data selection is a crucial issue. In this context, one could either analyse the video sequence as a whole or perform a frame-by-frame optimisation strategy. Both approaches are prohibitive in terms of memory and run time. In this work, we propose to restrict the expensive computation of optimal data to a single frame and to approximate the optimal reconstruction data for the remaining frames by prolongating it by means of an optic flow field. In this way, we achieve a notable decrease in the computational complexity. As a proof of concept, we evaluate the proposed approach for multiple sequences with different characteristics. In doing so, we discuss in detail the influence of possible computational setups.
We show that the approach preserves a reasonable quality in the reconstruction and is very robust against errors in the flow field.

Michael Breuß, Laurent Hoeltgen, Georg Radow, “Towards PDE-Based Video Compression with Optimal Masks Prolongated by Optic Flow”, in Journal of Mathematical Imaging and Vision, 63, pp. 144--156, 2020, 10.1007/s10851-020-00973-6
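The reconstruction step shared by the PDE-based codecs above, homogeneous diffusion (Laplace) inpainting, reduces in the discrete setting to a sparse linear system: each unknown pixel satisfies a five-point Laplacian stencil, while each stored mask pixel keeps its value. A minimal sketch of this standard construction (not the implementation from any of the papers; the name `laplace_inpaint` and the reflecting boundary handling are my assumptions):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplace_inpaint(f, mask):
    """Reconstruct an image from the pixels marked in `mask` by solving
    the discrete Laplace equation on the unknown pixels (homogeneous
    diffusion inpainting with reflecting boundaries). Only the masked
    entries of `f` are read."""
    h, w = f.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    b = np.zeros(n)
    for y in range(h):
        for x in range(w):
            i = idx[y, x]
            if mask[y, x]:
                # Known pixel: u_i = f_i.
                rows.append(i); cols.append(i); vals.append(1.0)
                b[i] = f[y, x]
            else:
                # Unknown pixel: sum over existing neighbours of
                # (u_i - u_j) = 0, i.e. the discrete Laplacian vanishes.
                nbrs = [idx[y2, x2] for y2, x2 in
                        ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= y2 < h and 0 <= x2 < w]
                rows.append(i); cols.append(i); vals.append(float(len(nbrs)))
                for j in nbrs:
                    rows.append(i); cols.append(j); vals.append(-1.0)
    A = sp.csc_matrix((vals, (rows, cols)), shape=(n, n))
    return spla.spsolve(A, b).reshape(h, w)

# Storing only two columns (values 0 and 1) reconstructs a linear ramp.
f = np.tile(np.linspace(0.0, 1.0, 5), (3, 1))
m = np.zeros((3, 5), dtype=bool)
m[:, 0] = m[:, -1] = True
u = laplace_inpaint(f, m)  # each row of u is approximately [0, 0.25, 0.5, 0.75, 1]
```

The loop-based assembly is written for clarity; for realistic image sizes one would assemble the Laplacian vectorised and use an iterative solver such as conjugate gradients.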