An inverse problem arises whenever one seeks the cause of observed physical phenomena or observational data, e.g., inferring the governing law from the measurements. This task essentially underlies all scientific discoveries and technological innovations. Thus, the mathematical theory and computational techniques for solving inverse problems are central, e.g., in physics, astronomy, medicine, engineering, and life sciences, and the field has evolved into a highly interdisciplinary research area.
Inverse problems are usually ill-posed in the sense that the sought-for solution may fail to exist, may be non-unique, or may be unstable with respect to data perturbations. Since noise is inherent in the observational data, numerical algorithms have to employ specialized techniques, commonly known as regularization. The corresponding mathematical framework, in the form of regularization theory, is highly developed, thanks to the pioneering works of A. Tikhonov in the 1960s, H. Engl et al. from the 1980s, and many other researchers. This theory has played a vital role in many research areas, and the related numerical algorithms have also been intensively investigated. One versatile framework is to minimize an objective function measuring the quality of fit between the model output and the observational data, possibly augmented by a penalty term, and it covers a large class of powerful iterative inversion techniques.
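For illustration, and with notation introduced here purely for exposition, such a variational formulation may be written as
\[
  \min_{x}\; \tfrac{1}{2}\,\|F(x) - y^{\delta}\|^{2} + \alpha\, R(x),
\]
where $F$ denotes the forward operator mapping the unknown $x$ to the observable quantities, $y^{\delta}$ the noisy data, $R$ a penalty term encoding a priori knowledge about $x$, and $\alpha > 0$ a regularization parameter balancing data fidelity against the penalty.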
Due to unprecedented advances in data acquisition technologies, large datasets are becoming commonplace in many practical inverse problems. Prominent examples in medical imaging include dynamic, multispectral, multi-energy or multi-frequency data in computed tomography and optical tomography. The ever-increasing volume of available data poses enormous computational challenges to image reconstruction: traditional iterative methods can be too expensive to apply, and this currently represents one of the bottlenecks in extracting useful information from massive datasets. The challenge is especially severe for problems involving complex physical models, where each data set is very expensive to simulate.
The proposed research aims at addressing the aforementioned computational challenge using stochastic iterative techniques developed within the machine learning community, and at providing the relevant theoretical underpinnings. The central idea of stochastic iterative methods is that at each step only a (small) portion of the data set, instead of the full data set, is used to steer the progression of the iterates, which drastically reduces the computational cost per iteration. This idea has received enormous attention within the machine learning community and has achieved stunning success in deep learning in recent years; indeed, stochastic gradient descent and its variants are the workhorse behind many deep learning tasks. A successful completion of this project will greatly advance modern image reconstruction by providing a systematic mathematical and computational framework, including comprehensive theoretical underpinnings, novel algorithms and detailed studies on concrete inverse problems, e.g., in medical imaging.
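To fix ideas (again with purely illustrative notation), suppose the data fidelity decomposes over $n$ data blocks as $J(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$, with, e.g., $f_i(x) = \frac{1}{2}\|F_i(x) - y_i^{\delta}\|^{2}$ for the $i$-th block of measurements. A prototypical stochastic gradient iteration then reads
\[
  x_{k+1} = x_k - \eta_k\, \nabla f_{i_k}(x_k), \qquad i_k \text{ drawn uniformly from } \{1,\dots,n\},
\]
where $\eta_k > 0$ is the step size. Each step requires evaluating the forward model and gradient only for the single block $i_k$, rather than for all $n$ blocks, which is the source of the per-iteration savings.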