The emerging era of exascale computing, ushered in by the forthcoming generation of supercomputers, will provide both opportunities and challenges. The raw compute power of such high performance computing (HPC) hardware has the potential to revolutionize many areas of science and industry. However, new algorithms and software must be developed to ensure the potential of these novel HPC architectures is realized.
Computational imaging, where the goal is to recover images of interest from raw data acquired by some observational instrument, is one of the most widely encountered classes of problems in science and industry, with myriad applications across astronomy, medicine, planetary and climate science, computer graphics and virtual reality, geophysics, molecular biology, and beyond.
The rise of exascale computing, coupled with recent advances in instrumentation, is leading to novel and often huge datasets that, in principle, could be imaged for the first time in an interpretable manner at high fidelity. However, to unlock interpretable, high-fidelity imaging of such big data, novel methodological approaches, algorithms and software implementations are required; we will develop precisely these components as part of the Learned EXascale Computational Imaging (LEXCI) project.
Firstly, whereas traditional computational imaging algorithms are based on relatively simple hand-crafted prior models of images, in LEXCI we will learn appropriate image priors and physical instrument simulation models from data, leading to much more accurate representations. Our hybrid techniques will be guided by model-based approaches to ensure effectiveness, efficiency, generalizability and uncertainty quantification. Secondly, we will develop novel algorithmic structures that support highly parallelized and distributed implementations, for deployment across a wide range of modern HPC architectures. Thirdly, we will implement these algorithms in professional research software. The structure of our algorithms will allow not only computations but also memory and storage requirements to be distributed across multi-node architectures. We will develop a tiered parallelization approach targeting both large-scale distributed-memory parallelization, for distributing work across processors and co-processors, and light-weight data parallelism through vectorization or light-weight threads, for distributing work within individual processors and co-processors. This tiered approach will ensure the software can be used across the full range of modern HPC systems. Combined, these developments will provide a future computing paradigm to help usher in the era of exascale computational imaging.
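To make the hybrid learned/model-based approach concrete, one common way to express such a reconstruction (the notation here is illustrative, not a commitment to a specific LEXCI formulation) is as a variational problem in which a physical forward model is combined with a regularizer learned from data:

    x̂ = argmin_x ‖ y − Φ x ‖² + λ R_θ(x),

where y denotes the acquired data, Φ the (possibly learned) instrument simulation model, R_θ an image prior with parameters θ learned from training data, and λ a regularization weight. In this form the model-based structure of the optimization supports efficiency, generalizability and uncertainty quantification over the recovered image x̂, while the learned components capture image and instrument characteristics that hand-crafted models miss.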
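The sketch below illustrates the tiered parallelization pattern in a minimal, hedged form: it assumes an MPI-style distributed-memory layer (mpi4py is used purely for illustration) with vectorized NumPy operations standing in for the light-weight data-parallel tier. All names and the toy computation are hypothetical and not part of the LEXCI software itself.

# Illustrative sketch of tiered parallelization: an outer distributed-memory
# tier across MPI ranks and an inner vectorized, data-parallel tier within
# each rank. Names and the toy computation are hypothetical.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Outer tier: each rank holds only its own block of the measurements, so
# memory and storage as well as compute are distributed across nodes.
n_total = 1_000_000
local_n = n_total // size + (1 if rank < n_total % size else 0)
rng = np.random.default_rng(seed=rank)
local_data = rng.standard_normal(local_n)

# Inner tier: vectorized (SIMD-friendly) work on the local block, standing
# in for a local forward-model application or gradient evaluation.
local_grad = 2.0 * local_data

# Communication step: a reduction combines local contributions, as a
# distributed optimizer would do once per iteration.
global_sq_norm = comm.allreduce(float(np.dot(local_grad, local_grad)), op=MPI.SUM)

if rank == 0:
    print(f"global gradient norm: {np.sqrt(global_sq_norm):.3f}")

Run with, for example, mpirun -n 4 python sketch.py; the same two-tier structure scales from a single workstation to multi-node HPC systems, with co-processors slotting into the inner tier.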
The resulting computational imaging framework will have widespread application and will be applied to a number of diverse problems as part of the project, including radio interferometric imaging, magnetic resonance imaging, seismic imaging, computer graphics, and beyond. The software will be deployed on the latest HPC resources to evaluate its performance and to feed back to the community the computing lessons learned and techniques developed, so as to support the general advance of exascale computing.