Papers by Christoph Brune

PLOS ONE
Circulating tumor cells (CTCs) isolated from blood can be probed for the expression of treatment targets. Immunofluorescence is often used both for the enumeration of CTCs and for the determination of protein expression levels related to treatment targets. Accurate and reproducible assessment of such treatment target expression levels is essential for their use in the clinic. To enable this, an open-source image analysis program named ACCEPT was developed in the EU-FP7 CTCTrap and CANCER-ID programs. Here its application is shown on a retrospective cohort of 132 metastatic breast cancer patients whose blood samples were processed by CellSearch® and stained for HER-2 expression as an additional marker. Images were digitally stored and reviewers identified a total of 4084 CTCs. The HER-2 expression of these CTCs was determined from the thumbnail images by ACCEPT. 150 of these images were selected and sent to six independent investigators to score the HER-2 expression with and without ACCEPT. The concordance rate of the operators' scoring results for HER-2 on CTCs was 30% and increased to 51% with the ACCEPT tool. Automated assessment of HER-2 expression by ACCEPT on the 4084 CTCs of 132 patients showed 8 (6.1%) patients with all CTCs expressing HER-2, 14 (10.6%) patients with no CTCs expressing HER-2, and 110 (83.3%) patients with CTCs showing varying HER-2 expression levels. In total, 1576 CTCs were determined to be HER-2 positive. We conclude that the use of image analysis enables a more reproducible quantification of treatment targets on CTCs and leads the way to fully automated and reproducible approaches.

Physics in Medicine & Biology
Photoacoustic tomography is a hybrid imaging technique that combines high optical tissue contrast with high ultrasound resolution. Direct reconstruction methods such as filtered backprojection, time reversal and least squares suffer from curved-line artefacts and blurring, especially in the case of limited angles or strong noise. In recent years, there has been great interest in regularised iterative methods. These methods employ prior knowledge about the image to provide higher-quality reconstructions. However, easy comparisons between regularisers and their properties are limited, since many tomography implementations heavily rely on the specific regulariser chosen. To overcome this bottleneck, we present a modular reconstruction framework for photoacoustic tomography. It enables easy comparisons between regularisers with different properties, e.g. nonlinear, higher-order or directional. We solve the underlying minimisation problem with an efficient first-order primal-dual algorithm. Convergence rates are optimised by choosing an operator-dependent preconditioning strategy. Our reconstruction methods are tested on challenging 2D synthetic and experimental data sets. They outperform direct reconstruction approaches for strong noise levels and limited-angle measurements, offering immediate benefits in terms of acquisition time and quality. This work provides a basic platform for the investigation of future advanced regularisation methods in photoacoustic tomography.
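Schematically, the framework's underlying minimisation problem takes the following generic form (the symbols are illustrative assumptions rather than the paper's notation):

```latex
% A is the photoacoustic forward operator, f the measured pressure
% data, and R an interchangeable regulariser (nonlinear, higher-order
% or directional) with weight alpha > 0.
\min_{u \ge 0} \;\; \frac{1}{2} \, \lVert A u - f \rVert_2^2 \;+\; \alpha \, \mathcal{R}(u)
```

A first-order primal-dual algorithm of Chambolle-Pock type only touches the regulariser through its proximal map, which is what makes regularisers easy to swap in such a framework.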

Brain perfusion is of key importance for assessing brain function. Modern CT scanners can acquire perfusion maps of the cerebral parenchyma in vivo at submillimeter resolution. These perfusion maps give insights into the hemodynamics of the cerebral parenchyma and are critical, for example, for treatment decisions in acute stroke. However, the relations between acquisition parameters, tissue attenuation curves, and perfusion values are still poorly understood and cannot be unraveled by studies involving humans because of ethical concerns. We present a 4D CT digital phantom, specific to an individual human brain, to analyze these relations in a bottom-up fashion. Validation of the signal and noise components was based on 1,000 phantom simulations of imaging data from 20 patients. This framework was applied to quantitatively assess the relation between radiation dose and perfusion values, and to quantify the signal-to-noise ratios of penumbra regions of decreasing size in white and gray matter. This is the first 4D CT digital phantom that makes it possible to address such clinical questions without exposing the patient to additional radiation dose.
We propose a novel method to detect and correct drift in non-raster scanning probe microscopy. In conventional raster scanning, drift is usually corrected by subtracting a fitted polynomial from each scan line, but sample tilt or large topographic features can result in severe artifacts. Our method uses self-intersecting scan paths to distinguish drift from topographic features. Observing the height differences when passing the same position at different times enables the reconstruction of a continuous function of drift. We show that a small number of self-intersections is adequate for automatic and reliable drift correction. Additionally, we introduce a fitness function which provides a quantitative measure of drift correctability for any arbitrary scan shape.
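As a minimal sketch of this idea (a polynomial drift model and all names below are assumptions for illustration, not the authors' implementation), the drift can be recovered by least squares from the height mismatches observed at the self-intersections:

```python
# Sketch: estimate a smooth drift g(t) from the height differences seen
# when a self-intersecting scan path revisits the same (x, y) position.
import numpy as np

def fit_drift(t_first, t_second, height_diff, degree=3):
    """Least-squares fit of coefficients c with g(t) = sum_k c_k t^k,
    matching g(t_second) - g(t_first) to the observed height_diff."""
    powers = np.arange(1, degree + 1)   # the constant term cancels out
    B = t_second[:, None] ** powers - t_first[:, None] ** powers
    c, *_ = np.linalg.lstsq(B, height_diff, rcond=None)
    return lambda t: (t[:, None] ** powers) @ c  # continuous drift estimate

# Toy usage: quadratic drift observed at 20 self-intersections.
rng = np.random.default_rng(0)
t1, t2 = rng.uniform(0, 1, 20), rng.uniform(0, 1, 20)
true_drift = lambda t: 5.0 * t ** 2
g = fit_drift(t1, t2, true_drift(t2) - true_drift(t1), degree=2)
```

Note that an absolute offset is unobservable from height differences alone, so only drift relative to the scan start can be reconstructed.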

Nanotechnology, 2013
Scanning probe microscopy (SPM) has facilitated many scientific discoveries through its strengths of spatial resolution, non-destructive characterization and realistic in situ environments. However, accurate spatial data are required for quantitative applications, and this is challenging for SPM, especially when imaging at higher frame rates. We present a new operation mode for scanning probe microscopy that uses advanced image processing techniques to render accurate images based on position sensor data. This technique, which we call sensor inpainting, frees the scanner from having to be at a specific location at a given time. This drastically reduces the engineering effort of position control and enables the use of scan waveforms that are better suited to the high-inertia nanopositioners of SPM. While in raster scanning typically only the trace or retrace image is used for display, in Archimedean spiral scans 100% of the data can be displayed, and at least a two-fold increase in temporal or spatial resolution is achieved. In the new mode, the grid size of the final generated image is an independent variable. Inpainting to a few times more pixels than samples creates images that more accurately represent the ground truth.
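A much-simplified sketch of the gridding step (the paper's inpainting is more advanced; scipy's scattered-data interpolation stands in here, and the spiral parameters are invented for illustration):

```python
# Render an image on a regular grid from (x, y, height) samples taken
# along an Archimedean spiral scan path r = a * theta.
import numpy as np
from scipy.interpolate import griddata

theta = np.linspace(0, 20 * np.pi, 4000)
x, y = 0.05 * theta * np.cos(theta), 0.05 * theta * np.sin(theta)
z = np.sin(3 * x) * np.cos(3 * y)      # stand-in topography samples

# The output grid size is an independent variable: here we "inpaint"
# to many more pixels (256 x 256) than there are samples (4000).
n = 256
gx, gy = np.meshgrid(np.linspace(-3, 3, n), np.linspace(-3, 3, n))
image = griddata((x, y), z, (gx, gy), method='cubic')  # NaN outside the hull
```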

This thesis contributes to the fields of mathematical image processing and inverse problems. An inverse problem is a task where the values of some model parameters must be computed from observed data. Such problems arise in a wide variety of applications in science and engineering, such as medical imaging, biophysics or astronomy. We mainly consider reconstruction problems with Poisson noise in tomography and optical nanoscopy. In the latter case, the task is to reconstruct images from blurred and noisy measurements, whereas in positron emission tomography the task is to visualize physiological processes of a patient. Standard methods for 3D static image reconstruction do not incorporate time-dependent information or dynamics, e.g. heart beat or breathing in tomography or cell motion in microscopy. This thesis is a treatise on models, analysis and efficient algorithms to solve 3D and 4D time-dependent inverse problems.

SIAM Journal on Imaging Sciences, 2009
In this paper, we propose a new optimization approach for the simultaneous computation of optical flow and edge detection therein. Instead of using an Ambrosio-Tortorelli type energy functional, we reformulate the optical flow problem as a multidimensional control problem. The optimal control problem is solved by discretization methods and large-scale optimization techniques. The edge detector can be built immediately from the control variables. We provide three series of numerical examples. The first shows that the mere presence of a gradient restriction has a regularizing effect, while the second demonstrates how to balance the regularizing effects of a term within the objective and the control restriction. The third series of numerical results is concerned with the direct evaluation of a TV-regularization term by introducing control variables with sign restrictions.
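Schematically, and with all symbols below assumed rather than taken from the paper, such a formulation couples the classical optical flow residual with a pointwise restriction on the flow gradient:

```latex
% I(x, t) is the image sequence and w the flow field; the bound c acts
% as a control, and the set where it is active serves as an edge
% indicator (a generic form, not the paper's exact control problem).
\min_{w,\,c} \;\; \int_\Omega \big( I_t + \nabla I \cdot w \big)^2 \, dx
\quad \text{s.t.} \quad |\nabla w(x)| \le c(x)
```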

Given a graph where vertices represent alternatives and arcs represent pairwise comparison data, the statistical ranking problem is to find a potential function, defined on the vertices, such that the gradient of the potential function agrees with the pairwise comparisons. Our goal in this paper is to develop a method for collecting data for which the least squares estimator for the ranking problem has maximal Fisher information. Our approach, based on experimental design, is to view data collection as a bi-level optimization problem where the inner problem is the ranking problem and the outer problem is to identify data which maximize the informativeness of the ranking. Under certain assumptions, the data collection problem decouples, reducing to a problem of finding multigraphs with large algebraic connectivity. This reduction of the data collection problem to graph-theoretic questions is one of the primary contributions of this work. As an application, we study the Yahoo! Movie user rating data set and demonstrate that the addition of a small number of well-chosen pairwise comparisons can significantly increase the Fisher informativeness of the ranking. As another application, we study the 2011-12 NCAA football schedule and propose schedules with the same number of games which are significantly more informative. Using spectral clustering methods to identify highly connected communities within the division, we argue that the NCAA could improve its notoriously poor rankings by simply scheduling more out-of-conference games.
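A minimal sketch of the inner ranking step and the graph quantity driving the outer design criterion (setup and names are assumptions, not the paper's code):

```python
# Vertices are alternatives; a comparison (i, j, y) says "j beats i by
# margin y". The ranking is a least-squares potential on the vertices.
import numpy as np

def rank_and_connectivity(n, comparisons):
    m = len(comparisons)
    B = np.zeros((m, n)); y = np.zeros(m)   # gradient (incidence) matrix
    for r, (i, j, margin) in enumerate(comparisons):
        B[r, j], B[r, i], y[r] = 1.0, -1.0, margin
    phi, *_ = np.linalg.lstsq(B, y, rcond=None)  # potential function
    phi -= phi.mean()                            # fix the gauge
    # Algebraic connectivity: second-smallest eigenvalue of the graph
    # Laplacian L = B^T B; larger values mean a more informative design.
    lam2 = np.linalg.eigvalsh(B.T @ B)[1]
    return phi, lam2

scores, lam2 = rank_and_connectivity(
    4, [(0, 1, 1.0), (1, 2, 0.5), (2, 3, 2.0), (3, 0, -3.5)])
```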

Lecture Notes in Computer Science, 2009
This paper deals with denoising of density images with bad Poisson statistics (low count rates), where the reconstruction of the major structures seems the only reasonable task. Obtaining the structures with sharp edges can also be a prerequisite for further processing, e.g. segmentation of objects. A variety of approaches exists in the case of Gaussian noise, but only a few in the Poisson case. We propose some total variation (TV) based regularization techniques adapted to the case of Poisson data, which we derive from approximations of logarithmic a posteriori probabilities. In order to guarantee sharp edges we avoid smoothing the total variation and use a dual approach for the numerical solution. We illustrate and test the feasibility of our approaches on data from positron emission tomography, namely reconstructions of cardiac structures with ¹⁸F-FDG and H₂¹⁵O tracers, respectively.
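For orientation, the logarithmic a posteriori probability leads, up to constants, to a Kullback-Leibler data term plus TV; the symbols below (K for the forward operator, f for the Poisson data) are assumptions consistent with the abstract rather than its literal notation:

```latex
% Poisson MAP estimate with TV prior: KL-type data fidelity plus total
% variation with regularization parameter alpha > 0.
\min_{u \ge 0} \;\; \int_\Omega \big( K u - f \, \log(K u) \big) \, dx \;+\; \alpha \, |u|_{\mathrm{TV}}
```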

2011 IEEE Nuclear Science Symposium Conference Record, 2011
We propose a method for reconstructing data from short-time positron emission tomography (PET) scans, i.e. data acquired over a short time period. In this case standard reconstruction methods deliver only unsatisfactory and noisy results. We incorporate a priori information directly into the reconstruction process via nonlinear variational methods. A promising approach is the so-called EMTV algorithm, where the negative log-likelihood functional, which is minimized in the expectation maximization (ML-EM) algorithm, is modified by adding a total variation (TV) term. To improve the results and to overcome the loss of contrast we extend the algorithm by an inverse scale space method using Bregman distances, to which we refer as the Bregman-EMTV algorithm. The methods are tested on short-time (5 and 30 seconds) FDG measurements of the thorax. We show that the EMTV approach can effectively reduce the noise, but still introduces oversmoothing, which is eliminated by the Bregman-EMTV method, yielding a reconstruction of comparable quality to the corresponding long-time (20 and 7 minutes) scan. This correction for the loss of contrast is necessary to obtain quantitative PET images.
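Schematically (following the abstract's description, not the paper's exact formulas), one EMTV sweep alternates an EM step with a weighted TV denoising half-step; the Bregman extension repeats this while adding back the lost residual:

```latex
% EM step followed by a weighted ROF half-step; K is the PET forward
% operator, f the measured counts. Weighting and symbols are assumed.
u^{k+\frac{1}{2}} = \frac{u^k}{K^* \mathbf{1}} \, K^*\!\left( \frac{f}{K u^k} \right),
\qquad
u^{k+1} = \arg\min_{u} \; \frac{1}{2} \int_\Omega \frac{\big(u - u^{k+\frac{1}{2}}\big)^2}{u^k} \, dx \;+\; \alpha \, |u|_{\mathrm{TV}}
```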

Lecture Notes in Computer Science, 2009
Measurements in nanoscopic imaging suffer from blurring effects governed by different point spread functions (PSFs), i.e. the data arise from a convolution $(k \ast u)(x) = \int_\Omega k(x-y)\,u(y)\,dy$ of the true image $u$ with a kernel $k$. Some apparatus even have PSFs that depend locally on phase shifts. Additionally, raw data are affected by Poisson noise resulting from laser sampling and "photon counts" in fluorescence microscopy. In these applications standard reconstruction methods (EM, filtered backprojection) deliver unsatisfactory and noisy results. Starting from a statistical modeling in terms of MAP likelihood estimation, we combine the iterative EM algorithm with TV regularization techniques to make efficient use of a priori information. Typically, TV-based methods deliver reconstructed cartoon images suffering from contrast reduction. We propose an extension of EM-TV, based on Bregman iterations and inverse scale space methods, in order to obtain improved imaging results through simultaneous contrast enhancement. We illustrate our techniques on synthetic and experimental biological data.
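A much-simplified sketch of such an EM-TV iteration for deconvolution under Poisson noise (the paper's TV half-step is a weighted problem; skimage's unweighted TV denoiser stands in here, and all parameters are illustrative):

```python
# EM (Richardson-Lucy) step followed by a TV denoising half-step.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import denoise_tv_chambolle

def em_tv(f, psf, n_iter=30, tv_weight=0.02):
    psf = psf / psf.sum()
    psf_adj = psf[::-1, ::-1]                   # adjoint (flipped) kernel
    norm = fftconvolve(np.ones_like(f), psf_adj, mode='same')   # K* 1
    u = np.full_like(f, f.mean())               # positive initialisation
    for _ in range(n_iter):
        Ku = np.maximum(fftconvolve(u, psf, mode='same'), 1e-12)
        u = u * fftconvolve(f / Ku, psf_adj, mode='same') / norm  # EM step
        u = np.maximum(denoise_tv_chambolle(u, weight=tv_weight), 0.0)
    return u
```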
2008 IEEE Nuclear Science Symposium Conference Record, 2008
PET measurements of tracers with a lower dose rate or a short radioactive half-life suffer from extremely low SNRs. In these cases standard reconstruction methods (OSEM, EM, filtered backprojection) deliver unsatisfactory and noisy results. Here, we propose to introduce nonlinear variational methods into the reconstruction process to make efficient use of a priori information and to attain improved imaging results. We illustrate our technique by evaluating cardiac H₂¹⁵O measurements. The general approach can also be used for other specific goals, allowing the incorporation of a priori information about the solution with Poisson-distributed data.

Lecture Notes in Mathematics, 2013
We address the task of reconstructing images corrupted by Poisson noise, which is important in various applications such as fluorescence microscopy (Dey et al., 3D microscopy deconvolution using Richardson-Lucy algorithm with total variation regularization, 2004), positron emission tomography (PET; Vardi et al., J Am Stat Assoc 80:8–20, 1985), and astronomical imaging (Lantéri and Theys, EURASIP J Appl Signal Processing 15:2500–2513, 2005). Here we focus on reconstruction strategies combining the expectation-maximization (EM) algorithm and total variation (TV) based regularization, and present a detailed analysis as well as numerical results. Recently, extensions of the well-known EM/Richardson-Lucy algorithm have received increasing attention for inverse problems with Poisson data (Dey et al., 2004; Jonsson et al., Total variation regularization in positron emission tomography, 1998; Panin et al., IEEE Trans Nucl Sci 46(6):2202–2210, 1999). However, most of these algorithms for regularizations like TV lead to convergence problems for large regularization parameters, cannot guarantee positivity, and rely on additional approximations (like smoothed TV). The goal of this lecture is to provide accurate, robust and fast EM-TV based methods for computing cartoon reconstructions that facilitate post-segmentation and provide a basis for quantification techniques. We also illustrate the performance of the proposed algorithms and confirm the analytical concepts with 2D and 3D synthetic and real-world results in optical nanoscopy and PET.

Journal of Scientific Computing, 2013
In this work we analyze and compare two recent variational models for image denoising and improve their reconstructions by applying a Bregman iteration strategy. One of the standard techniques in image denoising, the ROF model (cf. Rudin et al. in Physica D 60:259–268, 1992), is well known for recovering sharp edges of a signal or image, but also for producing staircase-like artifacts. In order to overcome these model-dependent deficiencies, total variation modifications that incorporate higher-order derivatives have been proposed (cf. Chambolle and Lions in Numer. Math. 76:167–188, 1997; Bredies et al. in SIAM J. Imaging Sci. 3(3):492–526, 2010). These models reduce staircasing for reasonable parameter choices. However, the combination of derivatives of different orders leads to other undesired side effects, which we also highlight in several examples. The goal of this paper is to analyze the capabilities and limitations of the different models and to improve the quality of their reconstructions by introducing Bregman iterations. Besides general modeling and analysis, we discuss efficient numerical realizations of Bregman iterations and modified versions thereof.
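A minimal sketch of the Bregman iteration strategy for the ROF model ("adding back the residual" in the style of Osher et al.); skimage's TV denoiser stands in for an exact ROF solver and the weight is an assumed tuning parameter:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def bregman_tv(f, weight=0.1, n_iter=5):
    v = np.zeros_like(f)          # accumulated residual (subgradient info)
    for _ in range(n_iter):
        u = denoise_tv_chambolle(f + v, weight=weight)
        v += f - u                # add back lost signal, restoring contrast
    return u
```

Each sweep restores contrast that a single TV denoising step removes, which mirrors the paper's use of Bregman iterations to counter the models' loss of contrast.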