Papers by Philippe Bekaert
We present a new Monte Carlo method for solving the global illumination problem in environments with general geometry descriptions and light emission and scattering properties. Current Monte Carlo global illumination algorithms are based on generic density estimation techniques that do not take into account any knowledge about the nature of the data points (light and potential particle hit points) from which a global illumination solution is to be reconstructed. We propose a novel estimator, especially designed for solving linear integral equations such as the rendering equation. The resulting single-pass global illumination algorithm promises to combine the flexibility and robustness of bi-directional path tracing with the efficiency of algorithms such as photon mapping.
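For reference, the rendering equation mentioned in the abstract is the standard transport integral (textbook form, not specific to this paper's estimator):

```latex
L_o(x, \omega_o) \;=\; L_e(x, \omega_o)
  \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

where L_o is the outgoing radiance, L_e the emitted radiance, f_r the BRDF, and n the surface normal at x.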
Proceedings of the 9th International Conference on Computer Vision Theory and Applications, 2014
In this paper, we present a method to accomplish free viewpoint video for soccer scenes. This allows the rendering of a virtual camera, such as a virtual rail camera, or a camera moving around a frozen scene. We use 7 static cameras in a wide baseline setup (10 meters apart from each other). After debayering and segmentation, a crude depth map is created using a plane sweep approach. Next, this depth map is filtered and used in a second, depth-selective plane sweep by creating validity maps per depth. The complete method employs NVIDIA CUDA and traditional GPU shaders, resulting in a fast and scalable solution. The results, using real images, show the effective removal of artifacts, yielding high-quality images for a virtual camera.
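To make the core idea concrete, here is a minimal, heavily simplified plane-sweep sketch in Python/NumPy. It is not the paper's CUDA pipeline: the `homographies_at_depth` callback, the SAD cost, and nearest-neighbour sampling are assumptions made purely for illustration.

```python
import numpy as np

def plane_sweep(ref_img, other_imgs, homographies_at_depth, depths):
    """Brute-force plane sweep: for each depth hypothesis, warp the other
    cameras onto the reference view and score color consistency.
    `homographies_at_depth(d)` is a hypothetical callback returning one
    3x3 homography per auxiliary camera for the plane at depth d."""
    h, w, _ = ref_img.shape
    best_cost = np.full((h, w), np.inf)
    best_depth = np.zeros((h, w))
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels
    for d in depths:
        cost = np.zeros((h, w))
        for img, H in zip(other_imgs, homographies_at_depth(d)):
            p = H @ pix                              # map reference pixels via plane d
            u = (p[0] / p[2]).reshape(h, w)
            v = (p[1] / p[2]).reshape(h, w)
            ui = np.clip(u.round().astype(int), 0, w - 1)  # nearest-neighbour sample
            vi = np.clip(v.round().astype(int), 0, h - 1)
            warped = img[vi, ui]
            cost += np.abs(warped.astype(float) - ref_img).sum(axis=2)  # SAD cost
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d
    return best_depth
```

The paper's second, depth-selective sweep would restrict the `depths` loop per pixel using the validity maps derived from the first pass.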

Proceedings of the International Conference on Computer Vision Theory and Applications, 2011
Navigation in large virtual reality applications is often done with unnatural input devices such as keyboards, mice, and gamepads. A more natural approach is to let the user walk through the virtual world as if it were a physical place. This involves tracking the position and orientation of the participant over a large area. We propose a purely optical tracking system that only uses off-the-shelf components such as cameras and LED ropes. The construction of the scene requires no off-line calibration or difficult positioning, which makes the system easy to build and indefinitely scalable in both size and number of users. The proposed algorithms have been implemented and tested in a virtual and a room-sized lab set-up. The first results from our tracker are promising and can compete with many (expensive) commercial trackers.
In this course, we describe the fundamentals of light transport and techniques for computing the global distribution of light in a scene. The main focus is on light transport simulation, since the quality and efficiency of a photo-realistic renderer is determined by its global illumination algorithm. We explain the basic principles and fundamentals behind algorithms such as stochastic ray tracing, path tracing, light tracing and stochastic radiosity.
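All of the listed algorithms build on the same primitive, the Monte Carlo estimator of an integral (standard textbook form):

```latex
I = \int f(x)\, \mathrm{d}x \;\approx\; \frac{1}{N} \sum_{k=1}^{N} \frac{f(x_k)}{p(x_k)},
\qquad x_k \sim p,
```

which is unbiased whenever the sampling density p is nonzero wherever f is; the algorithms differ mainly in how paths x_k are sampled and how p is chosen.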
We present a fully functional prototype to convincingly restore eye contact between two video chat participants, with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame, and is used to interpolate an image as if a virtual camera captured it through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework harnesses the computational resources of graphics hardware to achieve real-time performance of up to 30 frames per second for 800×600 images. Furthermore, a set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application while still achieving high subjective visual quality.
De Decker, B., Hasselt Univ, Expertise Ctr Digital Media, Wetenschapspark 2, Diepenbeek, Belgium.
One of the main problems in the radiosity method is how to discretise the surfaces of a scene into mesh elements that allow us to accurately represent illumination. In this paper we present a robust information-theoretic refinement criterion (oracle) based on kernel smoothness for hierarchical radiosity. This oracle improves on previous ones in that, at equal cost, it gives a better discretisation, approaching the optimal one from an information-theory point of view, and it also needs fewer visibility computations for similar image quality.
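As an illustration of where such an oracle plugs in, the following is a generic hierarchical-radiosity refinement loop in Python. The `oracle` argument is only a placeholder for the paper's kernel-smoothness criterion, and the element interface (`.area`, `.can_subdivide()`, `.subdivide()`) is assumed for the sketch.

```python
def refine(p, q, oracle, eps, links):
    """Generic hierarchical-radiosity refinement (in the style of
    Hanrahan et al.). `oracle(p, q)` stands in for an error estimate of
    the light transport between elements p and q; the paper's
    contribution is an information-theoretic, kernel-smoothness-based
    version of it. Accepted interactions are collected in `links`."""
    if oracle(p, q) <= eps or not (p.can_subdivide() or q.can_subdivide()):
        links.append((p, q))            # transport represented accurately enough
    elif q.can_subdivide() and (q.area > p.area or not p.can_subdivide()):
        for child in q.subdivide():     # refine the larger (receiver) element
            refine(p, child, oracle, eps, links)
    else:
        for child in p.subdivide():     # otherwise refine the source element
            refine(child, q, oracle, eps, links)
```

A sharper oracle means fewer spurious subdivisions, which is exactly where the claimed savings in visibility computations come from.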

The objective of this study was to determine the adaptation of some azolla (Azolla anabaena) genotypes to the ecological conditions of the Mediterranean and Aegean coastline. The research was carried out in the research fields of the Faculty of Agriculture, Aegean University, in Menemen-İzmir during 2002-2003 and 2003-2004. In the trial, five genotypes belonging to four Azolla species were used. Fresh azolla plants were inoculated into ponds each month at a rate of 300 g m⁻²; they were harvested 14 days after inoculation and dry matter (g m⁻²) was determined. The results showed that genotypes FI-1040 and MI-4030 could adapt to the ecological conditions of the Mediterranean and Aegean coastline. Their dry weights after fourteen days were 67.8 and 68.2 g m⁻², respectively. Annual dry matter production reached 16 t ha⁻¹.
Accelerating Path Tracing by Re-Using Paths
Philippe Bekaert, Max-Planck-Institut für Informatik, Saarbrücken, Germany; Mateu Sbert, Institut d'Informàtica i Aplicacions, Universitat de Girona, Spain.
In this paper, we provide examples to optimize signal processing and visual computing algorithms written for SIMT-based GPU architectures. The implementations demonstrate optimizations for CUDA and the related frameworks OpenCL and DirectCompute. We discuss the effect and optimization principles of memory coalescing, bandwidth reduction, processor occupancy, bank conflict reduction, local memory elimination and instruction optimization. The effects of the optimization steps are illustrated by state-of-the-art examples, and a comparison between optimized and unoptimized algorithms is provided. A first example discusses the construction of joint histograms using shared memory, where optimizations lead to a significant speedup compared to the original implementation. A second example presents convolution and the acquired results.
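The shared-memory joint-histogram idea can be illustrated in plain Python/NumPy: each "block" builds a private sub-histogram (the analogue of a per-block shared-memory histogram in CUDA) and the sub-histograms are reduced at the end, instead of every thread contending on one global histogram. This is a sketch of the access pattern only, not the paper's CUDA code.

```python
import numpy as np

def joint_histogram(a, b, bins=64, block=4096):
    """Joint histogram of two equal-sized uint8 images. Mimics the CUDA
    shared-memory strategy: per-block private sub-histograms followed by
    a final reduction, rather than atomic updates on a single global
    histogram."""
    a = a.ravel() // (256 // bins)
    b = b.ravel() // (256 // bins)
    hist = np.zeros((bins, bins), dtype=np.int64)
    for start in range(0, a.size, block):             # one iteration ~ one CUDA block
        ia = a[start:start + block]
        ib = b[start:start + block]
        sub = np.zeros((bins, bins), dtype=np.int64)  # the "shared memory" histogram
        np.add.at(sub, (ia, ib), 1)
        hist += sub                                   # reduction into global memory
    return hist
```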

Proceedings of the British Machine Vision Conference 2008, 2008
The shape and reflectance of complex objects, for use in computer graphics applications, cannot always be acquired using specialized equipment, due to cost or practical considerations. We want to provide an easy and cost-effective way to approximately recover both the shape and the spatially-varying reflectance of objects using commodity hardware. In this paper, we present an image-based technique for recovering 3D shape and spatially-varying reflectance properties from a sparse set of photographs taken under varying illumination. Our technique models the reflectance with a set of low-parameter BRDFs, without knowledge of the location of the light sources or camera. This results in a flexible and portable system that can be used in the field. We successfully apply the approach to several objects (synthetic and real), recovering shape and reflectance. The acquired information can then be used to render the object with modifications to geometry and lighting via traditional rendering methods.
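For flavour only, the simplest per-pixel shape-from-varying-illumination fit is classical Lambertian photometric stereo. This is explicitly not the paper's method (the paper assumes unknown light positions and richer BRDFs); the known light directions here are an assumption made so the sketch stays a linear least-squares problem.

```python
import numpy as np

def lambertian_fit(images, light_dirs):
    """Classical photometric-stereo least squares: recover a per-pixel
    vector g = albedo * normal from k images under k KNOWN distant light
    directions. Illustration only; the paper does not assume known lights.
    images: (k, h, w) grayscale intensities; light_dirs: (k, 3) unit vectors."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                               # (k, h*w)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)      # solve L g = I
    g = g.T.reshape(h, w, 3)
    albedo = np.linalg.norm(g, axis=2)
    normals = g / np.maximum(albedo[..., None], 1e-8)
    return albedo, normals
```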

Applied Sciences, 2020
Light field 3D displays require a precise alignment between the display source and the micromirror-array screen for error-free 3D visualization. Hence, calibrating the system using an external camera becomes necessary before displaying any 3D content. The inter-dependency of the intrinsic and extrinsic parameters of the display source, calibration camera, and micromirror-array screen makes the calibration process complex and error-prone. Thus, several assumptions are usually made about the display setup in order to simplify the calibration. A fully automatic calibration method based on several such assumptions was reported by us earlier. Here, we report a method that uses no such assumptions, yet yields a better calibration. The proposed method adopts an optical solution where the micromirror-array screen is fabricated as a computer-generated hologram with a tiny diffuser engraved at one corner of each elemental micromirror in the array. The calibration algorith...

Proceedings of the 10th International Conference on Signal Processing and Multimedia Applications and 10th International Conference on Wireless Information Networks and Systems, 2013
In this paper, we present a system to increase the performance of plane sweeping for free viewpoint interpolation. Typical plane sweeping approaches use a uniform depth plane distribution to investigate different depth hypotheses when generating a depth map for novel camera viewpoint generation. When the scene consists of a sparse number of objects, some depth hypotheses contain no objects and can cause noise and wasted computational power. Therefore, we propose a method that adapts the plane distribution to increase the quality of the depth map around objects and to reduce wasted computation by reducing the number of planes in empty regions of the scene. First, we generate the cumulative histogram of the previous frame in a temporal sequence of images. Next, we determine a new normalized depth for every depth plane by analyzing the cumulative histogram. Steep sections of the cumulative histogram result in a dense local distribution of planes; flat sections result in a sparse distribution. The results, on both controlled and real images, demonstrate the effectiveness of the method over a uniform distribution: it allows a lower number of depth planes, and thus faster processing, for the same quality.
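A minimal sketch of the redistribution step, assuming a depth map from the previous frame is available: inverting the cumulative histogram at uniformly spaced quantiles places planes densely where the CDF is steep (many pixels, i.e. objects) and sparsely where it is flat. Function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def adaptive_depth_planes(prev_depth_map, n_planes, n_bins=256):
    """Redistribute depth planes using the cumulative histogram of the
    previous frame's depth map: steep CDF sections yield a dense local
    plane distribution, flat sections a sparse one."""
    d = prev_depth_map.ravel()
    hist, edges = np.histogram(d, bins=n_bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    quantiles = np.linspace(0.0, 1.0, n_planes)
    # invert the CDF by interpolation to obtain the new plane depths
    return np.interp(quantiles, np.concatenate([[0.0], cdf]), edges)
```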

Proceedings of Identification of dark matter 2008 — PoS(idm2008), 2009
Gravitational lensing provides a direct means of measuring the masses of galaxies and galaxy clusters. Provided that enough constraints are available, one might even hope to obtain a handle on the precise distribution of the mass, which in turn may reveal information about the spatial distribution of the dark matter. We present an approach using genetic algorithms, allowing the user to 'breed' solutions that are compatible with the available strong lensing data. The procedure allows various types of constraints to be used, including positional information, null-space information, and time-delay measurements. The method is non-parametric in the sense that it does not assume a particular shape of the mass distribution. This is accomplished by placing circularly symmetric basis functions (projected Plummer spheres) on a dynamic grid in the lens plane. Using simulations, we show that our procedure is able to construct a mass distribution and source positions that are compatible with a given set of observations. We discuss the degeneracies that are inherent to lens inversion (and hence affect any lens inversion technique) and that limit the potential of strong lensing to yield precise estimates of the dark-matter distribution. We show how these degeneracies cause most of the differences between inversions of the same lensing cluster by different authors, using the famous cluster Cl 0024+1654 as a working example.
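For reference, the projected (surface) mass density of a Plummer sphere, the basis function mentioned above, has the standard closed form (notation assumed):

```latex
\Sigma(R) \;=\; \frac{M a^{2}}{\pi \left(a^{2} + R^{2}\right)^{2}},
```

where M is the total mass, a the Plummer scale radius, and R the projected radius in the lens plane; the genetic algorithm optimizes the weights and placement of many such components.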
Electronic Workshops in Computing, 2008
Multi-touch interaction techniques are becoming more widespread because of new industrial initiatives that make this hardware available and affordable for the consumer market. To cope with the diversity of hardware setups and the lack of knowledge about developing generic multi-touch applications, we created a framework, Eunomia, that abstracts the hardware from the software and enables software developers to easily develop interactive applications taking advantage of multi-touch interaction. We describe our first set of applications created on top of this framework, targeted at public spaces. During the deployment of these applications, we were able to observe users confronted with multi-touch technology in a public space.

Optics Letters, 2018
Concave micro-mirror arrays fabricated as holographic optical elements are used in projector-based light field displays due to their see-through characteristics. The optical axes of the micro-mirrors in the array are usually made parallel to each other, which simplifies the fabrication, integral image rendering, and calibration process. However, this demands that the beam from the projector be collimated and made parallel to the optical axis of each elemental micro-mirror. This requires additional collimation optics, which puts serious limitations on the size of the display. In this Letter, we propose a solution to this issue by introducing a new method to fabricate holographic concave micro-mirror array sheets, and we explain in detail how they work. 3D light field reconstructions of size 20 cm × 10 cm and 6 cm in depth are achieved using a conventional projector without any collimation optics.

2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012
This paper describes a novel strategy to enhance underwater videos and images. Built on fusion principles, our strategy derives the inputs and the weight measures only from the degraded version of the image. In order to overcome the limitations of the underwater medium, we define two inputs that represent color-corrected and contrast-enhanced versions of the original underwater image/frame, as well as four weight maps that aim to increase the visibility of distant objects degraded by the medium's scattering and absorption. Our strategy is a single-image approach that does not require specialized hardware or knowledge about the underwater conditions or scene structure. Our fusion framework also supports temporal coherence between adjacent frames by performing an effective edge-preserving noise reduction strategy. The enhanced images and videos are characterized by a reduced noise level, better exposure of dark regions, and improved global contrast, while the finest details and edges are enhanced significantly. In addition, the utility of our enhancement technique is demonstrated for several challenging applications.
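A single-scale toy version of the two-input fusion, in Python/NumPy, to show the mechanics: the specific inputs (gray-world color correction, global contrast stretch) and the single Laplacian-contrast weight map are simplifications; the paper uses four weight maps and multi-scale blending.

```python
import numpy as np

def fuse_underwater(img):
    """Single-scale sketch of fusion-based underwater enhancement.
    img: float RGB array in [0, 1]. Returns the weighted blend of a
    color-corrected and a contrast-enhanced input."""
    # Input 1: gray-world white balance
    i1 = np.clip(img * (img.mean() / img.mean(axis=(0, 1))), 0, 1)
    # Input 2: global contrast stretch (percentile-based)
    lo, hi = np.percentile(img, (1, 99))
    i2 = np.clip((img - lo) / max(hi - lo, 1e-8), 0, 1)

    def local_contrast(x):
        # Laplacian magnitude of the luminance as a simple weight map
        g = x.mean(axis=2)
        lap = np.abs(4 * g
                     - np.roll(g, 1, 0) - np.roll(g, -1, 0)
                     - np.roll(g, 1, 1) - np.roll(g, -1, 1))
        return lap + 1e-6              # avoid all-zero weights

    w1, w2 = local_contrast(i1), local_contrast(i2)
    wsum = w1 + w2                     # normalize so weights sum to one per pixel
    return i1 * (w1 / wsum)[..., None] + i2 * (w2 / wsum)[..., None]
```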

Nonuniform depth distribution selection with discrete Fourier transform
ACM SIGGRAPH 2016 Posters on - SIGGRAPH '16, 2016
In recent years there has been growing interest in the generation of virtual views from a limited set of input cameras. This is especially useful for applications such as Free Viewpoint Navigation and light field displays [Tanimoto 2015]. The latter often require tens to hundreds of input views, while it is often not feasible to record with that many cameras. View interpolation algorithms often traverse a set of depths to find correspondences between the input images [Stankiewicz et al. 2013; Goorts et al. 2013]. Most algorithms choose a uniform set of depths to traverse (as shown in Figure 2(a)), but this often leads to a large number of unnecessary calculations in regions where no objects are located. It also results in an increased number of mismatches, and thus inaccuracies in the generated views. These problems also occur when too large a depth range is selected. Hence, a depth range that tightly encloses the scene is typically selected manually to mitigate these errors. A depth distribution that organizes the depth layers around the objects in the scene, as shown in Figure 2(b), would reduce these errors and decrease the number of computations by reducing the number of depths to search through. [Goorts et al. 2013] determine a nonuniform global depth distribution by reusing the generated depth information from the previous time stamp, which makes the algorithm dependent on previous results.
Extensible scene graph manager

The inversion of a gravitational lens system is, as is well known, plagued by the so-called mass-sheet degeneracy: one can always rescale the density distribution of the lens and add a constant-density mass sheet such that the (also properly rescaled) source plane is projected onto the same observed images. For strong lensing systems, it is often claimed that this degeneracy is broken as soon as two or more sources at different redshifts are available. This is definitely true in the strict sense that it is then impossible to add a constant-density mass sheet to the rescaled density of the lens without affecting the resulting images. However, one can often easily construct a more general mass distribution, instead of a constant-density sheet of mass, which gives rise to the same effect: a uniform scaling of the sources involved without affecting the observed images. We show that this can be achieved by adding one or more circularly symmetric mass distributions, each with its own center of symmetry, to the rescaled mass distribution of the original lens. As it uses circularly symmetric distributions, this procedure can lead to the introduction of ring-shaped features in the mass distribution of the lens. In this paper, we show explicitly how degenerate inversions for a given strong lensing system can be constructed. It then becomes clear that many constraints are needed to effectively break this degeneracy.
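In the usual notation (convergence κ in units of the critical surface density), the classical mass-sheet transformation referred to above reads:

```latex
\kappa(\boldsymbol{\theta}) \;\rightarrow\; \lambda\,\kappa(\boldsymbol{\theta}) + (1 - \lambda),
\qquad
\boldsymbol{\beta} \;\rightarrow\; \lambda\,\boldsymbol{\beta},
```

which rescales the source plane by λ while leaving all observed image positions unchanged; the paper's point is that distributions more general than the constant sheet (1 − λ) can play the same role for multiple source redshifts.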