A Sliced Wasserstein Loss for Neural Texture Synthesis
Eric Heitz, Kenneth Vanhoey, Thomas Chambon, Laurent Belcour - To appear in CVPR 2021
We address the problem of computing a textural loss based on the statistics extracted from the feature activations of a convolutional neural network optimized for object recognition (e.g. VGG-19). The underlying mathematical problem is measuring the distance between two distributions in feature space. The Gram-matrix loss is the ubiquitous approximation for this problem, but it suffers from several shortcomings. Our goal is to promote the Sliced Wasserstein Distance as a replacement for it. It is theoretically proven, practical, simple to implement, and achieves results that are visually superior for texture synthesis by optimization or for training generative neural networks.
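
For illustration, a minimal numpy sketch (ours, not the authors' code) of a sliced Wasserstein loss between two equally sized sets of feature vectors, e.g. flattened activations of a VGG-19 layer:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=64, rng=None):
    """Sliced Wasserstein distance between two point clouds of shape (N, D).

    Project both sets onto random directions, sort the 1D projections, and
    average the squared differences; in 1D, optimal transport reduces to
    sorting, which is the key practical property of the sliced distance.
    """
    rng = np.random.default_rng(rng)
    N, D = X.shape
    assert Y.shape == (N, D), "this sketch assumes equal sample counts"
    # Random unit directions on the D-dimensional sphere.
    V = rng.normal(size=(D, n_projections))
    V /= np.linalg.norm(V, axis=0, keepdims=True)
    # Sorted 1D projections; matching sorted lists is 1D optimal transport.
    pX = np.sort(X @ V, axis=0)
    pY = np.sort(Y @ V, axis=0)
    return np.mean((pX - pY) ** 2)
```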

Improved Shader and Texture Level of Detail Using Ray Cones
Tomas Akenine-Möller, Cyril Crassin, Jakub Boksansky, Laurent Belcour, Alexey Panteleev, Oli Wright - Published in Journal of Computer Graphics Techniques (JCGT)
In real-time ray tracing, texture filtering is an important technique to increase image quality. Current games, such as Minecraft with RTX on Windows 10, use ray cones to determine texture-filtering footprints. In this paper, we present several improvements to the ray-cones algorithm that increase image quality and performance and make it easier to adopt in game engines. We show that the total time per frame can decrease by around 10% in a GPU-based path tracer, and we provide a public-domain implementation.

Bringing an Accurate Fresnel to Real-Time Rendering: a Preintegrable Decomposition
Laurent Belcour, Megane Bati, Pascal Barla - Published in ACM SIGGRAPH 2020 Talks and Courses
We introduce a new approximate Fresnel reflectance model that enables the accurate reproduction of ground-truth reflectance in real-time rendering engines. Our method is based on an empirical decomposition of the space of possible Fresnel curves. It is compatible with the preintegration of image-based lighting and area lights used in real-time engines. Our work permits the use of a reflectance parametrization [Gulbrandsen 2014] that was previously restricted to offline rendering.

Concurrent Binary Trees
Jonathan Dupuy - HPG 2020
We introduce the concurrent binary tree (CBT), a novel concurrent representation to build and update arbitrary binary trees in parallel. Fundamentally, our representation consists of a binary heap, i.e., a 1D array, that explicitly stores the sum-reduction tree of a bitfield. In this bitfield, each one-valued bit represents a leaf node of the binary tree encoded by the CBT, which we locate algorithmically using a binary search over the sum-reduction. We show that this construction allows dispatching down to one thread per leaf node and that, in turn, these threads can safely split and/or remove nodes concurrently via simple bitwise operations over the bitfield. The practical benefit of CBTs lies in their ability to accelerate binary-tree-based algorithms with parallel processors. To support this claim, we leverage our representation to accelerate a longest-edge-bisection-based algorithm that computes and renders adaptive geometry for large-scale terrains entirely on the GPU. For this specific algorithm, the CBT accelerates processing speed linearly with the number of processors.
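
A minimal CPU sketch of the core lookup, assuming a power-of-two bitfield and omitting the concurrent update machinery (names and layout are ours):

```python
import numpy as np

def build_sum_tree(bitfield):
    """Binary heap of partial sums over a power-of-two bitfield.

    tree[1] holds the total number of one-valued bits (the leaf nodes of the
    encoded binary tree); tree[2k] and tree[2k+1] are the children of tree[k],
    and tree[n:] stores the bitfield itself.
    """
    n = len(bitfield)
    tree = np.zeros(2 * n, dtype=np.int64)
    tree[n:] = bitfield
    for k in range(n - 1, 0, -1):
        tree[k] = tree[2 * k] + tree[2 * k + 1]
    return tree

def locate_leaf(tree, i):
    """Return the bitfield index of the i-th one-bit (0-based) by a binary
    search down the sum-reduction -- one independent query per thread in the
    GPU setting."""
    n = len(tree) // 2
    k = 1
    while k < n:              # descend until we reach the bitfield level
        k *= 2                # go to the left child
        if i >= tree[k]:      # not enough ones on the left: go right
            i -= tree[k]
            k += 1
    return k - n

# Example: "dispatch one thread per leaf node".
bits = np.array([0, 1, 1, 0, 1, 0, 0, 1])
tree = build_sum_tree(bits)
leaves = [locate_leaf(tree, i) for i in range(tree[1])]  # [1, 2, 4, 7]
```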

Can’t Invert the CDF? The Triangle-Cut Parameterization of the Region under the Curve
Eric Heitz - EGSR 2020
We present an exact, analytic and deterministic method for sampling densities whose Cumulative Distribution Functions (CDFs) cannot be inverted analytically. The inverse-CDF method is often considered the way to go for sampling non-uniform densities, but if the CDF is not analytically invertible, the typical fallback solutions are either approximate, numerical, or non-deterministic, such as acceptance-rejection. To overcome this problem, we show how to compute an analytic area-preserving parameterization of the region under the curve of the target density. We use it to generate random points uniformly distributed under the curve of the target density; their abscissae are thus distributed according to the target density. Technically, our idea is to use an approximate analytic parameterization whose error can be represented geometrically as a triangle that is simple to cut out. This triangle-cut parameterization yields exact and analytic solutions to sampling problems that were presumably not analytically solvable.

Rendering Layered Materials with Anisotropic Interfaces
Philippe Weier, Laurent Belcour - Published in Journal of Computer Graphics Techniques (JCGT)
We present a lightweight and efficient method to render layered materials with anisotropic interfaces. Our work extends our previously published statistical framework to handle anisotropic microfacet models. A key insight of our work is that, when projected onto the tangent plane, BRDF lobes from an anisotropic GGX distribution are well approximated by ellipsoidal distributions aligned with the tangent frame: their covariance matrix is diagonal in this space. We leverage this property and perform the isotropic layered algorithm on each anisotropy axis independently. We further update the mapping of roughness to directional variance and the evaluation of the average reflectance to account for anisotropy.

Integration and Simulation of Bivariate Projective-Cauchy Distributions within Arbitrary Polygonal Domains
Jonathan Dupuy, Laurent Belcour & Eric Heitz - Technical Report 2019
Consider a uniform variate on the unit upper-half sphere of dimension d. It is known that the straight-line projection through the center of the unit sphere onto the plane above it distributes this variate according to a d-dimensional projective-Cauchy distribution. In this work, we leverage the geometry of this construction in dimension d=2 to derive new properties for the bivariate projective-Cauchy distribution. Specifically, we reveal via geometric intuitions that integrating and simulating a bivariate projective-Cauchy distribution within an arbitrary domain translates into respectively measuring and sampling the solid angle subtended by the geometry of this domain as seen from the origin of the unit sphere. To make this result practical for, e.g., generating truncated variants of the bivariate projective-Cauchy distribution, we extend it in two respects. First, we provide a generalization to Cauchy distributions parameterized by location-scale-correlation coefficients. Second, we provide a specialization to polygonal domains, which leads to closed-form expressions. We provide a complete MATLAB implementation for the case of triangular domains, and briefly discuss the case of elliptical domains and how to further extend our results to bivariate Student distributions.
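
A short sketch of the construction in the standard (untruncated) case; this is our code, not the report's MATLAB:

```python
import numpy as np

def sample_projective_cauchy(n, rng=None):
    """Sample the standard bivariate projective-Cauchy distribution.

    Draw uniform directions on the upper unit hemisphere and project them
    through the sphere's center onto the plane z = 1: (x, y, z) -> (x/z, y/z).
    Each 1D marginal of the result is a standard Cauchy distribution.
    """
    rng = np.random.default_rng(rng)
    # Uniform directions on the upper hemisphere (normalized Gaussians).
    w = rng.normal(size=(n, 3))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    w[:, 2] = np.abs(w[:, 2])
    # Central projection onto the plane z = 1.
    return w[:, :2] / w[:, 2:3]
```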

Surface Gradient Based Bump Mapping Framework
Morten Mikkelsen - 2020

Multi-Stylization of Video Games in Real-Time guided by G-buffer Information
Adèle Saint-Denis, Kenneth Vanhoey, Thomas Deliot - HPG 2019
We investigate how to take advantage of modern neural style transfer techniques to modify the style of video games at runtime. Recent style transfer neural networks are pre-trained, and allow for fast style transfer of any style at runtime. However, a single style applies globally, over the full image, whereas we would like to provide finer authoring tools to the user. In this work, we allow the user to assign styles (by means of a style image) to various physical quantities found in the G-buffer of a deferred rendering pipeline, like depth, normals, or object ID. Our algorithm then interpolates those styles smoothly according to the scene to be rendered: e.g., a different style arises for different objects, depths, or orientations.

Distributing Monte Carlo Errors as a Blue Noise in Screen Space by Permuting Pixel Seeds Between Frames
Eric Heitz, Laurent Belcour - EGSR 2019
We introduce a sampler that generates per-pixel samples achieving high visual quality thanks to two key properties related to the Monte Carlo errors that it produces. First, the sequence of each pixel is an Owen-scrambled Sobol sequence that has state-of-the-art convergence properties. The Monte Carlo errors thus have low magnitudes. Second, these errors are distributed as a blue noise in screen space. This makes them visually even more acceptable. Our sampler is lightweight and fast. We implement it with a small texture and two xor operations. Our supplemental material provides comparisons against previous work for different scenes and sample counts.
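
The following toy sketch (ours) illustrates the texture-plus-two-xors structure. It substitutes white noise for the optimized blue-noise scramble texture, xor digit scrambling for full Owen scrambling, and a frame hash for the paper's seed permutation, so it reproduces the mechanism but not the blue-noise error distribution:

```python
import numpy as np

TILE = 64
rng = np.random.default_rng(0)
# Placeholder: the paper stores an optimized blue-noise texture of per-pixel
# seeds; white noise is used here only to keep the sketch self-contained.
scramble_tex = rng.integers(0, 2**32, size=(TILE, TILE))

def reverse_bits32(x):
    """Bit reversal = first dimension of the Sobol sequence (van der Corput)."""
    return int(bin(x & 0xFFFFFFFF)[2:].zfill(32)[::-1], 2)

def pixel_sample(px, py, frame, sample_index):
    """One scrambled sample in [0, 1) for a given pixel, frame and index."""
    seed = int(scramble_tex[py % TILE, px % TILE])
    seed ^= (frame * 0x9E3779B9) & 0xFFFFFFFF   # xor 1: permute seed per frame
    bits = reverse_bits32(sample_index) ^ seed  # xor 2: scramble the sample
    return bits / 2**32
```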

A Low-Discrepancy Sampler that Distributes Monte Carlo Errors as a Blue Noise in Screen Space
Eric Heitz, Laurent Belcour - ACM SIGGRAPH Talk 2019
We introduce a sampler that generates per-pixel samples achieving high visual quality thanks to two key properties related to the Monte Carlo errors that it produces. First, the sequence of each pixel is an Owen-scrambled Sobol sequence that has state-of-the-art convergence properties. The Monte Carlo errors thus have low magnitudes. Second, these errors are distributed as a blue noise in screen space. This makes them visually even more acceptable. Our sampler is lightweight and fast. We implement it with a small texture and two xor operations. Our supplemental material provides comparisons against previous work for different scenes and sample counts.

A Low-Distortion Map Between Triangle and Square
Eric Heitz - Tech Report 2019
We introduce a low-distortion map between triangle and square. This mapping yields an area-preserving parameterization that can be used for sampling random points with a uniform density in arbitrary triangles. This parameterization presents two advantages compared to the square-root parameterization typically used for triangle sampling. First, it has lower distortions and better preserves the blue-noise properties of the input samples. Second, its computation relies only on arithmetic operations (+, *), which makes it faster to evaluate.
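
A sketch of the mapping as we read it from the report (barycentric output; only comparisons, additions and multiplications):

```python
import numpy as np

def square_to_triangle(u, v):
    """Low-distortion, area-preserving map from the unit square to barycentric
    coordinates: the square is cut along its diagonal and each half is mapped
    onto one half of the triangle (Jacobian determinant 1/2 everywhere)."""
    if u > v:
        b1 = 0.5 * v
        b0 = u - b1
    else:
        b0 = 0.5 * u
        b1 = v - b0
    return b0, b1, 1.0 - b0 - b1

# Usage: map a square sample into an arbitrary triangle (A, B, C).
A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
b0, b1, b2 = square_to_triangle(0.3, 0.7)
p = b0 * A + b1 * B + b2 * C
```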

Sampling the GGX Distribution of Visible Normals
Eric Heitz - JCGT 2018
Importance sampling microfacet BSDFs using their Distribution of Visible Normals (VNDF) yields significant variance reduction in Monte Carlo rendering. In this article, we describe an efficient and exact sampling routine for the VNDF of the GGX microfacet distribution. This routine leverages the property that GGX is the distribution of normals of a truncated ellipsoid and sampling the GGX VNDF is equivalent to sampling the 2D projection of this truncated ellipsoid. To do that, we simplify the problem by using the linear transformation that maps the truncated ellipsoid to a hemisphere. Since linear transformations preserve the uniformity of projected areas, sampling in the hemisphere configuration and transforming the samples back to the ellipsoid configuration yields valid samples from the GGX VNDF.
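
A numpy transcription of the sampling routine (check against the published listing before relying on it):

```python
import numpy as np

def sample_ggx_vndf(Ve, alpha_x, alpha_y, U1, U2):
    """Sample the GGX distribution of visible normals.

    Ve is the view direction in tangent space (Ve[2] > 0); U1, U2 are uniform
    random numbers in [0, 1). Returns a sampled normal in tangent space.
    """
    # Transform the view direction to the hemisphere configuration.
    Vh = np.array([alpha_x * Ve[0], alpha_y * Ve[1], Ve[2]])
    Vh /= np.linalg.norm(Vh)
    # Orthonormal basis around Vh.
    lensq = Vh[0] ** 2 + Vh[1] ** 2
    if lensq > 0.0:
        T1 = np.array([-Vh[1], Vh[0], 0.0]) / np.sqrt(lensq)
    else:
        T1 = np.array([1.0, 0.0, 0.0])
    T2 = np.cross(Vh, T1)
    # Sample a disk, warped to the projected area of the visible hemisphere.
    r = np.sqrt(U1)
    phi = 2.0 * np.pi * U2
    t1 = r * np.cos(phi)
    t2 = r * np.sin(phi)
    s = 0.5 * (1.0 + Vh[2])
    t2 = (1.0 - s) * np.sqrt(1.0 - t1 * t1) + s * t2
    # Project onto the hemisphere, then back to the ellipsoid configuration.
    Nh = t1 * T1 + t2 * T2 + np.sqrt(max(0.0, 1.0 - t1 * t1 - t2 * t2)) * Vh
    Ne = np.array([alpha_x * Nh[0], alpha_y * Nh[1], max(0.0, Nh[2])])
    return Ne / np.linalg.norm(Ne)
```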

Analytical Calculation of the Solid Angle Subtended by an Arbitrarily Positioned Ellipsoid to a Point Source
Eric Heitz - Nuclear Instruments and Methods in Physics Research 2018
We present a geometric method for computing an ellipse that subtends the same solid-angle domain as an arbitrarily positioned ellipsoid. With this method we can extend existing analytical solid-angle calculations of ellipses to ellipsoids. Our idea consists of applying a linear transformation on the ellipsoid such that it is transformed into a sphere from which a disk that covers the same solid-angle domain can be computed. We demonstrate that by applying the inverse linear transformation on this disk we obtain an ellipse that subtends the same solid-angle domain as the ellipsoid. We provide a MATLAB implementation of our algorithm and we validate it numerically.
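
A numpy sketch of the construction, assuming the ellipsoid is represented as { M u + c : |u| = 1 } with the observer at the origin (representation and names are ours; the paper provides a validated MATLAB implementation):

```python
import numpy as np

def subtending_ellipse(M, c):
    """Ellipse covering the same solid-angle domain as the ellipsoid
    { M u + c : |u| = 1 } seen from the origin (which must lie outside it).

    Steps, following the geometric construction described above:
      1. map the ellipsoid to a unit sphere with the linear transform M^-1;
      2. build the disk through the sphere's circle of tangency, which covers
         the same solid-angle domain as the sphere;
      3. map the disk back with M, which yields an ellipse.
    Returns (center, e1, e2): ellipse points are center + cos(t)*e1 + sin(t)*e2.
    """
    cp = np.linalg.solve(M, c)        # sphere center in the canonical space
    L = np.linalg.norm(cp)
    assert L > 1.0, "the viewpoint must be outside the ellipsoid"
    d = cp / L
    # Disk through the tangency circle, in the plane orthogonal to d.
    o = d * (L * L - 1.0) / L         # disk center
    r = np.sqrt(L * L - 1.0) / L      # disk radius
    # Orthonormal frame of the disk plane.
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(d, a)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(d, t1)
    # Map the disk back through M: an ellipse with the same solid angle.
    return M @ o, r * (M @ t1), r * (M @ t2)
```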

A note on track-length sampling with non-exponential distributions
Eric Heitz, Laurent Belcour - Tech Report 2018
Track-length sampling is the process of sampling random intervals according to a distance distribution. It means that, instead of sampling a punctual distance from the distance distribution, track-length sampling generates an interval of possible distances. The track-length sampling process is correct if the expectation of the intervals is the target distance distribution. In other words, averaging all the sampled intervals should converge towards the distance distribution as their number increases. In this note, we emphasize that the distance distribution that is used for sampling punctual distances and the track-length distribution that is used for sampling intervals are not the same in general. This difference can be surprising because, to our knowledge, track-length sampling has been mostly studied in the context of transport theory, where the distance distribution is typically exponential: in this special case, the distance distribution and the track-length distribution happen to be the same exponential distribution. However, they are not the same in general when the distance distribution is non-exponential.
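
A toy experiment reflecting our reading of the note (interpretation and code are ours): if each interval [0, T] is averaged as an indicator function, the average converges to the survival function of the track-length distribution, which must therefore be proportional to the distance density; only in the exponential case does that make the two distributions coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
x = np.linspace(0.0, 2.0, 100)

# Exponential case: sampling T from the distance distribution itself works,
# because P(T > x) = exp(-sigma x) is proportional to the density p(x).
T = rng.exponential(1.0 / sigma, size=100_000)
coverage = (x[None, :] < T[:, None]).mean(axis=0)   # averaged intervals
p = sigma * np.exp(-sigma * x)                      # distance density
assert np.allclose(coverage, p / p[0], atol=2e-2)

# For a non-exponential, decreasing density p, one would instead have to
# sample T with survival function p(x)/p(0), i.e. pdf -p'(x)/p(0), which
# differs from p itself (our derivation of the note's statement).
```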

Combining Analytic Direct Illumination and Stochastic Shadows
Eric Heitz, Stephen Hill (Lucasfilm), Morgan McGuire (NVIDIA) - I3D 2018 (short paper) (Best Paper Presentation Award)
In this paper, we propose a ratio estimator of the direct-illumination equation that allows us to combine analytic illumination techniques with stochastic raytraced shadows while maintaining correctness. Our main contribution is to show that the shadowed illumination can be split into the product of the unshadowed illumination and the illumination-weighted shadow. These terms can be computed separately — possibly using different techniques — without affecting the exactness of the final result given by their product. This formulation broadens the utility of analytic illumination techniques to raytracing applications, where they were hitherto avoided because they did not incorporate shadows. We use such methods to obtain sharp and noise-free shading in the unshadowed-illumination image and we compute the weighted-shadow image with stochastic raytracing. The advantage of restricting stochastic evaluation to the weighted-shadow image is that the final result exhibits noise only in the shadows. Furthermore, we denoise shadows separately from illumination so that even aggressive denoising only overblurs shadows, while high-frequency shading details (textures, normal maps, etc.) are preserved.
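
The structure of the estimator, as a hypothetical per-pixel sketch (names ours):

```python
import numpy as np

def ratio_estimator(L_analytic, f_samples, visibility, eps=1e-8):
    """Combine analytic unshadowed illumination with raytraced shadows.

    L_analytic: unshadowed direct illumination, evaluated analytically
                (e.g. with an area-light shading technique).
    f_samples:  hypothetical Monte Carlo samples of the unshadowed integrand
                (per-sample radiance x BRDF x cosine).
    visibility: 0/1 shadow-ray results for the same samples.
    The ratio estimates the illumination-weighted shadow; multiplying it with
    the sharp analytic term confines the noise to shadowed regions.
    """
    shadowed = np.mean(f_samples * visibility)
    unshadowed = np.mean(f_samples)
    return L_analytic * shadowed / max(unshadowed, eps)
```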

Non-Periodic Tiling of Procedural Noise Functions
Aleksandr Kirillov - HPG 2018
Procedural noise functions have many applications in computer graphics, ranging from texture synthesis to atmospheric effect simulation and landscape geometry specification. Noise can either be precomputed and stored in a texture, or evaluated directly at application runtime. This choice offers a trade-off between image variance, memory consumption, and performance.
Advanced tiling algorithms can be used to decrease visual repetition. Wang tiles allow a plane to be tiled in a non-periodic way using a relatively small set of textures. Tiles can be arranged in a single texture map to enable the GPU to use hardware filtering.
In this paper, we present modifications to several popular procedural noise functions that directly produce texture maps containing the smallest complete Wang tile set. These findings enable non-periodic tiling of these noise functions, and of textures based on them, both at runtime and as a preprocessing step. They also decrease the repetition of noise-based effects in computer-generated images at a small performance cost, while maintaining or even reducing memory consumption.

High-Performance By-Example Noise using a Histogram-Preserving Blending Operator
Eric Heitz, Fabrice Neyret (Inria) - HPG 2018 (Best Paper Award)
We propose a new by-example noise algorithm that takes as input a small example of a stochastic texture and synthesizes an infinite output with the same appearance. It works on any kind of random-phase input as well as on many non-random-phase inputs that are stochastic and non-periodic, typically natural textures such as moss, granite, sand, bark, etc. Our algorithm achieves high-quality results comparable to state-of-the-art procedural-noise techniques but is more than 20 times faster.
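
The variance-preserving blending operator at the heart of the method can be sketched as follows (our code; the Gaussianization of the input and the inverse histogram transformation are omitted):

```python
import numpy as np

def histogram_preserving_blend(x1, x2, x3, w, mean):
    """Variance-preserving blend of three (Gaussianized) texture fetches.

    A plain linear blend w1*x1 + w2*x2 + w3*x3 lowers contrast where tiles
    overlap; renormalizing the deviation from the mean by sqrt(w1^2 + w2^2 +
    w3^2) preserves mean and variance, hence the histogram of a Gaussian
    input. In the full method, the input texture is first transformed to a
    Gaussian distribution and the blended result is mapped back through the
    inverse histogram transformation.
    """
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                       # barycentric blending weights
    g = w[0] * x1 + w[1] * x2 + w[2] * x3
    return (g - mean) / np.sqrt(np.sum(w ** 2)) + mean
```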

Unsupervised Deep Single-Image Intrinsic Decomposition using Illumination-Varying Image Sequences
Louis Lettry (ETH Zürich), Kenneth Vanhoey, Luc Van Gool (ETH Zürich) - Pacific Graphics 2018 / Computer Graphics Forum
Intrinsic decomposition separates a photographed scene into albedo and shading. Removing shading allows one to "de-light" images, which can then be reused in virtually relit scenes. We propose an unsupervised learning method to solve this problem.
Recent techniques use supervised learning, which requires a large set of known decompositions that are hard to obtain. Instead, we train on unannotated images, using time-lapse imagery acquired from static webcams. We exploit the assumption that albedo is static by definition while shading varies with lighting, and transcribe this into a Siamese training scheme for deep learning.

Efficient Rendering of Layered Materials using an Atomic Decomposition with Statistical Operators
Laurent Belcour - ACM SIGGRAPH 2018
We derive a novel framework for the efficient analysis and computation of light transport within layered materials. Our derivation consists of two steps. First, we decompose light transport into a set of atomic operators that act on its directional statistics. Specifically, our operators consist of reflection, refraction, scattering, and absorption, whose combinations are sufficient to describe the statistics of light scattering multiple times within layered structures. We show that the first three directional moments (energy, mean, and variance) already provide an accurate summary. Second, we extend the adding-doubling method to support arbitrary combinations of such operators efficiently. During shading, we map the directional moments to BSDF lobes. We validate that the resulting BSDF closely matches the ground truth in a lightweight and efficient form. Unlike previous methods, we support an arbitrary number of textured layers, and demonstrate a practical and accurate rendering of layered materials with both offline and real-time implementations that are free from per-material precomputation.

An Adaptive Parameterization for Material Acquisition and Rendering
Jonathan Dupuy and Wenzel Jakob (EPFL) - ACM SIGGRAPH Asia 2018
One of the key ingredients of any physically based rendering system is a detailed specification characterizing the interaction of light and matter for all materials present in a scene, typically via the Bidirectional Reflectance Distribution Function (BRDF). Despite their utility, access to real-world BRDF datasets remains limited: this is because measurements involve scanning a four-dimensional domain at sufficient resolution, a tedious and time-consuming process that is often infeasible. We propose a new parameterization that automatically adapts to the behavior of a material, warping the underlying 4D domain so that most of the volume maps to regions where the BRDF takes on non-negligible values, while irrelevant regions are strongly compressed. This adaptation only requires a brief 1D or 2D measurement of the material's retro-reflective properties. Our parameterization is unified in the sense that it combines several steps that previously required intermediate data conversions: the same mapping can simultaneously be used for BRDF acquisition and storage, and it supports efficient Monte Carlo sample generation.

Adaptive GPU Tessellation with Compute Shaders
Jad Khoury, Jonathan Dupuy, Christophe Riccio - GPU Zen 2
GPU rasterizers are most efficient when primitives project onto more than a few pixels. Below this limit, the Z-buffer starts aliasing and the shading rate decreases dramatically [Riccio 12]. This makes the rendering of geometrically complex scenes challenging, as any moderately distant polygon projects to sub-pixel size. A simple solution to minimize such sub-pixel projections is to procedurally refine coarse meshes as they get closer to the camera. In this chapter, we focus on deriving such a procedural refinement technique for arbitrary polygon meshes.

Real-Time Line- and Disk-Light Shading with Linearly Transformed Cosines
Eric Heitz (Unity Technologies) and Stephen Hill (Lucasfilm) - ACM SIGGRAPH Courses 2017
We recently introduced a new real-time area-light shading technique dedicated to lights with polygonal shapes. In this talk, we extend this area-lighting framework to support lights shaped as lines, spheres and disks in addition to polygons.

Microfacet-based Normal Mapping for Robust Monte Carlo Path Tracing
Vincent Schüssler (KIT), Eric Heitz (Unity Technologies), Johannes Hanika (KIT) and Carsten Dachsbacher (KIT) - ACM SIGGRAPH ASIA 2017
Normal mapping imitates visual details on surfaces by using fake shading normals. However, the resulting surface model is geometrically impossible and normal mapping is thus often considered a fundamentally flawed approach with unavoidable problems for Monte Carlo path tracing: it breaks either the appearance (black fringes, energy loss) or the integrator (different forward and backward light transport). In this paper, we present microfacet-based normal mapping, an alternative way of faking geometric details without corrupting the robustness of Monte Carlo path tracing such that these problems do not arise.

A Spherical Cap Preserving Parameterization for Spherical Distributions
Jonathan Dupuy, Eric Heitz and Laurent Belcour - ACM SIGGRAPH 2017
We introduce a novel parameterization for spherical distributions that is based on a point located inside the sphere, which we call a pivot. The pivot serves as the center of a straight-line projection that maps solid angles onto the opposite side of the sphere. By transforming spherical distributions in this way, we derive novel parametric spherical distributions that can be evaluated and importance-sampled from the original distributions using simple, closed-form expressions. Moreover, we prove that if the original distribution can be sampled and/or integrated over a spherical cap, then so can the transformed distribution. We exploit the properties of our parameterization to derive efficient spherical lighting techniques for both real-time and offline rendering. Our techniques are robust, fast, easy to implement, and achieve quality that is superior to previous work.
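
A small sketch of the pivot projection itself (our code, not the paper's derivations):

```python
import numpy as np

def pivot_transform(d, p):
    """Map a unit direction d through the pivot p (|p| < 1): the straight line
    through p and d re-intersects the unit sphere at the returned direction.
    The transform is an involution: applying it twice returns d."""
    dp = d - p
    # Second root of |p + t*dp| = 1 (the first root, t = 1, is d itself).
    t = (np.dot(p, p) - 1.0) / np.dot(dp, dp)
    q = p + t * dp
    return q / np.linalg.norm(q)   # renormalize against round-off

# Involution check:
p = np.array([0.3, 0.0, 0.2])
d = np.array([0.0, 0.0, 1.0])
assert np.allclose(pivot_transform(pivot_transform(d, p), p), d)
```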

A Practical Extension to Microfacet Theory for the Modeling of Varying Iridescence
Laurent Belcour (Unity), Pascal Barla (Inria) - ACM SIGGRAPH 2017
Thin-film iridescence makes it possible to reproduce the appearance of leather. However, this theory requires spectral rendering engines (such as Maxwell Render) to correctly integrate the change of appearance with respect to viewpoint (known as goniochromatism). This is due to aliasing in the spectral domain, as real-time renderers only work with three components (RGB) for the entire range of visible light. In this work, we show how to anti-alias a thin-film model, how to incorporate it into microfacet theory, and how to integrate it in a real-time rendering engine. This widens the range of reproducible appearances with microfacet models.

Linear-Light Shading with Linearly Transformed Cosines
Eric Heitz, Stephen Hill (Lucasfilm) - GPU Zen (book)
In this book chapter, we extend our area-light framework based on Linearly Transformed Cosines to support linear (or line) lights. Linear lights are a good approximation for cylindrical lights with a small but non-zero radius. We describe how to approximate these lights with linear lights that have similar power and shading, and discuss the validity of this approximation.

A Practical Introduction to Frequency Analysis of Light Transport
Laurent Belcour - ACM SIGGRAPH Courses 2016
Frequency analysis of light transport expresses Physically Based Rendering (PBR) using signal-processing tools. It is thus tailored to predict sampling rates and to perform denoising, anti-aliasing, etc. Many methods have been proposed to deal with specific cases of light transport (motion, lenses, etc.). This course aims to introduce the concepts and present practical application scenarios of frequency analysis of light transport in a unified context. To ease the understanding of the theoretical elements, frequency analysis is introduced in tandem with an implementation.

Real-Time Polygonal-Light Shading with Linearly Transformed Cosines
Eric Heitz, Jonathan Dupuy, Stephen Hill (Ubisoft), David Neubelt (Ready at Dawn Studios) - ACM SIGGRAPH 2016
Shading with area lights adds a great deal of realism to CG renders. However, it requires solving spherical equations that make it challenging for real-time rendering. In this project, we develop a new spherical distribution that allows us to shade physically based materials with polygonal lights in real-time.
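
A sketch of the canonical polygon integral in the cosine space, as we recall it from the released demo code (horizon clipping omitted; names are ours):

```python
import numpy as np

def ltc_evaluate(M_inv, verts):
    """Integral of a Linearly Transformed Cosine over a polygonal light.

    M_inv maps the shading configuration back to the canonical clamped-cosine
    space; there, the integral over the spherical polygon has a classic closed
    form as a sum of edge integrals (Lambert's formula).
    verts: polygon vertices in tangent space, shape (n, 3), CCW as seen from
    the shading point. Horizon clipping is omitted in this sketch.
    """
    # Transform to the cosine-distribution space, project onto the sphere.
    L = (M_inv @ verts.T).T
    L /= np.linalg.norm(L, axis=1, keepdims=True)
    # Sum of edge integrals; the z component gives the cosine-weighted result.
    total = 0.0
    n = len(L)
    for i in range(n):
        v1, v2 = L[i], L[(i + 1) % n]
        cr = np.cross(v1, v2)
        s = np.linalg.norm(cr)
        if s < 1e-9:
            continue  # skip degenerate edges
        theta = np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))
        total += theta * cr[2] / s
    return max(0.0, total / (2.0 * np.pi))

# Example: unit quad light two units above the shading point, identity LTC
# (i.e. a plain clamped cosine); the result is the polygon's form factor.
quad = np.array([[-1, -1, 2], [1, -1, 2], [1, 1, 2], [-1, 1, 2]], dtype=float)
E = ltc_evaluate(np.eye(3), quad)
```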

Additional Progress Towards the Unification of Microfacet and Microflake Theories
Jonathan Dupuy and Eric Heitz - EGSR 2016 (E&I)
We study the links between microfacet and microflake theories from the perspective of linear transport theory. In doing so, we gain additional insights, find several simplifications and touch upon important open questions as well as possible paths forward in extending the unification of surface and volume scattering models. First, we introduce a semi-infinite homogeneous exponential-free-path medium that (a) produces exactly the same light transport as the Smith microsurface scattering model and the inhomogeneous Smith medium that was recently introduced by Heitz et al., and (b) allows us to rederive all the Smith masking and shadowing functions in a simple way. Second, we investigate in detail what new aspects of linear transport theory enable a volume to act like a rough surface. We show that this is mostly due to the use of non-symmetric distributions of normals and explore how the violation of this symmetry impacts light transport within the microflake volume without breaking global reciprocity. Finally, we argue that the surface profiles that would be consistent with very rough Smith microsurfaces have geometrically implausible shapes. To overcome this, we discuss an extension of Smith theory in the volume setting that includes NDFs on the entire sphere in order to produce a single unified reflectance model capable of describing everything from a smooth flat mirror all the way to a semi-infinite isotropically scattering medium, with both low and high roughness regimes in between.