Render Engine Architecture
A render engine is a complex, intertwined piece of software. Therefore, in this first “real” blog post, I want to take the time to explore a variety of features, architecture decisions, and the (future) roadmap for Lumiere. My vision for Lumiere is twofold: it should be capable of both online (i.e. real-time) and offline rendering. While this may sound daunting or even redundant, it is good practice for me to learn different graphics pipelines and algorithms along the way (keep in mind that Lumiere is still a personal project).

Lumiere as an Offline Render Engine

The differences between online and offline rendering are significant. In online rendering, data in the form of 3D models and textures has to be continuously uploaded to the GPU, and the resulting frames should be ready within a time frame acceptable for real-time use (~60 FPS). At 60 FPS, we have a time budget of 16.7 milliseconds per frame, which is definitely not a lot for ray-tracing computations (and for users of the render engine to perform other game logic on top of that).
 
The focus of Lumiere, however, is to produce images for video production, motion graphics, photography, and so on. This is why, for now at least, Lumiere will be an offline renderer, meaning we are not restricted to a specific per-frame time budget. Of course, we still want the resulting image to be computed as fast as possible. To aid in this goal, several well-known concepts in the path-tracing universe will certainly be considered, such as:
  • Separation of geometry types: primitive, analytic, and procedural geometries.
  • BVH traversal for faster ray-geometry intersection computations.
  • Wavefront path tracing on the GPU.
  • Different BxDF implementations for a variety of material types.
  • Using spectral color representations instead of the typical tristimulus ones. This also introduces the ability to support features such as spectral materials and illuminants.
  • A wide variety of optimized light transport algorithms, including Monte Carlo (MC), Markov chain Monte Carlo (MCMC), and global illumination (GI) methods, together with several optimizations found in the literature (a quick sketch of the basic MC estimator follows this list).
  • Different camera systems and sensor implementations that will allow us to give rendered images a specific look and feel.
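
To make the Monte Carlo item above concrete: light transport ultimately amounts to estimating the rendering equation. As a quick refresher (this is the standard textbook formulation, not anything Lumiere-specific), the outgoing radiance at a point $\mathbf{x}$ is

$$
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, \cos\theta_i \,\mathrm{d}\omega_i,
$$

which a Monte Carlo estimator approximates by sampling $N$ directions $\omega_k$ from a probability density $p$:

$$
\langle L_o \rangle = L_e + \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(\mathbf{x}, \omega_k, \omega_o)\, L_i(\mathbf{x}, \omega_k)\, \cos\theta_k}{p(\omega_k)}.
$$

Most of the optimizations found in the literature (importance sampling, MCMC mutation strategies, and so on) boil down to smarter choices of $p$ and of the samples $\omega_k$.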
In short, Lumiere should still be considered a personal project, with the ambitious goal of achieving state-of-the-art graphics while maintaining a modular and extensible framework for other developers. In the remainder of this post, we will dive a little deeper into the requirements for each of the concepts mentioned above. At the end, we will talk about the architecture of the render engine itself and what the future of these blog posts will look like.

Geometry Types

In today’s world there is a need to support a wide array of different geometry types. On the one hand, movie scenes often require thousands of complex, highly detailed polygon models to be rendered. Motion graphics, on the other hand, often rely on procedurally generated geometry such as splines or particle effects. Ideally, Lumiere supports this wide range of geometries. Currently, the plan is to support four different geometry categories. This separation is almost automatically induced by how ray-geometry intersections work for the geometries in each category. The categories currently planned for Lumiere are as follows:
  • Simple primitives (e.g. points, lines, spheres, planes). These are especially useful in debug rendering. Since they lend themselves really well to ray tracing, in some cases it will be beneficial for scenes to use them directly (e.g. a dome-shaped skybox is essentially a sphere).
  • Analytic geometry (e.g. triangles and quads). These are the main types of geometry we will (most probably) encounter when rendering a scene. Models are typically stored as meshes that represent the model data as triangles. Since scenes can contain up to millions of triangles, we need an efficient way to check whether a ray hits any of them. Support for this type of geometry will therefore include acceleration structures such as bounding volume hierarchies (BVHs). For analytic geometry, we can implement exact ray-geometry intersection algorithms such as Möller–Trumbore (a sketch follows this list).
  • Procedural geometry (e.g. hair, particles, noise fields). For these more complicated types of geometry, exact analytical solutions are hard (or even impossible) to obtain. Some renderers use polygonization algorithms to transform procedural geometries into triangles to allow for exact ray-geometry intersections. While this is preferable in real-time settings, the approach is inaccurate and undesirable in an offline renderer. Therefore, we will have to use a mixture of intersection algorithms: ray tracing for analytic geometries and ray marching (in the form of sphere tracing, or an optimized variant) for procedural geometries. The resulting intersection distances can easily be compared against each other to see which geometry was hit first.
  • Volumetrics (e.g. smoke, clouds, fire, fluids). Typically, volumes are represented as voxel grids or procedurally evaluated densities. We will have to rely on some sort of tracking method and transmittance estimation to properly trace rays across the scene. Ideally, we would like to integrate some grid acceleration structure (e.g. NanoVDB) to accelerate volume rendering.
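
As a concrete example of the exact intersection routines mentioned in the analytic geometry bullet, below is a minimal sketch of the Möller–Trumbore ray-triangle test. Note that the `Vec3` type and its helpers are standalone placeholders for illustration; Lumiere’s actual math types will look different.

```cpp
#include <cmath>
#include <optional>

// Minimal placeholder vector type for illustration only.
struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Möller–Trumbore ray-triangle intersection:
// returns the ray parameter t of the hit, or std::nullopt on a miss.
std::optional<float> intersectTriangle(Vec3 orig, Vec3 dir,
                                       Vec3 v0, Vec3 v1, Vec3 v2) {
    constexpr float kEps = 1e-7f;
    Vec3 e1 = v1 - v0;
    Vec3 e2 = v2 - v0;
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;  // ray parallel to the triangle
    float invDet = 1.0f / det;
    Vec3 s = orig - v0;
    float u = dot(s, p) * invDet;                    // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;                  // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * invDet;                   // hit distance along the ray
    return t > kEps ? std::optional<float>{t} : std::nullopt;
}
```

A nice property of this algorithm is that it computes the barycentric coordinates (u, v) of the hit point along the way, so rejecting a miss is just a couple of range checks. This is one reason it is such a popular default for triangle meshes.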
From this summary of geometry types alone, we can see that modularity will be key in designing the render engine: support for different types of geometry requires different intersection methods, integrator types, and so on. The sketch below illustrates what such a shared intersection interface could look like.
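
Here is a hypothetical sketch of that idea (all names such as `Geometry`, `Sphere`, and `SdfGeometry` are illustrative, not Lumiere’s actual API): an analytic sphere and a sphere-traced signed distance field both report a hit distance through the same interface, so a closest-hit query can simply compare distances across geometry categories, exactly as described in the procedural geometry bullet above.

```cpp
#include <cmath>
#include <functional>
#include <memory>
#include <optional>
#include <utility>
#include <vector>

// Minimal placeholder math types for illustration only.
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(float s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(Vec3 a) { return std::sqrt(dot(a, a)); }

struct Ray { Vec3 origin, dir; };  // dir is assumed to be normalized

// One shared interface across geometry categories:
// every geometry reports the distance t to its nearest hit, if any.
struct Geometry {
    virtual ~Geometry() = default;
    virtual std::optional<float> intersect(const Ray& ray) const = 0;
};

// Analytic geometry: exact quadratic solution for a sphere.
struct Sphere : Geometry {
    Vec3 center; float radius;
    Sphere(Vec3 c, float r) : center(c), radius(r) {}
    std::optional<float> intersect(const Ray& ray) const override {
        Vec3 oc = ray.origin - center;
        float b = dot(oc, ray.dir);
        float disc = b * b - (dot(oc, oc) - radius * radius);
        if (disc < 0.0f) return std::nullopt;
        float t = -b - std::sqrt(disc);  // nearest root only, for brevity
        return t > 1e-4f ? std::optional<float>{t} : std::nullopt;
    }
};

// Procedural geometry: sphere tracing against a signed distance field.
struct SdfGeometry : Geometry {
    std::function<float(Vec3)> sdf;  // distance to the surface at a point
    explicit SdfGeometry(std::function<float(Vec3)> f) : sdf(std::move(f)) {}
    std::optional<float> intersect(const Ray& ray) const override {
        float t = 0.0f;
        for (int i = 0; i < 256 && t < 1e4f; ++i) {
            float d = sdf(ray.origin + t * ray.dir);
            if (d < 1e-4f) return t;  // close enough to the surface: report a hit
            t += d;                   // the SDF guarantees this step is safe
        }
        return std::nullopt;          // gave up: treat as a miss
    }
};

// The integrator simply keeps the nearest hit across all categories.
std::optional<float> closestHit(const std::vector<std::unique_ptr<Geometry>>& scene,
                                const Ray& ray) {
    std::optional<float> best;
    for (const auto& g : scene)
        if (auto t = g->intersect(ray); t && (!best || *t < *best))
            best = t;
    return best;
}
```

A unit sphere at the origin could then live in the scene either analytically (`std::make_unique<Sphere>(Vec3{0, 0, 0}, 1.0f)`) or procedurally (`std::make_unique<SdfGeometry>([](Vec3 p) { return length(p) - 1.0f; })`), and the integrator never needs to know which category a hit came from.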

Bounding Volume Hierarchies

Wavefront Path Tracing

Materials and BxDFs

Spectral Rendering

Light Transport Algorithms

Camera System & Sensor