This post is the official announcement of the public release of the Lighthouse 2 rendering platform. The project can be found on GitHub:
Lighthouse rendering technology has been in development at Utrecht University for a while now. It has served as a starting point for several student projects, including Victor Voorhuis’ work on SVGF, Kevin van Mastrigt’s work on Q-learned importance sampling, and several ongoing projects aimed at the production of OpenCL and Embree render cores.
Lighthouse 2 is the successor to Lighthouse 1, which was mostly a classical CUDA path tracer, with custom BVH management and near-real-time performance. The arrival of RTX hardware made this ‘everything custom’ approach obsolete. Optix is very efficient at these tasks, but more importantly, the RTX cores are not exposed to CUDA at all, except through Optix (DX12 and Vulkan also appear to use Optix under the hood).
Reaching peak performance on NVidia’s RTX hardware thus requires adopting an API, which for me personally is a big deal. Especially once I found out that Optix Prime (which seemed particularly suitable for RTX) does not and will not support the ray tracing cores, and that Optix, used in the suggested way, is not nearly as performant as Optix Prime. I ended up with two renderers, both implementing the wavefront algorithm (streaming path tracing, as it was originally named by Dietger van Antwerpen). Optix was used for ray tracing and nothing else; between tracing batches of rays, the two renderers used virtually identical CUDA code to process the intersection results. This naturally led to the concept of render cores, fed by a generic RenderSystem that takes care of common functionality such as scene management.
“Rays per second is measured in millions, not thousands”, a wise man (Ingo Wald) once said. The Arauna real-time ray tracer could do 500M rays/s on a dual-Xeon system in 2012; more recent renderers such as Brigade 2 and Arauna2 reach 300M rays/s even when path tracing. I mention this because these numbers are much higher than the budgets typically assumed in DX12 ray tracing. By the way, the 10G+ figures some sources mention are for primary rays without shading, not for the throughput of a practical system. With the budgets typically assumed, ray tracing is limited to a gimmick role for now.
It doesn’t have to be, though. A proper renderer can get close to 1G rays/s on RTX hardware. On a 720p display, that yields roughly 18 rays per pixel at 60fps, i.e. enough for two path traced samples. Those samples recently got significantly better with blue noise patterns; add some filtering, reprojection and AI upsampling, and pure path tracing suddenly seems quite viable. And that is just the first generation of RTX; undoubtedly NVidia, AMD and Intel are preparing something better as we speak.
It is thus important to experiment with pure path tracing. A pure path tracer is elegant, efficient in terms of code and maintenance, and intuitive for artists to use. It’s great that ray tracing is making its way into games, but the obvious next step is a full replacement of rasterization. Experiments in that direction focus on filtering, sampling, estimators, many lights, dynamic geometry and so on. Such experiments are more useful when they are carried out in a framework that can actually reach peak performance, and thus not in an environment such as DX12, which primarily exists to ease the transition from rasterization to ray tracing by reducing the gap between the ‘old ways’ and ‘the right thing’. ;)
This is where Lighthouse 2 comes in. For experiments with estimators and BSDFs we already have Wenzel Jakob’s excellent Nori and Mitsuba (and the recently announced interactive fork), as well as PBRT v3. I cannot hope to match the quality of these products, at least for now. Nevertheless, what seems to be missing is a real-time platform.
‘Real-time’ comes with its own challenges. There are the usual things we don’t want to be bothered with when testing a cool morning shower thought: setting up a window, loading a scene, creating a UI. But real-time adds to that: setting up CUDA and/or OpenCL, initializing Optix, interfacing between C/C++ and the GPGPU code, GL interop, and so on. For some this may come naturally; for others it’s a hassle that spoils the fun and kills creativity. Sometimes you just want to focus on the BSDF, and the renderer should simply be a given. Or, the other way round: using an ‘it will do’ BSDF, you want to dabble with some many-lights ideas. Most of that will be just fine in the existing solutions, but sometimes you want to test your blazingly fast shader in a blazingly fast renderer. Without building it first.
Lighthouse 2 consists of three main layers:
- The application layer: use an existing application, or write your own, if you want to program some game logic or animation for a real-time ray tracer. Your application will work with all the available cores, and once new cores become available, it will work with those too.
- The application layer talks to the RenderSystem. This module is responsible for the host-side representation of the scene, including obj, fbx and gltf loading and scene graph management. The RenderSystem ensures that the render cores receive updates; to keep this efficient, it tracks which scene objects changed.
- The render cores are the low level rendering blocks. A render core receives meshes, instances, textures, materials and lights from the RenderSystem, and from there, the core does its magic. Basic unidirectional path tracing cores for Optix and Optix Prime are provided. A software rasterizer is also provided for reference.
The render cores are typically what you will want to work on if you decide that Lighthouse 2 may be a good match for you. In a typical workflow, you clone a core that is close to what you have in mind (there’s a .docx on that in the repo), remove the things you don’t need, and modify the core from there. The resulting project compiles to a DLL, which you can share in binary form; or you can share the source in a separate repository, for others to enjoy.
That’s it for now: Lighthouse 2 has been released. In the near future I plan to add a couple of blog posts detailing the existing cores; a walk-through of the wavefront path tracers is probably useful. I also hope to release some student work soon, involving an OpenCL core and an Embree core. Both cores are in pretty good shape, and make interesting starting points depending on your situation.