The Future is finally here!* (almost)
Deep Learning Super Sampling (DLSS) Ray Reconstruction is making its way into Blender (at least, it's in the works). And the results are, without exaggeration, astonishing.
Unlike "traditional" denoisers, DLSS has a different way of reconstructing an image, and doesn't really care about scene lighting or polygonal complexity. Because of this, it's used in games to "upscale" a rather low-res natively-rendered output to a much higher-res target, like 4K or more. It's very effective and, ironically, despite being an upscaler, nowadays is almost synonymous with the "best quality" rendering preset in games. Simply because more and more devs get lazy and spend less effort on optimization, especially anti-aliasing techniques, simply "offloading" this task to a dedicated (and, unfortunately deeply proprietary) neuralnet upscaler like DLSS.
I bet that, like me, you have always wondered if this very approach could be used in a traditional 3DCG creation app like Blender.
Well, wonder no more! Someone took an unfinished alpha version of the Blender codebase with DLSS integration and compiled it into an actual test build.
Prepare to be amazed.
Remember "Brigade"?
Looking at this, I can't help but speculate: if it weren't for the AI boom shifting GPU manufacturers' focus away from consumer graphics cards, fully ray-traced games could have become a reality within another one or two GPU generations. Remember the Brigade demo from SIGGRAPH 2012?
I remember.
Or this one, from 2013?
And then it all... faded away somehow. I expected some sort of announcement to follow, but it was largely forgotten, even following the release of Nvidia's RTX series of cards, which perplexes me to no end to this day.
Well, 14 years later we may finally get the real-time ray-traced or even path-traced future we've been waiting for. It's far from perfect, and it struggles with transparent shaders because it's not a "real" denoiser but more of a fancy post-processing effect. That makes sense, as it is meant to be used in games, where transparency is handled very differently, and where the geometric state of the scene can be properly interpreted by DLSS for spatial re-projection. BTW, if you're curious about real-time rendering techniques used in games, check out my post on this topic.
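To make that last point a bit more concrete, here's a purely conceptual Python sketch (explicitly not the real DLSS Ray Reconstruction interface) of the kind of per-pixel guide buffers a reprojection-based reconstructor typically consumes alongside the noisy color. With transparent shaders, several surfaces contribute to a single pixel, so one depth, normal, or motion vector per pixel is no longer well defined, which is roughly where such effects fall apart.

```python
# Conceptual sketch only -- not the actual DLSS Ray Reconstruction API.
from dataclasses import dataclass
import numpy as np

@dataclass
class FrameInputs:
    noisy_color: np.ndarray     # (H, W, 3) low-sample path-traced radiance
    depth: np.ndarray           # (H, W)    primary-hit depth
    normals: np.ndarray         # (H, W, 3) primary-hit shading normals
    albedo: np.ndarray          # (H, W, 3) surface base color
    motion_vectors: np.ndarray  # (H, W, 2) screen-space motion to previous frame

def reproject_previous(prev_color, motion_vectors):
    """Fetch last frame's pixels along the motion vectors (nearest-neighbor,
    purely illustrative) so their history can be reused by the reconstructor."""
    h, w = motion_vectors.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip((xs + motion_vectors[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys + motion_vectors[..., 1]).round().astype(int), 0, h - 1)
    return prev_color[src_y, src_x]
```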
Still, it could be very useful in many other scenes, especially indoor ones with multi-bounce, hard-to-calculate global illumination. I hope the video brings a lot of attention to this unfinished feature and convinces the Blender dev team to move it higher up the priority queue, and to somehow work out a deal with Nvidia to make this a reality.
Yes, it would end up as yet another vendor-locked feature, but I'd be the biggest hypocrite in the world if I didn't say that in this case I just don't care. Nvidia is effectively a monopoly already; the vast majority of 3D workstations are built with their GPUs. So it's only logical to let users make the most of their hardware.
I want this!