The Future is finally here!* (almost)
Deep Learning Super Sampling (DLSS) Ray Reconstruction is making its way into Blender (at least, it's in the works). And the results are, without exaggeration, astonishing.
Unlike "traditional" denoisers, DLSS has a different way of reconstructing an image, and doesn't really care about scene lighting or polygonal complexity. Because of this, it's used in games to "upscale" a rather low-res natively-rendered output to a much higher-res target, like 4K or more. It's very effective and, ironically, despite being an upscaler, nowadays is almost synonymous with the "best quality" rendering preset in games. Simply because more and more devs get lazy and spend less effort on optimization, especially anti-aliasing techniques, simply "offloading" this task to a dedicated (and, unfortunately deeply proprietary) neuralnet upscaler like DLSS.
I bet that, like me, you have always wondered whether this very approach could be used in a traditional 3D CG creation app like Blender.
Well, wonder no more! Someone took an unfinished alpha branch of the Blender codebase with a DLSS integration and compiled it into an actual test build.
Prepare to be amazed.
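For context, switching denoisers in stock Blender is a one-liner in its Python API, and a DLSS-based denoiser would presumably slot in the same way. Here's a minimal sketch: the 'OPTIX' and 'OPENIMAGEDENOISE' values are real Cycles options, while the 'DLSS' value is purely my guess at how such an unofficial build might expose it:

```python
import bpy

scene = bpy.context.scene
scene.cycles.use_denoising = True

# Stock Blender ships these denoisers:
scene.cycles.denoiser = 'OPTIX'               # NVIDIA's OptiX AI denoiser
# scene.cycles.denoiser = 'OPENIMAGEDENOISE'  # Intel's OIDN

# ASSUMPTION: an unofficial DLSS build might add an extra enum value
# along these lines. This is NOT a real Blender option.
# scene.cycles.denoiser = 'DLSS'
```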
Sad news for the open-source and open-standards community coming from the Blender team:
OpenCL rendering support was removed. The combination of the limited Cycles kernel implementation, driver bugs, and stalled OpenCL standard has made maintenance too difficult. We are working with hardware vendors to bring back GPU rendering support on AMD and Intel GPUs, using other APIs.
At the same time, the CUDA implementation saw noticeable improvements, in large part thanks to better utilization of NVIDIA's own OptiX library. So NVIDIA wins... TWICE:
GPU kernels and scheduling have been rewritten for better performance, with rendering often between 2-7x faster in real-world scenes.
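If you want the OptiX path on your own machine, it's exposed through Blender's regular Python preferences API. A minimal sketch using the stock bpy calls (the "cycles" add-on key and these property names are the real ones in current Blender):

```python
import bpy

# Point Cycles at NVIDIA's OptiX backend instead of plain CUDA.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = 'OPTIX'

# Refresh the device list and enable every detected GPU.
prefs.get_devices()
for device in prefs.devices:
    device.use = (device.type != 'CPU')  # GPU-only for this example

bpy.context.scene.cycles.device = 'GPU'
```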
All of this once again displays the real-world difficulties of developing, maintaining, and promoting open-source alternatives to commercial software that the manufacturer designs to utilize its hardware's capabilities to the max.
And the end result? We're falling deeper and deeper into the vendor lock-in trap. As vendors keep turning their proprietary hardware-interfacing software and APIs into state-of-the-art, ready-to-use solutions, the alternatives developed in a "democratic" environment keep tripping over their shoelaces, failing to gain traction in the very markets they set out to provide an alternative in.
This makes me grateful that open-source APIs that actually work do exist, like OpenGL and Vulkan. But... why are we still in a situation where Vulkan, "the next generation graphics and compute API", is still incapable of providing even the same level of functionality as the dreaded OpenCL, so that it could finally offer a real alternative to commercial compute APIs? Why didn't the Blender team even mention Vulkan as something they would look into as an alternative to OpenCL?
A million-dollar question...