

CGI Coffee

Blender 3.0 Removes OpenCL as Cycles GPU Rendering API

Sad news for the open-source and open-standards community coming from the Blender team:

OpenCL rendering support was removed. The combination of the limited Cycles kernel implementation, driver bugs, and stalled OpenCL standard has made maintenance too difficult. We are working with hardware vendors to bring back GPU rendering support on AMD and Intel GPUs, using other APIs.

At the same time, the CUDA implementation saw noticeable improvements, in large part thanks to better utilization of NVIDIA's own OptiX library:

  • Rendering of hair as 3D curves (instead of ribbons) is significantly faster, using native OptiX curve support.
  • OptiX kernel compilation time was significantly reduced.
  • Baking now supports hardware ray-tracing with OptiX.
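
For reference, switching Cycles over is a one-time preferences change; here's a minimal sketch using Blender's Python API (assuming Blender 3.0+ and an OptiX-capable NVIDIA GPU with recent drivers):

```python
import bpy

# Select OptiX as the Cycles compute backend in the user preferences
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPTIX'
prefs.get_devices()                        # refresh the detected device list
for device in prefs.devices:
    device.use = (device.type == 'OPTIX')  # enable only the OptiX devices

# Tell the current scene to render on the GPU
bpy.context.scene.cycles.device = 'GPU'
```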

So NVIDIA wins... TWICE:

GPU kernels and scheduling have been rewritten for better performance, with rendering often between 2-7x faster in real-world scenes.

Why Am I Not Celebrating This?

All of this once again displays the real-world difficulty of developing, maintaining and promoting open-source alternatives to commercial software that the manufacturer designs to utilize the hardware capabilities of its products to the max.

And the end result? We're falling deeper and deeper into the vendor lock-in trap. As the vendors keep turning their proprietary hardware-interfacing software and APIs into state-of-the-art, ready-to-use solutions, those developed in a "democratic" environment keep tripping over their shoelaces, failing to get any traction in the very market they set out to provide an alternative for.

This makes me grateful that there do exist open-source APIs that work, like OpenGL and Vulkan. But why are we still in a situation where Vulkan, "the next generation graphics and compute API", is incapable of providing even the same level of functionality as the dreaded OpenCL, so that it could finally offer a real alternative to commercial compute APIs? Why didn't the Blender team even mention Vulkan as something they would look into as an alternative to OpenCL?

A million-dollar question...

Unity and Unreal Engine: Real-Time Rendering VS Traditional 3DCG Rendering Approach

(Revised and updated as of June 2020)

Preamble

Before reading any further, please find the time to watch these. I promise, you won't regret it:

Now let's analyze what we just saw and make some important decisions. Let's begin with how all of this could be achieved with a "traditional" 3D CG approach and why that might not be the best path to follow in 2020 and beyond.

Linear pipeline and the One Man Crew problem

I touched upon this topic in one of my previous posts.

The "traditional" 3D CG-animated movie production pipeline is quite complicated. Not taking pre-production and animation/modeling/shading stages into consideration, it's a well-known fact that an A-grade animated film treats every camera angle as a "shot" and these shots differ a lot in requirements. Most of the time character and environment maps and even rigs would need to be tailored specifically for each one of those.

Shot-to-shot character shading differences in the same scene in The Adventures of Tintin (2011)

For example, if a shot features a close-up of a character's face, there is no need to subdivide the character's body every frame and feed it to the renderer. It also means, however, that the facial rig needs more controls, and that the face itself probably requires an extra triangle or two, a couple of additional animation controls/blendshapes, and displacement/normal maps for wrinkles and such.

But the worst thing is that the traditional pipeline is inherently linear.

Digital Production Pipeline

Thus you will only see pretty, production-quality images very late into the production process, especially if you rely on path-tracing render engines and lack the computing power to massively render out hundreds of frames. I mean, we are talking about an animated feature that runs at 24 frames per second. For a short 8-plus-minute film this translates into over 12 thousand still frames. And those aren't your straight-out-of-the-renderer beauty pictures: each final frame is a composite of several separate render passes, with special effects and other elements sprinkled on top.
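
To put numbers on it, here's the back-of-the-envelope math (the pass count below is a made-up example):

```python
fps = 24
minutes = 8.5                    # an "8-plus-minute" short film
frames = int(minutes * 60 * fps)
passes = 5                       # hypothetical number of render passes per frame
print(frames)                    # 12240 still frames
print(frames * passes)           # 61200 individual images to render, store and comp
```

And that's before a single re-render caused by a change in direction.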

Now imagine that at a later stage of the production you or the director decides to make adjustments. Well, shit. All of those comps you rendered out and polished in AE or Nuke? Worthless. Update your scenes, re-bake your simulations and render, render, render those passes all over again. Then comp. Then post.

Sounds fun, no?

You can imagine how much time it would take one illiterate amateur to plan and carry out all of the shots in such a manner. It would be just silly to attempt such a feat.

Therefore, the bar of what I consider acceptable given the resources available at my disposal keeps getting...

Lower.

There! I finally said it! It's called reality check, okay? It's a good thing. Admitting you have a problem is the first step towards a solution, right?

Right!?..

Oops, wrong picture

All is not lost and it's certainly not the time to give up.

Am I still going to make use of Blend Shapes to improve facial animation? Absolutely, since animation is the most important aspect of any animated film.

But am I going to do realistic fluid simulation for large bodies of water (an ocean and its shore, in my case)? No. Not anymore. I'll settle for procedural Tessendorf waves. Like in this RND I did some time ago:
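
If you're wondering what "procedural Tessendorf waves" boil down to, here's a minimal numpy sketch of the classic FFT ocean heightfield (grid size, wind and amplitude values are arbitrary; a production version would add choppy displacement, normals and tiling):

```python
import numpy as np

def tessendorf_heightfield(N=256, L=100.0, wind=(1.0, 0.0), V=10.0,
                           A=1e-4, t=0.0, g=9.81, seed=0):
    """Height grid of a wind-driven ocean patch at time t (Tessendorf 2001)."""
    rng = np.random.default_rng(seed)
    n = np.arange(N) - N // 2
    kx, ky = np.meshgrid(2 * np.pi * n / L, 2 * np.pi * n / L)
    k = np.sqrt(kx**2 + ky**2)
    k[k == 0] = 1e-8                       # avoid division by zero at DC

    # Phillips spectrum: how wave energy is distributed over wave vectors
    wx, wy = np.array(wind) / np.hypot(*wind)
    L_w = V * V / g                        # largest wave for wind speed V
    phillips = (A * np.exp(-1.0 / (k * L_w) ** 2) / k**4
                * ((kx * wx + ky * wy) / k) ** 2)

    # Random initial spectrum h0(k)
    xi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    h0 = xi * np.sqrt(phillips / 2.0)

    # Animate with the deep-water dispersion relation omega = sqrt(g*|k|)
    omega = np.sqrt(g * k)
    h0_neg_conj = np.conj(np.roll(np.flip(h0), 1, axis=(0, 1)))  # h0*(-k)
    hkt = h0 * np.exp(1j * omega * t) + h0_neg_conj * np.exp(-1j * omega * t)

    # Inverse FFT back to a real-valued spatial height grid
    return np.real(np.fft.ifft2(np.fft.ifftshift(hkt)))

heights = tessendorf_heightfield(t=1.5)    # feed this to a displaced grid mesh
```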

Will I go over-the-top with cloth simulation for characters in all scenes? Nope. It's surprising how often you can get away with skinned or bone-rigged clothes instead of actually simulating them, or even use real-time solvers on mid-poly meshes without caching the results... But now I'm getting a bit ahead of myself...

Luckily, there is a way to introduce the "fun" factor back into the process!

And the contemporary off-the-shelf game engines may provide a solution.


Maya 2017 ships with Arnold now. Goodbye, Mental Ray

SIGGRAPH 2016 is full of surprises.

Autodesk announced that with Maya 2017 they decided to ditch Mental Ray and replace it with Arnold. I gotta say... Of all the things AD has done over the years...

This is kinda cool.

Arnold render in Maya 2017

Still, AD being AD, batch rendering will cost you extra.

Luckily, interactive rendering (that is, rendering from within Maya) doesn't require a separate Arnold license. This means that Maya now ships with probably the most renowned production rendering solution (albeit CPU-only) by default.
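
Switching an existing scene over to Arnold takes two lines of script; a minimal sketch, assuming the bundled MtoA plug-in is installed:

```python
from maya import cmds

# Load the Arnold-for-Maya plug-in (MtoA) and make Arnold the active renderer
cmds.loadPlugin('mtoa', quiet=True)
cmds.setAttr('defaultRenderGlobals.currentRenderer', 'arnold', type='string')
```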

rendering with Arnold

Not bad... Not bad at all, AD.

Create and render OpenVDB volume grids with Softimage and Redshift renderer

Note: a detailed tutorial is now available.

Since the current version of Redshift requires OpenVDB-compliant voxel grids for its volume rendering, we need to somehow generate and export .vdb files from Softimage (not everybody has access to Houdini, you know).

OpenVDB logo

Thanks to user Mr.Core from the SI Community, we have a set of ICE compounds to do just that.

Mr.Core provided compounds to voxelize particles and geometry, as well as to perform operations on .vdb grids and polygonize them.

Here's an example on how to make a vdb cloud in Softimage, export it to a file and render with Redshift:
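
The ICE graph itself doesn't lend itself to a code listing, but to give an idea of what the exported file contains, here's a minimal sketch that writes a comparable fog-volume density grid with OpenVDB's own Python bindings (pyopenvdb), entirely outside of Softimage; the spherical falloff and voxel size are arbitrary:

```python
import numpy as np
import pyopenvdb as vdb

# A procedural "cloud": spherical density falloff in a 64^3 voxel array
N = 64
x, y, z = np.mgrid[:N, :N, :N]
r = np.sqrt((x - N / 2) ** 2 + (y - N / 2) ** 2 + (z - N / 2) ** 2)
density = np.clip(1.0 - r / (N / 2), 0.0, 1.0).astype(np.float32)

grid = vdb.FloatGrid()
grid.copyFromArray(density)                # dense numpy array -> sparse VDB tree
grid.transform = vdb.createLinearTransform(voxelSize=0.1)
grid.gridClass = vdb.GridClass.FOG_VOLUME
grid.name = 'density'                      # the channel name the renderer looks up

vdb.write('cloud.vdb', grids=[grid])       # ready to load into a Redshift volume
```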