NVIDIA and its partners, as well as AAA developers and game-engine gurus like Epic Games, keep throwing impressive demos at us at an accelerating rate.
These feature Microsoft's recently announced real-time ray tracing toolset for DirectX 12, as well as the (claimed) performance benefits of NVIDIA's proprietary RTX technology available on its Volta GPU lineup. In theory, this should give developers new tools for achieving never-before-seen realism in games and real-time visual applications.
There's a demo by the Epic team I found particularly impressive:
Looking at these beautiful images, one might expect NVIDIA RTX and DirectX DXR to do more than they are actually capable of. Some might even think the time has come when we can ray trace an entire scene in real time and say goodbye to good old rasterization.
There's an excellent article over at PC Perspective you should definitely check out if you're interested in the current state of the technology and the relationship between Microsoft DirectX Raytracing and NVIDIA RTX. Without any explanation, that relationship can be quite confusing: NVIDIA heavily focuses on RTX as natively hardware-accelerated tech, whilst Microsoft stresses that DirectX DXR is an extension of the existing DX12 toolset, compatible with all future certified DX12-capable graphics cards (since the world of computer graphics doesn't revolve solely around NVIDIA and its products, you know).
So here I am to quickly summarize what RTX and DXR are really capable of at the moment of writing and what they are good (and not so good) for.
Whenever I hear the term "real-time ray tracing", I immediately think of some of the earlier RTRT experiments done by Ray Tracey and OTOY's Brigade. You know, those impressive, yet noisy and not-quite-real-time demos with mirror-only reflections and lots and lots of noisy rendered frames blended together over time.
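That frame-blending trick those demos relied on is essentially temporal accumulation: you keep a running average of the noisy frames so the noise cancels out over time. Here's a minimal toy sketch of the idea in NumPy (my own illustration of the general principle, not how Brigade actually implements it; the `alpha` value is an assumption):

```python
import numpy as np

def accumulate(history, new_frame, alpha=0.1):
    """Exponential moving average of noisy frames.

    Small alpha -> smoother image, but more ghosting
    whenever the camera or objects move.
    """
    return (1.0 - alpha) * history + alpha * new_frame

# Toy example: a constant "true" image plus fresh per-frame noise.
rng = np.random.default_rng(0)
truth = np.full((4, 4), 0.5)
history = truth + rng.normal(0.0, 0.2, truth.shape)  # first noisy frame
for _ in range(200):
    noisy = truth + rng.normal(0.0, 0.2, truth.shape)
    history = accumulate(history, noisy)

# After enough frames the accumulated image converges toward the truth.
print(np.abs(history - truth).max())
```

The catch, of course, is that the average is only valid while nothing moves, which is why those early demos smeared and ghosted the moment the camera turned.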
Like this one:
I wouldn't have dreamed of seeing something actually ray-traced in real time at 30 fps, without any noise, within the next 5-10 years. Little did I know, NVIDIA and Microsoft had the same idea and put their best minds to the task.
This demo, developed by EA (believe it or not), is running on NVIDIA's newest lineup of Volta GPUs, which means that Volta is also on the way! Yay! NVIDIA's RTX tech sure looks promising.
Can you imagine what will happen to offline CUDA ray-tracers following this announcement? Hopefully their devs will be able to make this amazing tech a part of the rendering pipeline ASAP. Otherwise, C'est la vie: you've been REKT by a real-time ray-tracing solution.
Just kidding. We gotta test this thing out first, and only then will we be able to tell whether we've been led to believe in yet another fairy tale, or whether you need like 8 Voltas to run this demo, which would be a letdown.
I'm starting to question my life choices...
I mean, I've managed all kinds of projects, produced countless videos, published a small game, and decided to work on an animated feature film. All for one thing: to make some noise, get noticed, and maybe even make a buck or two along the way. You know, the basics.
Alas, trapped within the confines of my inflexible mind and obsolete world view, I would never be able to come up with something as beautiful and inspiring as this:
Please, take a deep breath. Pause. Then repeat this once again, proud and aloud, and let it sink in:
SUCK YOUR WAY TO PANTSU PARADISE!
This is exactly what you mean when you declare:
This is impressive. You need a special mindset to come up with something as bizarre as this franchise, bring these games to fruition as real commercial projects, and actually sell them.
But not just sell them. No-no-no! Sell them with a DLC. Yeah, just an innocent tiny $90 DLC. Which, naturally, gives the lucky player the power to zoom in and undress the characters.
Am I going to play this... game? Doubt it. Am I impressed by the mere fact that such a project exists and sells more or less well? Hell yeah!
Basic instincts, man... That's where it's at. Exploit and prosper. Seems like PQUBE LTD nailed it (he-he, get it?) when it comes to providing a quality product for their target audience. If it sells, it sells. Simple as that.
Screw the film! Forbes 100, here I come!
This year at GDC, Vicon will be demonstrating its new Shōgun 1.2 MoCap platform. At its core it's an optical MoCap solution (quite different from inertial systems like Perception Neuron), the kind commonly used in professional production.
Vicon's Shōgun and Shōgun Live are well-known MoCap solutions used by Hollywood filmmakers and AAA game devs worldwide.
This time the company comes to GDC with a treat: they will be coupling Shōgun with VR headsets to immerse GDC visitors in interactive virtual worlds in what they are calling a "VR multiplayer escape room".
What's even cooler is that, as far as I can tell, all MoCap data will be streamed directly into Unreal Engine 4. I believe this will make the engine even more popular among game devs, especially those interested in VR applications (sorry, Unity).
Anyway, if this is something you might be interested in, you can read more about the event at Vicon's official website.
The new TMNT makes me so happy, yay! Splinter is my favorite. Who's yours?
Before reading any further, please find the time to watch these. I promise, you won't regret it:
Now let's analyze what we just saw and make some important decisions. Let's begin with how all of this could be achieved with a "traditional" 3D CG approach and why it might not be the best path to follow in the year 2018 and up.
I touched upon this topic in one of my previous posts.
The "traditional" 3D CG animated movie production pipeline is quite complicated. Even leaving pre-production and the animation/modeling/shading stages aside, an A-grade animated film treats every camera angle as a "shot", and these shots differ a lot in their requirements. Most of the time, character and environment maps, and even rigs, need to be tailored specifically for each one of them.
For example, if a shot features a close-up of a character's face, there is no need to subdivide the character's whole body every frame and feed it to the renderer. On the other hand, the facial rig needs more controls, and the face probably requires an extra triangle or two, a couple of additional animation controls/blendshapes, and displacement/normal maps for wrinkles and such.
But the worst thing is that the traditional pipeline is inherently linear.
Thus you will only see pretty production-quality images very late in the production process, especially if you rely on path-tracing render engines and lack the computing power to render out hundreds of frames at a time. I mean, we are talking about an animated feature that runs at 24 frames per second: for a short 8-plus-minute film this translates into over 12 thousand still frames. And those aren't your straight-out-of-the-renderer beauty pictures. Each final frame is a composite of several separate render passes, with special effects and other elements sprinkled on top.
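The back-of-the-envelope math is easy to sanity-check. A quick sketch (the exact runtime and the pass count per frame are illustrative assumptions on my part):

```python
FPS = 24                 # feature animation standard
runtime_minutes = 8.5    # an "8-plus-minute" short film (assumed value)
passes_per_frame = 5     # beauty, diffuse, specular, AO, Z-depth... (assumed)

frames = int(runtime_minutes * 60 * FPS)
renders = frames * passes_per_frame

print(frames)   # 12240 still frames -- "over 12 thousand"
print(renders)  # 61200 individual pass renders to produce, track and comp
```

And that's before a single re-render triggered by notes from the director.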
Now imagine that at a later stage of the production you or the director decides to make adjustments. Well, shit. All of those comps you rendered out and polished in AE or Nuke? Worthless. Update your scenes, re-bake your simulations and render, render, render those passes all over again. Then comp. Then post.
Sounds fun, no?
You can imagine how much time it would take one illiterate amateur to plan and carry out all of the shots in such a manner. It would be just silly to attempt such a feat.
Therefore, the bar of what I consider acceptable given the resources available at my disposal keeps getting...
There! I finally said it! It's called reality check, okay? It's a good thing. Admitting you have a problem is the first step towards a solution, right?
All is not lost and it's certainly not the time to give up.
Am I still going to make use of Blend Shapes to improve facial animation? Absolutely, since animation is the most important aspect of any animated film.
But am I going to do realistic fluid simulation for large bodies of water (the ocean and the ocean shore in my case)? No. Not anymore. I'll settle for procedural Tessendorf waves, like in this RnD I did some time ago:
Will I go over the top with cloth simulation for characters in all scenes? Nope. It's surprising how often you can get away with skinned or bone-rigged clothes instead of actually simulating them, or even make use of real-time solvers on mid-poly meshes without caching the results... But now I'm getting a bit ahead of myself...
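For the curious, here's why Tessendorf waves are so appealing for a one-person pipeline: the surface is synthesized from a statistical wave spectrum with an inverse FFT, so it animates procedurally from a time parameter alone, with nothing to simulate or cache. A bare-bones single-frame sketch in NumPy (grid size, wind parameters, and the simplified Phillips-spectrum constants are my own illustrative choices; a production version also adds the conjugate spectrum term to keep the field exactly real, and computes displacement and normals too):

```python
import numpy as np

def ocean_heightfield(n=64, size=100.0, wind=(1.0, 0.0), wind_speed=20.0,
                      g=9.81, amp=1e-4, t=0.0, seed=42):
    """One frame of a Tessendorf-style FFT ocean heightfield (simplified)."""
    rng = np.random.default_rng(seed)
    # Wave-vector grid for an n x n patch of `size` meters.
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=size / n)
    kx, ky = np.meshgrid(k, k)
    k_len = np.hypot(kx, ky)
    k_len[0, 0] = 1e-6                       # avoid division by zero at DC

    # Phillips spectrum: how wave energy is distributed over wave vectors.
    L = wind_speed ** 2 / g                  # largest wave the wind sustains
    wind_dir = np.array(wind) / np.linalg.norm(wind)
    k_dot_w = (kx * wind_dir[0] + ky * wind_dir[1]) / k_len
    phillips = amp * np.exp(-1.0 / (k_len * L) ** 2) / k_len ** 4 * k_dot_w ** 2
    phillips[0, 0] = 0.0                     # no DC component

    # Random initial amplitudes h0(k); phases evolve with the deep-water
    # dispersion relation omega(k) = sqrt(g * |k|).
    noise = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    h0 = noise * np.sqrt(phillips / 2.0)
    h_t = h0 * np.exp(1j * np.sqrt(g * k_len) * t)

    # Inverse FFT turns the spectrum into a spatial heightfield.
    # (Taking .real here is a shortcut for the Hermitian-symmetry step.)
    return np.fft.ifft2(h_t).real * n * n

heights = ocean_heightfield(t=0.0)
print(heights.shape)  # (64, 64)
```

Re-running with a different `t` gives the next animation frame for free, which is exactly the point: no caches, no re-simulation when a shot changes.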
Luckily, there is a way to introduce the "fun" factor back into the process!
And the contemporary off-the-shelf game engines may provide a solution.
While the year 2018 may have started with a large meltdown for the rest of the world, we, the 3D folk, are still getting great news following my previous blog post. This time it's Effex by Navie (formerly DPit Nature Spirit), a fast and robust particles & fluid simulation framework for C4D, which recently went free. Effex comes with lots of goodies, including an excellent FLIP solver I think I'm in love with.
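In case you're wondering what makes FLIP solvers so beloved: plain PIC transfers overwrite each particle's velocity with the interpolated grid velocity, which is stable but smears out detail, while FLIP adds only the *change* in grid velocity back to the particle, preserving liveliness. Production solvers blend the two. A toy 1D illustration of that blend (my own sketch of the general technique, not Effex's actual code; the 0.95 ratio is a commonly cited default, assumed here):

```python
import numpy as np

def interp(grid, x, dx):
    """Linear interpolation of a 1D grid quantity at particle position x."""
    i = int(x / dx)
    f = x / dx - i
    return (1 - f) * grid[i] + f * grid[i + 1]

def pic_flip_blend(v_p, x_p, grid_old, grid_new, dx, flip_ratio=0.95):
    """Transfer grid velocities back to one particle.

    PIC: replace the particle velocity (stable, dissipative).
    FLIP: add only the grid velocity change (lively, detail-preserving).
    """
    pic = interp(grid_new, x_p, dx)
    flip = v_p + interp(grid_new - grid_old, x_p, dx)
    return flip_ratio * flip + (1 - flip_ratio) * pic

# Toy step: gravity applied to the grid velocities for one timestep.
dx, dt, g = 1.0, 0.1, -9.81
grid_old = np.zeros(4)
grid_new = grid_old + g * dt          # grid velocities after forces
v_new = pic_flip_blend(v_p=2.0, x_p=1.5, grid_old=grid_old,
                       grid_new=grid_new, dx=dx)
print(v_new)  # particle keeps most of its own motion plus the gravity kick
```

With `flip_ratio=0.0` you'd get pure PIC and the particle's own 2.0 would be thrown away entirely; at 0.95 it keeps its character while still feeling the grid forces.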
It can do lots of stuff. See for yourselves:
Anyway, just grab a copy from GitHub.
Although, the fact that such great products go free makes me uneasy for some reason. I can't stop thinking about the developers. Hopefully they're okay and aren't doing this because of problems selling the product...