Now imagine yourself in the shoes of Autodesk management. You have hundreds of thousands of customers and people who want to enter the 3DCG industry. And you tell your shareholders: "Hey, let's take away professional and entry-level perpetual software licenses and make subscriptions the only option for our customers. Yeah, subscriptions. The worst possible thing for a freelancer. What a great idea! It will surely stand the test of time!"
Now tell me: after a myriad of amazing Blender updates, with more and more companies becoming sponsors of the Blender Foundation, and now NVIDIA with their RTX + AI denoiser integrated into the software this well...
Tell me how many more potential Autodesk clients will turn to Blender to never look back?
Don't ask me why my mind immediately went to Autodesk of all things... Probably because I will never forgive them for killing off Softimage and then going the subscription-only route.
Ahem, excuse me. Anyway...
Well done, Blender and NVIDIA!
NVIDIA and its partners, as well as AAA developers and game engine gurus like Epic Games, keep throwing their impressive demos at us at an accelerating rate.
These feature the recently announced real-time ray tracing toolset of Microsoft DirectX 12 as well as the performance benefits claimed for NVIDIA's proprietary RTX technology available in their Volta GPU lineup, which in theory should give developers new tools for achieving never-before-seen realism in games and real-time visual applications.
There's a demo by the Epic team I found particularly impressive:
Looking at these beautiful images, one might expect NVIDIA RTX and DirectX DXR to do more than they are actually capable of. Some might even think the time has come when we can ray trace the whole scene in real time and say goodbye to good old rasterization.
There's an excellent article over at PC Perspective you should definitely check out if you're interested in the current state of the technology and in the relationship between Microsoft DirectX Raytracing and NVIDIA RTX, which can be quite confusing without an explanation: NVIDIA heavily focuses on RTX as native, hardware-accelerated tech, while Microsoft stresses that DirectX DXR is an extension of the existing DX12 toolset, compatible with all future certified DX12-capable graphics cards (since the world of computer graphics doesn't revolve solely around NVIDIA and its products, you know).
So here I am to quickly summarize what RTX and DXR are really capable of at the time of writing, and what they are good (and not so good) for.
Whenever I hear the term "real-time ray tracing", I immediately think of some of the earlier RTRT experiments done by Ray Tracey and OTOY's Brigade. You know, those impressive yet noisy and not-quite-real-time demos with mirror-only reflections and lots and lots of noisy rendered frames being accumulated over time.
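That accumulation trick (averaging many noisy frames so the noise cancels out) is easy to illustrate. Here's my own toy sketch in Python with NumPy, not code from any of those demos; the "renderer" is just a single noisy pixel estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Ground truth" radiance for a single pixel
true_value = 0.5

def render_noisy_frame():
    """Each rendered frame is a noisy one-sample estimate of the true value."""
    return true_value + rng.normal(0.0, 0.2)

# Progressive accumulation: keep a running average of all frames so far
accum = 0.0
for n in range(1, 257):
    accum += (render_noisy_frame() - accum) / n  # incremental mean

# After 256 frames the running average sits close to the true value,
# because averaging N independent samples shrinks the error by ~1/sqrt(N)
print(abs(accum - true_value))
```

That 1/sqrt(N) convergence is exactly why those early demos only looked clean when the camera stopped moving: every camera cut threw the accumulated history away and the noise came back.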
Like this one:
I wouldn't have dreamed of seeing something actually ray traced in real time at 30 fps without any noise within the next 5-10 years. Little did I know, NVIDIA and Microsoft had the same idea and put their best minds to the task.
This demo, developed by EA (believe it or not), is running on NVIDIA's newest lineup of Volta GPUs, which means Volta is also on the way! Yay! NVIDIA RTX tech sure looks promising.
Can you imagine what will happen to offline CUDA ray-tracers following this announcement? Hopefully their devs will be able to make this amazing tech a part of the rendering pipeline ASAP. Otherwise, C'est la vie: you've been REKT by a real-time ray-tracing solution.
Just kidding. We gotta test this thing out first, and only then will we be able to tell whether we've been led to believe in yet another fairy tale, or whether you need like 8 Voltas to run this demo, which would be a letdown.
In case you missed the announcement, here's some truly great news:
"To better serve the game development community we now offer Direct3D 11/12 implementations of the Flex solver in addition to our existing CUDA solver. This allows Flex to run across vendors on all D3D11 class GPUs. Direct3D gives great performance across a wide range of devices and supports the full Flex feature set."
Yes! FleX is no longer a CUDA-only physics library! NVIDIA's devs have also utilized Async Compute to make it as efficient as possible with D3D.
Check out the GDC talk.
Hopefully we'll start seeing more FleX in upcoming AAA titles (or maybe even indie ones? Who knows!)
Beware the strawberry milkshake monster!
Simulations are hard.
When it comes to doing simulations on meshes with a finite number of vertices, it's relatively easy to achieve the desired results. But as soon as you try taming hundreds of thousands or even millions of particles, you're in trouble, especially when it comes to fluid simulations. You need a special kind of solver, a powerful rig (or a network of rigs) and a lot of patience. It took me by surprise how difficult seemingly trivial simulations can be.
In the animated film I'm working on there will be bodies of water large and small, plus some gaseous fluids in the background for increased production value.
If you're a freelancer or a hobbyist on a budget who needs to simulate some fluids, the off-the-shelf tools available on the market can be a good choice... But there are so many of them that figuring out their differences, pros and cons is a quest in itself. In this post I'll explore some of the ways an amateur like me can do various fluid-like simulations and what technologies are there to help get the job done.
I'll briefly cover two of the most well-known and renowned fluid sims on the market: Naiad and RealFlow.
There was a time when you could only purchase a single Naiad license for $5,500 or rent it quarterly for about $1,400. Luckily those times are over: in 2012 Naiad was sold to Autodesk and turned into Maya's Bifrost, so now you can get your hands on Naiad tech within Maya for just $185 a month. You can find out more about Bifrost in this blog post at Digitaltutors. It's a powerful FLIP solver (more on this method below), and it's well integrated into Maya too, with GPU caching, the ability to play back tens of thousands or even millions of particles in real time directly within the DCC, and a variety of tools for artistic direction of your simulations.
Then there's RealFlow, which comes with several solvers to choose from (SPH, PBF, HYBRIDO) and, with its Dyverso particle solver (the one that uses PBF), gives you the ability to simulate on the CPU or GPU, the latter using OpenCL for computations. You can read more about RealFlow's solvers here. Overall, RealFlow isn't terribly slow and scales well when you give it lots of cores to work with, but as soon as you realize your hardware limitations, and the fact that the cheapest single-seat license with the C4D integration costs over 750 bucks, you start looking for other solutions.
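To give a feel for what an SPH solver actually computes every step, here's a minimal per-particle density estimate using the standard poly6 smoothing kernel. This is a textbook sketch in Python/NumPy (brute-force all-pairs, no neighbor grid), not code from RealFlow or any other product:

```python
import numpy as np

def poly6(r2, h):
    """Poly6 smoothing kernel evaluated on squared distances r2,
    with smoothing radius h; zero outside the radius."""
    coeff = 315.0 / (64.0 * np.pi * h**9)
    return np.where(r2 < h * h, coeff * np.maximum(h * h - r2, 0.0) ** 3, 0.0)

def sph_density(positions, h=0.1, mass=1.0):
    """Estimate density at each particle by summing kernel-weighted
    masses of all particles within the smoothing radius."""
    diff = positions[:, None, :] - positions[None, :, :]  # pairwise offsets
    r2 = np.sum(diff * diff, axis=-1)                     # squared distances
    return mass * poly6(r2, h).sum(axis=1)

# A tiny 5x5x5 blob of particles on a regular grid
grid = np.linspace(0.0, 0.2, 5)
pts = np.array([[x, y, z] for x in grid for y in grid for z in grid])
rho = sph_density(pts, h=0.1)
# Interior particles see more neighbors, so their density comes out higher
print(rho.min(), rho.max())
```

From densities you derive pressures, from pressures forces, and so on, millions of particles at a time, with neighbor searches at every step. That's where the "powerful rig and a lot of patience" part comes from.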
I won't spend too much time on the different types of solvers available on the market; I'll only mention some of them for the sake of argument. There's an excellent (albeit slightly dated) article on the subject at fxguide explaining them in detail if you're interested in finding out more.
This is the first post demonstrating what NVIDIA PhysX FleX is capable of when it comes to high-quality simulations. I'm planning to show how it can be used for all kinds of simulations in upcoming blog posts. There's also a cool demonstration video below.
FleX is a particle-based simulation framework developed by NVIDIA for real-time visual effects. The idea is the following: instead of having a bunch of solvers, one for each type of body (rigid, soft, fluid, cloth, etc.), why not create a unified solver based on the concept of using particles (or "molecules" if you prefer) to represent the bodies? Then make this solver run on modern GPUs to deliver unprecedented simulation speed, and you can actually use the results for real-time simulations in games or interactive presentations.
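The position-based dynamics loop that this kind of unified particle approach is built on can be sketched in a few lines. This is the generic PBD scheme from the graphics literature applied to a toy rope (particles joined by distance constraints), written by me in Python/NumPy; it is not actual FleX code:

```python
import numpy as np

def pbd_step(x, v, dt=0.016, rest_len=1.0, iterations=10, gravity=-9.8):
    """One position-based dynamics step for a chain of 2D particles
    connected by distance constraints (a crude rope)."""
    # 1. Predict positions from current velocities plus external forces
    v = v + dt * np.array([0.0, gravity])
    p = x + dt * v
    # 2. Iteratively project the constraints onto the *predicted* positions
    for _ in range(iterations):
        for i in range(len(p) - 1):
            d = p[i + 1] - p[i]
            dist = np.linalg.norm(d)
            if dist > 1e-9:
                corr = 0.5 * (dist - rest_len) * d / dist
                p[i] += corr        # pull both endpoints toward
                p[i + 1] -= corr    # the rest length
        p[0] = x[0]  # re-pin the anchored particle after each pass
    # 3. Velocities fall out of the position change; no force integration
    v = (p - x) / dt
    return p, v

# Three-particle rope, starting out horizontal, pinned at the origin
x = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
v = np.zeros_like(x)
for _ in range(100):
    x, v = pbd_step(x, v)
# The rope swings down under gravity while segment lengths stay
# close to rest_len, with no stiff force integration to blow up
```

The appeal for real-time use is exactly what the snippet shows: the constraint projection is unconditionally stable and embarrassingly parallel per constraint, which is what makes it such a good fit for the GPU.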
Now, we all know what “real-time performance” means when it comes to the “offline” CGI... ;)
Out of curiosity, I decided to quickly compare the cost and rendering performance of several CUDA-capable GPUs.
Scores are taken from the official OctaneBench results database. Prices are from local online merchants in my area (tax/VAT included), so your mileage may vary. I used my GTX 970's results as the reference; the fact that it's one of the most popular enthusiast-grade GPUs on the market also helps.
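The comparison itself is just score-per-dollar arithmetic. Here's the kind of calculation behind the chart; the scores and prices below are placeholders I made up for illustration, so substitute real numbers from the OctaneBench database and your local shops:

```python
# Hypothetical OctaneBench scores and local prices (tax included).
# These are illustrative placeholders, NOT real benchmark data.
gpus = {
    "GTX 970":    {"score": 89,  "price": 330},   # reference card
    "GTX 780 Ti": {"score": 103, "price": 250},   # used
    "GTX 980 Ti": {"score": 131, "price": 650},
    "GTX Titan X": {"score": 130, "price": 1000},
}

ref = gpus["GTX 970"]["score"]

# Rank the cards by raw bang-for-buck (score per currency unit)
ranked = sorted(gpus.items(),
                key=lambda kv: kv[1]["score"] / kv[1]["price"],
                reverse=True)

for name, g in ranked:
    relative = g["score"] / ref           # speed vs the GTX 970 baseline
    per_dollar = g["score"] / g["price"]  # bang-for-buck
    print(f"{name:11s} {relative:5.2f}x vs 970, {per_dollar:.3f} score/$")
```

With any plausible set of used-market prices the ranking shifts around, which is why it pays to redo this arithmetic with your own local numbers before buying anything.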
I didn't account for power consumption, because electricity rates vary a lot from area to area; if you're planning to run your render farm 24/7 (which is not recommended), you can look into it yourself.
A (used) NVIDIA GTX 780 Ti is the clear winner here. So if you're thinking about building a render farm to use with Octane or Redshift, check it out: it's a bargain.
As soon as GTX 1080 CUDA compute/rendering results become available, I will add them to the chart. Don't expect a miracle though. Preliminary benchmark results from various forums, including Redshift's and LuxMark's, show that the GTX 1080 is pretty much just a tad faster than the 980 Ti and Titan, and in some heavy scenes it can actually be slower than the two! Hope it's just a driver problem and we'll see at least the 20-25% improvement over the previous flagship models promised by NVIDIA.
Otherwise it will be a grave disappointment.