CGI Coffee


Revelations of a Computer Graphics Apprentice

EA Games FIFA 19. A Failed Paid Reviews Extravaganza and Why You Should Never Trust Popular Game and Entertainment Media

This is a very quick post about modern PR practices and why not a single medium-to-large-scale video game and entertainment media website can be trusted. Never-ever-never. Ever.

Disclaimer: I don't play sports games, least of all soccer games. I just don't care about football. This post is exclusively about the dirty PR practices employed by commercial companies and the PR agencies they work with.

The Case

On September 25th, EA Games released a new title in their well-known sports series: FIFA 19. I won't go into detail about what a... controversial product it ended up being. If you're interested, please refer to the following videos by YongYea: one, two, three.

I'm here to talk about how the game scored at one of the better-known review aggregation sites out there — Metacritic.

Here's the FIFA 19 for PS4 page on Metacritic in its current state (October 5th):

metacritic-page

Ports of the game for PC, Xbox 360 and other platforms don't fare much better and don't differ all that much, so I'll just focus on the platform with the most reviews and critic scores — PS4.

Note the sharp disparity between "Critic" and "User" scores for the product.

EA is a big Commercial Entity, and its CEO knows the company needs to maintain good sales of its products. When it comes to review websites and social networks, the most common way to do so has always been... paid reviews. This doesn't come as a shock to anyone: everyone is aware that a good 15-30% of all reviews on the Internet should be taken with a grain of salt, so to speak.

The Reviews

As a web project manager working for a public commercial company, I have a pretty good understanding of how paid reviews should be approached: how to create new virtual users and how to make the best use of those accounts as well as of existing ones.

And I can definitely tell you this is not how you do it.

Reality Check. What Is NVIDIA RTX Technology? What Is DirectX DXR? Here's What They Can and Cannot Do

NVIDIA and its partners, as well as AAA developers and game engine gurus like Epic Games, keep throwing their impressive demos at us at an accelerating rate.

These demos feature the recently announced real-time ray tracing toolset of Microsoft DirectX 12, as well as the (claimed) performance benefits of NVIDIA's proprietary RTX technology available in their Volta GPU lineup. In theory, this should give developers new tools for achieving never-before-seen realism in games and real-time visual applications.

There's a demo by the Epic team I found particularly impressive:

Looking at these beautiful images, one might expect NVIDIA RTX and DirectX DXR to do more than they are actually capable of. Some might even think the time has come when we can ray trace a whole scene in real time and say goodbye to the good old rasterization.

There's an excellent article over at PC Perspective you should definitely check out if you're interested in the current state of the technology and the relationship between Microsoft DirectX Raytracing and NVIDIA RTX. Without any explanation, that relationship can be quite confusing: NVIDIA heavily focuses on RTX being natively hardware-accelerated tech, whilst Microsoft stresses that DirectX DXR is an extension of the existing DX12 toolset, compatible with all future certified DX12-capable graphics cards (since the world of computer graphics doesn't revolve solely around NVIDIA and its products, you know).

So here I am to quickly summarize what RTX and DXR are really capable of at the time of writing, and what they are good (and not so good) for.

NVIDIA Volta + Microsoft DX12 = Real-time ray-tracing. Finally?

Whenever I hear the term "real-time ray-tracing", I immediately think of some of the earlier RTRT experiments done by Ray Tracey and OTOY's Brigade. You know, those impressive, yet noisy and not-quite-real-time demos with mirror-only reflections and lots and lots of noisy rendered frames convolved together.

Like this one:

I wouldn't have dreamed of seeing something actually ray-traced in real time at 30fps without any noise within the next 5-10 years. Little did I know, NVIDIA and Microsoft had other ideas and put their best minds to the task.

Results? Well...

Delivered 100%.

This demo, developed by EA (believe it or not), is running on NVIDIA's newest lineup of Volta GPUs, which means Volta is also on the way! Yay! NVIDIA RTX tech sure looks promising.

Can you imagine what will happen to offline CUDA ray-tracers following this announcement? Hopefully their devs will be able to make this amazing tech a part of their rendering pipelines ASAP. Otherwise, c'est la vie: you've been REKT by a real-time ray-tracing solution.

Just kidding. We've gotta test this thing out first, and only then will we be able to tell whether we've been led to believe in yet another fairy tale, or whether you need like 8 Voltas to run this demo, which would be a letdown.

Check out the announcement.

A Lesson in Creativity. Gal*Gun 2

I'm starting to question my life choices...

I mean, I've managed all kinds of projects, produced countless videos, published a small game, decided to work on an animated feature film. All for one thing: to make some noise, get noticed and maybe even make a buck or two on the way. You know, the basics.

Alas, trapped within the confines of my inflexible mind and obsolete world view, I would never be able to come up with something as beautiful and inspiring as this:

Please, take a deep breath. Pause. Then repeat this once again, proud and aloud, and let it sink in:

"SUCK YOUR WAY TO PANTSU PARADISE!"

Feels good man

Exquisite.

This is exactly what you mean when you declare:

To boldly go

This is impressive. You need a special mindset to come up with something as bizarre as this franchise, bring these games to fruition in the form of real commercial projects and actually sell them.

But not just sell them. No-no-no! Sell them with a DLC. Yeah, just an innocent tiny $90 DLC. Which, naturally, gives the lucky player the power to zoom in and undress the characters.

Well, naturally.

This is what you call a DLC!

Am I going to play this... game? Doubt it. Am I impressed by the mere fact that such a project exists and sells more or less well? Hell yeah!

Basic instincts, man... That's where it's at. Exploit and prosper. Seems like PQube Ltd nailed it (he-he, get it?) when it comes to providing a quality product for their target audience. If it sells, it sells. Simple as that.

Screw the film! Forbes 100, here I come!

Gotta code fast!

Vicon's VR multiplayer escape room at GDC 2018 featuring Shōgun 1.2

This year at GDC, Vicon will be demonstrating its new Shōgun 1.2 MoCap platform. At its core, it's an optical MoCap solution (as opposed to inertial ones like Perception Neuron), the kind commonly used in professional production.

Vicon Shogun Live Actors

Vicon's Shōgun and Shōgun Live are well-known MoCap solutions used by Hollywood filmmakers and AAA game devs worldwide.

This time the company comes to GDC with a treat: they will be coupling Shōgun with VR headsets to immerse GDC visitors in interactive virtual worlds in what they are calling a "VR multiplayer escape room".

What's even cooler is that, as far as I can tell, all MoCap data will be streamed directly into Unreal Engine 4. I believe this will make the engine even more popular among game devs, especially those interested in VR applications (sorry, Unity).

Shogun live cameraman

Anyway, if this is something you might be interested in, you can read more about the event at Vicon's official website.

The Rise of the NEW Teenage Mutant Ninja Turtles!

The new TMNT makes me so happy, yay! Splinter is my favorite. Who's yours?

New 2018 ninja Turtles

Unity and Unreal Engine: Real-Time Rendering vs. Traditional 3DCG Rendering Approach

(Revised and updated as of June 2020)

Preamble

Before reading any further, please find the time to watch these. I promise, you won't regret it:

Now let's analyze what we just saw and make some important decisions. Let's begin with how all of this could be achieved with a "traditional" 3D CG approach, and why it might not be the best path to follow in 2020 and beyond.

Linear pipeline and the One Man Crew problem

I touched upon this topic in one of my previous posts.

The "traditional" 3D CG-animated movie production pipeline is quite complicated. Not taking pre-production and animation/modeling/shading stages into consideration, it's a well-known fact that an A-grade animated film treats every camera angle as a "shot" and these shots differ a lot in requirements. Most of the time character and environment maps and even rigs would need to be tailored specifically for each one of those.

Shot-to-shot character shading differences in the same scene in The Adventures of Tintin (2011)

For example, if a shot features a close-up of a character's face, there is no need to subdivide the character's body each frame and feed it to the renderer. On the other hand, the facial rig needs more controls, and the face probably requires an additional triangle or two, a couple of extra animation controls/blendshapes, plus displacement/normal maps for wrinkles and such.
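To make that concrete, here's a minimal sketch of per-shot asset tailoring (the shot names, fields and helper are all hypothetical, not any particular studio's pipeline):

```python
# Hypothetical per-shot asset overrides: each shot picks the cheapest
# rig/mesh variant that still holds up at its camera distance.
SHOT_OVERRIDES = {
    "sh010_wide":    {"body_subdiv": 2, "face_controls": "basic",    "wrinkle_maps": False},
    "sh020_closeup": {"body_subdiv": 0, "face_controls": "extended", "wrinkle_maps": True},
}

DEFAULTS = {"body_subdiv": 1, "face_controls": "basic", "wrinkle_maps": False}

def resolve_assets(shot: str) -> dict:
    """Merge a shot's overrides over the pipeline defaults."""
    return {**DEFAULTS, **SHOT_OVERRIDES.get(shot, {})}

print(resolve_assets("sh020_closeup"))
# {'body_subdiv': 0, 'face_controls': 'extended', 'wrinkle_maps': True}
```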

But the worst thing is that the traditional pipeline is inherently linear.

Digital Production Pipeline

Thus you will only see pretty, production-quality images very late in the production process, especially if you are relying on path-tracing render engines and lack the computing power to massively render out hundreds of frames. I mean, we are talking about an animated feature that runs at 24 frames per second. For a short 8-plus-minute film this translates into over 12 thousand still frames. And those aren't your straight-out-of-the-renderer beauty pictures: each final frame is a composite of several separate render passes, with special effects and other elements sprinkled on top.
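For a sense of scale, here's the frame math plus a toy version of that final composite step (the resolution, pass names and additive recombination are illustrative assumptions, not any specific renderer's AOV layout):

```python
import numpy as np

# 24 fps over an 8.5-minute short:
frames = int(8.5 * 60 * 24)
print(frames)  # 12240 final frames, each assembled from multiple passes

# Toy composite: many renderers let you split the beauty render into
# additive light components and recombine them in linear color space.
h, w = 1080, 1920
diffuse  = np.zeros((h, w, 3), dtype=np.float32)  # stand-ins for real passes
specular = np.zeros((h, w, 3), dtype=np.float32)
emission = np.zeros((h, w, 3), dtype=np.float32)

beauty = diffuse + specular + emission  # tweakable without re-rendering...
# ...but only until a change requires re-rendering the passes themselves.
```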

Now imagine that at a later stage of production you or the director decide to make adjustments. Well, shit. All of those comps you rendered out and polished in AE or Nuke? Worthless. Update your scenes, re-bake your simulations and render, render, render those passes all over again. Then comp. Then post.

Sounds fun, no?

You can imagine how much time it would take one illiterate amateur to plan and carry out all of the shots in such a manner. It would be just silly to attempt such a feat.

Therefore, the bar of what I consider acceptable, given the resources at my disposal, keeps getting...

Lower.

There! I finally said it! It's called reality check, okay? It's a good thing. Admitting you have a problem is the first step towards a solution, right?

Right!?..

welcome-to-aa Oops, wrong picture

All is not lost and it's certainly not the time to give up.

Am I still going to make use of Blend Shapes to improve facial animation? Absolutely, since animation is the most important aspect of any animated film.
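(For the uninitiated: a blend shape is just a per-vertex offset between a base mesh and a sculpted target, mixed in by an animated weight. A minimal sketch with stand-in numpy arrays:)

```python
import numpy as np

# Stand-in data: a face mesh with N vertices, plus two pretend sculpts.
base = np.zeros((1000, 3), dtype=np.float32)      # N x 3 vertex positions
targets = {
    "smile": base + np.float32(0.01),
    "blink": base - np.float32(0.02),
}
weights = {"smile": 0.7, "blink": 0.2}            # keyframed per frame

# Classic linear blend: result = base + sum_i( w_i * (target_i - base) )
deformed = base + sum(
    w * (targets[name] - base) for name, w in weights.items()
)
```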

But am I going to do realistic fluid simulation for large bodies of water (an ocean and an ocean shore, in my case)? No. Not anymore. I'll settle for procedural Tessendorf waves, like in this RND I did some time ago:
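In a nutshell, Tessendorf's method synthesizes an ocean heightfield from a statistical wave spectrum and animates it with an inverse FFT per frame. A toy sketch, assuming numpy and skipping the conjugate-symmetry term, choppiness and normals that a production version would need:

```python
import numpy as np

N, L = 64, 100.0      # grid resolution and patch size (meters)
g, wind = 9.81, 8.0   # gravity, wind speed along +x
A = 1e-4              # overall spectrum amplitude

# Wave vector for every FFT bin
k = np.fft.fftfreq(N, d=L / N) * 2.0 * np.pi
kx, kz = np.meshgrid(k, k)
k_len = np.hypot(kx, kz)
k_len[0, 0] = 1e-6    # dodge division by zero at the DC bin

# Phillips spectrum: how much energy each wave vector carries
L_w = wind**2 / g
P = A * np.exp(-1.0 / (k_len * L_w) ** 2) / k_len**4 * (kx / k_len) ** 2

# Random initial spectral amplitudes h0(k)
rng = np.random.default_rng(0)
h0 = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) * np.sqrt(P / 2.0)

def heightfield(t: float) -> np.ndarray:
    """N x N grid of wave heights at time t (deep-water dispersion w = sqrt(g|k|))."""
    return np.real(np.fft.ifft2(h0 * np.exp(1j * np.sqrt(g * k_len) * t)))

heights = heightfield(1.0)
```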

Will I go over-the-top with cloth simulation for characters in all scenes? Nope. It's surprising how often you can get away with skinned or bone-rigged clothes instead of actually simulating them, or even use real-time solvers on mid-poly meshes without caching the results... But now I'm getting a bit ahead of myself...

Luckily, there is a way to introduce the "fun" factor back into the process!

And the contemporary off-the-shelf game engines may provide a solution.

game-engines