Production


CGI coffee

Production blogs


Perception Neuron Inertial Motion Capture vs Optical Mocap Systems And The First Production Motion Capture Session Experience

Well, the first motion capture session is over, and I finally found time to write a post about this exciting experience. The video of the process is still in the works, and I'll probably be able to post it after finalizing the fourth and final revision of the previz.

Or not, since it's getting more and more difficult to find time for logging. Go figure.

Armen mocap session experience

Hence today I'll mostly talk nonsense about the differences between optical and inertial motion capture systems, and what pros and cons the latter has compared with traditional optical systems like Optitrack. It's something I had to study before investing in the full inertial Neuron MoCap kit, to make sure I'd get the most bang for my buck when recording action sources for the film.

Optical tracking systems

Pretty sure you're familiar with those "classic" motion capture markers that cameras and specialized software use to track each marker's spatial data, so the calculated translation can be transferred onto 3D objects or skeletons. If not, here's an excellent article at Engadget to get you up to speed.
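To make the idea concrete, here's a hedged sketch (my own illustration, not any vendor's actual code) of the core math an optical system performs: once two calibrated cameras see the same marker, its 3D position can be recovered by linear (DLT) triangulation. The camera matrices and pixel coordinates below are toy values:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a marker's 3D position from two calibrated camera views
    via linear (DLT) triangulation.  P1, P2 are 3x4 projection matrices;
    uv1, uv2 are the marker's 2D image coordinates in each view."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 for the homogeneous point X via SVD
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one shifted one unit along X
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

marker = np.array([0.3, 0.2, 2.0])          # ground-truth marker position
uv1 = P1 @ np.append(marker, 1.0)
uv1 = uv1[:2] / uv1[2]                      # project into view 1
uv2 = P2 @ np.append(marker, 1.0)
uv2 = uv2[:2] / uv2[2]                      # project into view 2

print(triangulate(P1, P2, uv1, uv2))        # recovers the marker position
```

Real systems do this with dozens of cameras and noisy detections, so they solve a least-squares version of the same equation, but the principle is identical.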

Looking good, Mr. Serkis!

Among optical MoCap solutions, Optitrack is arguably one of the best known. The guys at OT aim to provide the most affordable optical MoCap cameras and gear. Here's the simplest set-up one would need to capture motion with an optical system:

Getting ready for the first production MoCap session and some motion capture examples

I've been quite busy playing with the Perception Neuron 32-sensor Motion Capture kit recently. Long story short, it's an amazingly affordable inertial full-body MoCap solution that can produce seriously impressive captures (some amateur examples below). By affordable I mean really affordable: the only rival that comes to mind is the Xsens MVN suit, which starts at about $12,000. You can now see why I was so excited to receive my very own complete inertial MoCap suit for a fraction of Xsens' price.

perception neuron 32 mocap kit

I received it about a week ago (not long after I reviewed the 6-neuron Lite kit) and still can't stop wondering how the NOITOM team managed to achieve this level of performance at such a democratic price.


The kit comes with everything you need to start capturing right away (apart from a 2-amp USB power bank, which you'll need for captures over WiFi).


Everything is neatly packed and grouped by body part. The cool thing is that this is a modular MoCap system: if you don't need data from all 32 neurons, you can equip only certain parts, connect them to the hub and capture.

The magnificent neurons

I've also found out that the neurons' cases (the actual sensors, not the storage cases) are made of aluminum, so they're very sturdy. This helps, since you need to pop them in and out every time you put on or take off the suit.

Does anyone else get the "Minority Report" vibe?

I believe the gloves are the highlight of the system:

Perception Neuron MoCap gloves

Being able to capture hands is really something! Imagine animating all those digits frame by frame! Granted, the captured data is not production-ready off the bat, but it's stable enough that you can get to cleaning right away and save tens, no, hundreds of hours of animation work!
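To give a feel for what that "cleaning" involves, here's a minimal sketch (my own illustration, not Axis Neuron's actual filter) of the simplest kind of clean-up pass: a centered moving average that takes the jitter out of a noisy rotation channel while keeping the underlying motion:

```python
import numpy as np

def smooth_channel(values, window=5):
    """Centered moving-average filter for one animation channel
    (e.g. a single Euler rotation curve), edge-padded so the clip
    keeps its original length.  Use an odd window size."""
    if window < 2:
        return np.asarray(values, dtype=float)
    pad = window // 2
    padded = np.pad(np.asarray(values, dtype=float), pad, mode="edge")
    return np.convolve(padded, np.ones(window) / window, mode="valid")

# A clean wrist-rotation curve with synthetic sensor jitter on top
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 120)              # 120 frames
clean = 30.0 * np.sin(2.0 * np.pi * t)      # degrees
noisy = clean + rng.normal(0.0, 1.5, t.size)

filtered = smooth_channel(noisy, window=7)
```

Production tools use fancier filters, of course, but the trade-off is the same: a wider window means smoother curves at the cost of softening fast, deliberate motion.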

This Saturday I'll be doing my very first production motion capture session. To satisfy your curiosity while I record and edit the video of the session, I quickly put together a couple of short clips I produced while testing the kit.

In the video:

  1. The very first attempts at capturing and re-targeting raw motion capture data (hence the shakiness) without any filtering whatsoever.
  2. Demonstration of how full-body capture with hands and fingers can help tell a story in real-time.
  3. A finished re-targeting rig and some nuanced finger capture. One-click filtering only.
  4. Just some random rendered scene.

Note: one of these will probably end up as a very short animation I'll make before the main 8-minute film, to practice every aspect of a feature production. Try to guess which one it is. =)

Well done, NOITOM, really, really well done. Thank you for making top-quality, affordable MoCap available to the masses!

happy perception neuron awesome face

Thank you for reading and stay tuned for more MoCap related posts.

Film production and the upcoming blog post series

As promised, I will do my best to document each and every step of producing the short animated film (for archival purposes, of course, for no one should ever take some weekend scientist's ramblings seriously).

Therefore, I'm starting a series of blog posts under the "Production" category, which I'll gradually fill with new articles along the way.

Preliminaries

Film production is not a new experience for me. I've produced and directed several short films and a couple of music videos over the years with my trusty line of Canon EOS cameras, starting with the very first entry-level EOS 550D capable of recording full HD 24p video.

Canon EOS 550D camera

It has long since been superseded by a series of upgrades, the latest being the EOS 750D with the Cinestyle picture profile and the Magic Lantern firmware hack.

Kate Steadycam recording

I have a fair bit of practice working cameras, including rentals such as RED and Blackmagic, plus the accompanying gear: all kinds of lenses, steadycams, cranes, mics and such.

Olga LS steadycam snapshot

Chuplin stories

Motherhackers trailer

Daniel Lesden music video

Long story short, I produced a couple of stories, which at some point led to a series of videos for a client that called for a massive amount of planar and 3D camera tracking, chroma keying, rig removal and, finally, the introduction of CG elements into footage. It felt like a baptism by fire, and in the end it's what made me fall in love with VFX and, ultimately, 3D CGI.

Now what does this have to do with the topic of pure CG film production?

How and why it's different

In traditional cinema you write the script, plan your shots, gather actors and crew, scout locations, then shoot your takes, and then edit and post, edit and post, until you're done. If you plan well, you end up with plenty of footage to choose from in the edit. This is how I've been making my films and other videos for a long time.

Animated feature production is vastly different from the traditional "in-camera" routine. Even movies with a heavy dose of CGI (Transformers, anyone?) aren't quite the same, since there you're still mostly dealing with footage that has already been shot, whereas a CG-only production has no such foundation. Everything has to be created from scratch. Duh!

The first CG-only "animation" I had produced up to this day was a trailer for my iOS game Run and Rock-it Kristie:

Here I had to adapt and change my routine a bit, but even then 80% of the footage came from three special locations in the game that I built specifically for the trailer. So the trailer was basically "shot" with a modified in-game follow-camera rig, leaving plenty of footage for me to edit afterwards.
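For the curious, the idea behind such a follow-camera rig boils down to easing the camera toward an offset from the character each frame instead of snapping to it. This is a hypothetical sketch in plain Python; the game's actual rig, names and constants are not shown here:

```python
def follow(cam_pos, target_pos, offset, stiffness=0.15):
    """Each frame, move the camera a fraction of the way toward the
    character's position plus a fixed offset, instead of snapping to it.
    The resulting lag is what reads as a human camera operator."""
    return [c + stiffness * (t + o - c)
            for c, t, o in zip(cam_pos, target_pos, offset)]

cam = [0.0, 2.0, -5.0]
offset = [0.0, 2.0, -5.0]          # stay behind and above the character
for frame in range(120):           # character runs along +X for 120 frames
    kristie = [frame * 0.1, 0.0, 0.0]
    cam = follow(cam, kristie, offset)
```

A lower `stiffness` gives a lazier, more cinematic trailing shot; a higher one keeps the subject locked in frame.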

Run and Rock-it Kristie trailer scene layout

In a way, it was quite close to what I had gotten used to over the years.

The process

As soon as you decide to go full CG, boy oh boy, are you in trouble.

Since I'm not experienced enough in animated CG production, I'll give the word to DreamWorks and their gang of adorable penguins from Madagascar and let them describe in detail what it takes to produce an animated film:

Wow, that's a lot of production steps...

So in the long run you're free to create your own worlds and creatures, and to set up stories and shots exactly the way you want them; the possibilities are limitless. The trade-off is a much more complicated production process, one that calls for building lots of things we take for granted when shooting with a camera: people, movement, environments and locations, weather effects, and many other objects and phenomena that are either already there for you to capture on film or can be created in-camera, on set or in post.

Hell, even when it comes to camera work: unless you have access to the sweet tech James Cameron and Steven Spielberg used when "shooting" their animated features (Avatar and The Adventures of Tintin: The Secret of the Unicorn, respectively), you'll have to animate the camera in your DCC, or shoot and track real camera footage, to get that sweet handheld or steadycam look.
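One cheap way to fake that handheld look without tracked footage is to layer a few sine waves at unrelated frequencies on top of the camera's rotation and bake the result as keys. This is purely my own illustrative sketch; the frequencies and amplitudes are invented, not measured from real footage:

```python
import math

def handheld_jitter(t, amp=0.35, layers=((1.1, 1.0), (2.7, 0.5), (6.3, 0.2))):
    """Fake handheld sway for one rotation axis (in degrees) by summing
    a few sine waves at unrelated frequencies.  Each layer is a
    (frequency_hz, weight) pair; the phase offset avoids all layers
    crossing zero at t = 0."""
    return amp * sum(w * math.sin(2.0 * math.pi * f * t + f)
                     for f, w in layers)

fps = 24
# Bake one jitter key per frame for a 3-second shot; in a DCC these
# values would be layered on top of the camera's animated rotation.
keys = [handheld_jitter(frame / fps) for frame in range(3 * fps)]
```

Slow, heavy layers read as body sway; the small fast layer reads as hand tremor. Real tracked footage is still richer, but this gets you surprisingly far for previz.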

James Cameron virtual camera, Spielberg TinTin virtual camera rig

Unless you have this kind of technology, working with cameras in 3D will be a pain in the a$$.

Scared yet?

So, all things considered, I should probably feel overwhelmed and terrified by what lies ahead. Right?

Nope.

It's just a Project. And as a project manager, I thrive on challenges, see every complication as a chance to learn new skills, and follow each and every project through to the end. This one will be no different.

Thank you for reading and stay tuned!

Armen on location

The games are over. It's previz time

All right, gang. As the title suggests, the games are over and I'm officially working on a previz (previsualization) of my upcoming short film.

The way I see it, and judging by how the big studios do it, the first thing to make is a very basic 2D previz, which lets you estimate scene and event timings and check whether the whole story "works". I've already done this using GIMP and then edited all the segments together in After Effects.

2D previz of the short film

Coupled with the soundtrack I composed together with a talented music producer, I finally "saw" the whole thing "outside of my own head". And it works! The story, the tempo, the camera angles: just like I originally imagined them. Now it's time to bring the whole thing to life.

I'm currently redoing the previz in 3D, as you can see from the screenshot below. This will help polish the timings further and show actual animated camera and character movement, which is uber-cool.

XSI previz screenshot

For character rigging I'm using the wonderful Exocortex Species tool-set. It lets me not only generate MoCap- and manual-animation-ready rigs from my production geometry with the click of a button, but also quickly create previz-ready characters, which is a true blessing.

I will be talking more about Species as I go along.

XSI window with Species rig

And yes, the whole movie, every single scene, animation and 3D effect, will be done in the one and only Softimage and rendered with Redshift. For compositing and editing I'll use After Effects CS6, which I was able to license a couple of months before the whole monthly-subscription thing came along. I'm not very fond of that model, but I digress.

It is on, is what I'm saying.

It. Is. On.