July 31, 2006

Contour: A Novel Technique for Modeling and Capture

by Andrew Stern · 4:20 pm

I suppose the big game industry news of the day is the cancellation of the yearly E3 tradeshow (who gives a crap, it was just a big marketing fest), but more interesting is the announcement of a new technology for digitally capturing super-high resolution models and motion of actors, called Contour. See articles in the NYTimes and Wall Street Journal. It’s developed by entrepreneur and inventor Steve Perlman (veteran Apple guy, General Magic, WebTV) and will be demoed at this week’s Siggraph in Boston. See and read more at his website, Mova.com.

Instead of placing a mesh of glowing dots all over the actor’s face and filming her from various angles to create a moderately hi-res model and motion capture, Contour mixes fluorescent powder into the actor’s makeup and captures monochromatic shaded images of the actor’s face while she performs under seemingly normal lighting conditions, made possible with modified strobe-like fluorescent lights. The result is an extremely high resolution digital model, photographed textures and motion capture of the actor’s face. (Animators have to manually add detail to places the makeup can’t go, like eyeballs and the inside of the mouth.) Effectively, each grain of makeup acts like a motion-capture dot, allowing for very high resolution, low-cost capture: “volumetric cinematography”. Brilliant! (Literally.)
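
If you want a concrete feel for how that works, here is a minimal sketch of the general principle as I understand it (this is not Mova’s actual pipeline; the function name, the calibration matrix Q and the use of OpenCV’s stereo matcher are just illustrative assumptions). Because the phosphor speckle gives every patch of skin a unique, high-contrast texture, dense correspondence between two synchronized camera views plus triangulation can recover a 3D point for nearly every pixel, every frame:

```python
# Minimal sketch of "each grain of makeup is a motion-capture dot":
# the random phosphor speckle makes every pixel's neighborhood matchable,
# so dense stereo correspondence + triangulation yields a 3D surface per frame.
# Not Mova's pipeline -- OpenCV's stereo matcher is used here as a stand-in.
import cv2
import numpy as np

def reconstruct_frame(left_path, right_path, Q):
    """Recover a dense point cloud from one pair of rectified phosphor images.

    Q is the 4x4 disparity-to-depth matrix from stereo calibration
    (e.g. as produced by cv2.stereoRectify); assumed to exist already.
    """
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

    # Block matching works here because the fluorescent makeup is a dense,
    # high-contrast random texture, unlike bare skin under normal light.
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,
                                    blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    points = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 world coordinates
    valid = disparity > 0                          # drop unmatched pixels
    return points[valid]                           # (N, 3) point cloud for this frame
```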

This has immediate applications to filmmaking, as the articles describe, as well as to motion-capture-oriented videogames. In purely visual terms, Contour does seem to make major progress towards crossing the uncanny valley, for linear (non-interactive) playback of an actor’s performance.

But it does nothing to cross what one might call the uncanny valley of AI: how to generate believable interactive behavior. Canned motion capture sequences are of little help when implementing highly dynamic, procedurally animated interactive characters.

9 Responses to “Contour: A Novel Technique for Modeling and Capture”


  1. Breaker Says:

    I think it does help cross the uncanny valley. For non-interactive media, the high resolution helps capture detailed facial features that would otherwise be unsettlingly absent, especially skin texture, which is impossible to get with reflective dots. The uncanny valley exists in both interactive and non-interactive media, after all. See Final Fantasy for a great example of unsettling non-interactive CG.

    For interactive media, you have a low-cost method of generating lots of expression examples to calibrate a procedural expression model. The many cases can help a computer take the raw positional information and build a dynamic skeleton, flesh and skin model without human input. That model can then be animated procedurally in ways the actor never moved, or those same movements can be mapped onto a different model altogether (see the sketch at the end of this comment). Eye positioning is really important and missing from this particular technique, but I think eyes can be tracked from raw video with relative ease these days anyhow. As physical objects, eyes are relatively similar between people, absent a few variables like size and color. Eyelids and apparent eye shape may or may not be picked up properly by the make-up; it’s hard to know where the border is.

    So what’s left? Well the brain behind the mask. That’s the tough part!

    Still, it is quite helpful in my mind. Imagine if you could take data from feature-length movies and identify key frames of the actors’ and actresses’ expressions by running key-frame analysis against a generic database, then combine those identified sequences with generic expression key frames. That would make procedurally generated faces look a lot more natural.
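
    Here’s a rough sketch of the calibration idea above (the data layout and function names are made up for illustration, and it assumes the captured meshes are already in vertex correspondence): stack the frames, learn a small linear expression basis, then drive that basis with whatever weights you like, including poses the actor never actually hit.

```python
# Rough sketch: learn a linear expression basis from many captured frames,
# then pose the face procedurally. Data layout and names are assumptions,
# not part of Contour or any particular product.
import numpy as np

def learn_expression_basis(frames, num_components=30):
    """frames: (num_frames, num_vertices * 3) array of captured face meshes
    in vertex correspondence. Returns the mean face and a linear basis."""
    mean_face = frames.mean(axis=0)
    centered = frames - mean_face
    # SVD of the centered data gives the principal expression directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:num_components]   # basis: (num_components, num_vertices * 3)

def synthesize(mean_face, basis, weights):
    """Procedurally pose the face: any weight vector works, captured or not."""
    return mean_face + weights @ basis
```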

  2. andrew Says:

    You’re right that lots of raw sample data can be of help creating a procedural model, but it’s only a starting point.

    And you’re right that nearer-term facial expression techniques will be a hybrid of keyframe and procedural. Facade’s characters were.

  3. monkeysan Says:

    This is indeed an amazing development. However, since I don’t believe the uncanny valley really exists in the first place, I’m more interested in what it makes possible in media rather than whether it cuts across a theoretical construct that has never been validated with convincing evidence.

  4. josh g. Says:

    From what I heard, the evidence was King Kong on the XBox 360.

  5. Louis Dargin Says:

    Softimage’s Face|Robot product works with this. See the details here

  6. Steve Perlman Says:

    Well, whether or not the “Uncanny Valley” is a valid theoretical construct, it is still the case that it is extremely difficult (arguably impractical) to drive a convincingly photoreal human face with a marker-based capture system. Our brains are wired from birth to notice very subtle nuances in faces, and we are much more attuned to detecting slight defects in a face than we are in other parts of the body (e.g. the forearm). So, what might be a completely unnoticeable inaccuracy in modeling and animating most of the body may stand out very noticeably in the face.

    Contour provides a volumetric (i.e. in voxels) capture of a face approaching the same resolution as what you capture with a conventional HD camera (i.e. in pixels) from a single viewpoint. It looks just as photoreal in 3D as it looks in 2D as captured with a conventional camera. So, whether we describe this as being “beyond the Uncanny Valley”, or simply say that “the facial inaccuracies are not noticeable”, the end result is the same: you have a tool at your disposal to give you a starting point in 3D facial animation that is photoreal. Then, you can take it from there for whatever creative purpose suits your application.

    — Steve Perlman, president, Mova

  7. andrew Says:

    We’ve discussed the uncanny valley many times on GTxA; it’s definitely an issue, and I’m excited to see Contour making progress towards crossing it, for non-procedural animation anyway.

  8. randform » Blog Archive » the uncanny valley Says:

    […] chologist Prof. Siegfried Frey in his talk on the NMI conference. I read about it first on grandtextauto blog po […]

  9. christalnightshade Says:

    Hey, not a bad blog… anyway, just checking in. I see this is about photography and 3D, and I don’t have much to say on the matter because I don’t know what this valley thing is about. Though it is easy to change the pixels of a photograph when you have the right scanner, or by using a friend’s or a shop’s… hehe, anyway, carry on.
