
Faceshift: real-time in-game facial expressions using Kinect

The EPFL Computer Graphics and Geometry Laboratory in Switzerland carries out research in a number of areas, including computational symmetry, architectural geometry, shape analysis, geometric art, physics-based animation, and performance capture. It's that last one we're interested in today, though, as it promises to have a huge impact on video games.

The video above shows EPFL's latest research into performance capture using the Kinect motion controller. It's called Faceshift, and it provides a simple way to translate your own facial expressions onto a game character in real time. There's nothing to wear and no markers to apply: just point a Kinect sensor at your face and the Faceshift system does the hard work.
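Under the hood, EPFL's research drives a blendshape rig: the tracker estimates a weight for each basic expression every frame, and the avatar's mesh is deformed by blending per-expression offsets onto a neutral face. Faceshift hasn't published an API, so the sketch below is purely illustrative (the expression names, sample weights, and apply_blendshapes helper are all assumptions), but it shows what that retargeting step looks like in Python:

```python
import numpy as np

# Hypothetical per-frame tracker output: a weight in [0, 1] for each
# tracked expression (names and values are illustrative only).
tracked_weights = {"smile": 0.8, "jaw_open": 0.3, "brow_raise": 0.1}

def apply_blendshapes(neutral, deltas, weights):
    """Deform a neutral face mesh with weighted blendshape offsets.

    neutral: (V, 3) array of vertex positions for the rest pose.
    deltas:  dict mapping expression name -> (V, 3) offset array
             (expression pose minus neutral pose).
    weights: dict mapping expression name -> weight in [0, 1].
    """
    deformed = neutral.copy()
    for name, w in weights.items():
        if name in deltas:
            deformed += w * deltas[name]
    return deformed

# Toy 4-vertex "mesh" just to show the call shape; a real avatar
# would have thousands of vertices and artist-authored deltas.
neutral = np.zeros((4, 3))
deltas = {name: np.random.randn(4, 3) * 0.01 for name in tracked_weights}
frame = apply_blendshapes(neutral, deltas, tracked_weights)
```

The appeal of this representation is that the tracker never needs to know anything about the target character: any avatar that exposes the same set of expressions can be driven by the same stream of weights.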

You can see Faceshift's potential just by thinking about any conversation you've ever had with other players in a game. Rather than simply hearing a disembodied voice, you'd see their in-game character doing the talking, with facial expressions that match the speech.

Faceshift is also of use to game developers, allowing them to easily record the facial expressions of their voice actors and then apply them to game avatars. It promises to be a much cheaper (and quicker) solution than a motion capture studio, and the results should be good enough for everything short of high-quality cutscenes.
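For that kind of offline workflow, the capture doesn't have to drive a mesh live at all: the per-frame weights can simply be recorded as an animation clip and replayed later on any avatar with matching blendshapes. Here's a minimal sketch of that idea, assuming a hypothetical sample_weights callback pulling frames from the tracker and a plain JSON clip format (neither is anything Faceshift has announced):

```python
import json

def record_clip(sample_weights, duration_s, fps=30):
    """Sample per-frame blendshape weights into a timestamped clip.

    sample_weights: callable returning {expression_name: weight} for
                    the current frame (e.g. pulled from the tracker).
    """
    frames = []
    for i in range(int(duration_s * fps)):
        frames.append({"t": i / fps, "weights": sample_weights()})
    return {"fps": fps, "frames": frames}

def save_clip(clip, path):
    # A plain JSON file is enough for a prototype; a real pipeline
    # would likely bake this into the engine's animation format.
    with open(path, "w") as f:
        json.dump(clip, f, indent=2)

# Example: record two seconds from a fake "tracker" that always smiles.
clip = record_clip(lambda: {"smile": 0.7, "jaw_open": 0.2}, duration_s=2.0)
save_clip(clip, "take_01.json")
```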

There are no details on pricing yet, but EPFL aims to offer a studio version for game developers as well as an SDK that lets others build on top of the tech. I wouldn't be surprised to see Microsoft jump on this, though, and integrate it as a standard feature of Kinect. Who wouldn't want the option of real-time facial animation while speaking in a multiplayer game?

via The Verge

