The Animation Workflow Behind God of War's Amazing In-Game Cinematics

Posted July 11th, 2018 by Jim Thacker — Category: Events

God of War images courtesy of Santa Monica Studio

Lead artists Erica Pinto and Axel Grossman reveal Santa Monica Studio's cinematic animation workflow on God of War, and their tips for creating better game animation

As the newest game in Sony's long-running action-adventure franchise, God of War is a stunning reboot. The series' fastest-selling title to date reinvents monster-slaying protagonist Kratos as a middle-aged dad – and reinvents the game's visuals for PlayStation 4.

One particular highlight is the game's 100-plus cinematics, designed to blend seamlessly with gameplay. In The Making of God of War with Santa Monica Studio, a free event at Gnomon last month, Lead Animator Erica Pinto and Lead Character Technical Artist Axel Grossman revealed how these sequences were created and shared their tips for creating better game animation.

Browse Gnomon's character rigging and animation courses

A staging diagram for one of God of War's cinematics, showing the motion of the camera and actors

Live-action reference

Before motion capture could begin, the animation team worked with Director of Photography Dori Arazi to plan the shoots, blocking out character positions and camera moves. Often this included small thumbnail storyboards or staging diagrams of the kind shown above.

“If this looks like a football play, or a dance routine, it kind of is,” says Pinto. “It takes a lot [of planning and rehearsing] to understand how much time it takes for each character to get into position and where the camera needs to look to catch important details.”

Live reference. Note the animator kneeling to match the eyeline of the dwarf character and the cardboard box standing in for the workbench.

On previous projects, this layout work was mostly done in Maya, but the one-shot camera made the process “time-consuming and clunky.” This time around, the team switched to live-action reference, with Dori in control of the camera and the animators acting out cinematics in Santa Monica Studio's office, using a range of improvised props and sets.

As well as making it possible to do a complete iteration of a cinematic in a day, the new workflow enabled the animators to tap into their own improvisational acting skills. “You get a lot of surprising human interactions that you don't always get when you're sitting at the computer, just riffing off one another,” says Pinto.

Mocap-based previs was used for this cinematic in the witch's house, in order to resolve the positions of key environment objects like the fireplace and cauldron.

Previs

For more complex shots, the team would record rough motion-capture data using Santa Monica Studio's internal Vicon system, again playing the key roles themselves. “It meant we could work with different size characters without having to get onto our knees,” says Pinto. “We could also see props and sets to scale.” Although the resulting data was too rough to be used in final-quality animations, it could be used for in-engine placeholders during development, to make final measurements for sets, and to co-ordinate mocap shoots with the actors.

For some scenes, there was no alternative to conventional 3D previs, either because the scale of the action or the danger of the stunts made live recording impossible. “We couldn't throw [the actors] down pits, [so] we went back to traditional Maya layouts, where you're animating the characters by hand,” says Pinto.

Recording on the motion-capture stage with actors Christopher Judge (Kratos) and Sunny Suljic (Atreus) and Lead Animator Erica Pinto (far right of shot).

Motion capture

Once a previs shot had been approved, it was sent to the motion-capture stage to be recorded at final quality. The animators stayed on set to talk the actors through the action and to help them practice key moments. “You really have to nail the start and end poses [to prevent] discontinuity between gameplay and cinematic,” says Pinto.

To give the actors something to interact with, the team used a mix of physical props and temporary gray mesh assets, particularly for environment objects and creatures. In the image above, you can see Erica Pinto puppeteering the World Serpent shown on the monitor on the left to help the actors on the improvised raft.

An edit inside MotionBuilder. Freya and the camera use the same blend points; Kratos blends slightly behind them, and Atreus (“Son”) is off-screen until later.

Mocap editing in MotionBuilder

Although the game was designed as a single camera shot, the raw mocap data still required editing. In addition to splicing out mistakes or splicing in script rewrites, the editing process enabled animators to refine the timing of a cinematic or assemble shots that were simply too large to be recorded in a single take. “Some of our environments are so large that the characters were traversing [more space than would fit] within our mocap stage, so we had to shoot a couple of different takes and merge it all together,” says Pinto.

Editing was done in MotionBuilder using the Story Tool, with the animators looking for points at which both the position of the camera and the poses of the visible characters could be made to match. “When you're looking for the best take to use, you're looking for the best blends. It's not always the best performance from the actors,” Pinto says. “It's a bonus if your characters are off-screen, [otherwise] you have to go into more refinements using animation layers and constraints.”
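
To make the idea concrete, here is a minimal Python sketch of how such blend points could be scored programmatically. This is not Santa Monica Studio's tooling, and the data layout is assumed: each take stores per-frame joint positions for the camera and each character, and a lower score means a cleaner blend.

```python
# A minimal sketch (not the studio's actual tooling) of the idea behind
# finding good blend points: score candidate frame pairs from two takes by
# how closely the camera and each on-screen character's pose match.
import numpy as np

def pose_distance(pose_a, pose_b):
    """Mean per-joint distance between two poses, each an (N, 3) array
    of joint world positions."""
    return np.linalg.norm(pose_a - pose_b, axis=1).mean()

def score_blend_point(take_a, take_b, frame_a, frame_b, visible):
    """Lower is better. take_a/take_b map character name -> array of shape
    (frames, joints, 3); 'camera' is treated like any other character.
    Off-screen characters are skipped, since they can be fixed up later
    with animation layers and constraints."""
    score = pose_distance(take_a['camera'][frame_a], take_b['camera'][frame_b])
    for name in visible:
        score += pose_distance(take_a[name][frame_a], take_b[name][frame_b])
    return score
```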

Motion-capture data had to be transferred from the MotionBuilder rig (left) to the more complex Maya rig used for final animation.

Transferring data from MotionBuilder to Maya

Although Santa Monica Studio had previously experimented with MotionBuilder, God of War was the first project on which the studio used it as a core tool: a choice dictated by the complexity and sheer volume of motion-capture data to be recorded. “MotionBuilder is a honed spear,” explains Grossman. “It's made to do one task very well, which is to play back a lot of mocap data in real time.”

With the rest of the studio's animation pipeline based around Maya, the tech art team wrote a dedicated tool to transfer mocap data between the native rigs used inside MotionBuilder and the more complex in-game character rigs. The image above shows the Maya rig broken down into the core transforms in which the controllers sit, enabling it to be constrained to the MotionBuilder character. The import application, nicknamed MoBuMaya, takes just seconds for a typical shot, with no need to bake simulations or constrain characters manually. You can see the tool in action, along with Axel's full explanation, in the recording of the talk linked below.
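
MoBuMaya itself isn't public, but a generic maya.cmds sketch illustrates the kind of transfer it automates: because the rig's core transforms line up with the MotionBuilder skeleton, animation curves can be copied straight across rather than baked frame by frame. The joint and transform names below are hypothetical.

```python
# A generic sketch of mocap transfer in Maya, not the actual MoBuMaya tool.
import maya.cmds as cmds

# Hypothetical mapping from MotionBuilder-style skeleton joints to the
# "core" transforms that sit above each Maya rig controller.
MOCAP_TO_RIG = {
    'mocap:Hips':  'rig:hips_core',
    'mocap:Spine': 'rig:spine_core',
    'mocap:Head':  'rig:head_core',
}

def transfer_curves():
    """Copy translate/rotate animation curves straight across. Because the
    core transforms match the mocap skeleton's space, no per-frame bake or
    manual constraining is needed, which keeps the transfer to seconds."""
    for src, dst in MOCAP_TO_RIG.items():
        for attr in ('translate', 'rotate'):
            # copyKey returns the number of curves copied; skip empty channels.
            if cmds.copyKey(src, attribute=attr):
                cmds.pasteKey(dst, attribute=attr, option='replaceCompletely')
```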

Cleaning and refining animation by hand

With the mocap data driving the in-game rigs, the animators could begin refining the results by hand, adding motions that weren't captured on-set, like Kratos wrapping bandages around his wrists, and resolving issues like a character going out of sync with their audio or discontinuities when a cinematic transitions back into gameplay.

“IK collisions and physics are usually turned off during cinematics,” says Pinto. “But sometimes we have simulation going on, so if you're teleporting a character behind the camera, you need to make sure the cloth isn't warping in front of the camera.”

Raw scan data (bottom row) and cleaned meshes (top row) corresponding to two FACS action units (left, center) and the two units combined (right).

Creating FACS-based facial rigs

For God of War, Santa Monica Studio switched from the phoneme-based facial animation set-ups used on its PS3 games to the Facial Action Coding System (FACS) popular in modern VFX work, which breaks facial expressions down into 'action units' corresponding to the contraction of individual muscles, or groups of muscles.

As well as recreating the mouth positions corresponding to individual phonemes, FACS set-ups make it possible to layer emotional states like anger or happiness over dialog, helping to retain the nuances of a mocap actor's original performance.
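
Under the hood, a FACS rig is typically a sum of deltas from a neutral mesh, which is what makes this layering possible. A toy numpy sketch (not the studio's implementation) of the evaluation:

```python
# A toy sketch of additive blendshapes: each FACS action unit stores a
# delta from the neutral mesh, so an emotional layer (e.g. a brow-lowerer
# for anger) can simply be summed on top of the mouth shapes driving dialog.
import numpy as np

def evaluate_face(neutral, au_deltas, weights):
    """neutral: (V, 3) vertex positions; au_deltas: dict of action-unit
    name -> (V, 3) delta; weights: dict of name -> activation in [0, 1]."""
    result = neutral.astype(float)
    for name, w in weights.items():
        result += w * au_deltas[name]
    return result

# Dialog shapes and an emotion layer combine in the same sum, e.g.:
# evaluate_face(neutral, deltas, {'jaw_open': 0.6, 'brow_lowerer': 0.4})
```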

To generate the blendshapes corresponding to each action unit, Santa Monica Studio worked with Sony's Visual Arts Service Group in San Diego, capturing hundreds of photogrammetry-based 3D scans of the actors' heads, remeshing the raw scan data to match the topology of the in-game model, then manually refining the results.

This registration map helped align raw scan data with characters' facial topology.

In order to prevent 'swimming' – the unwanted migration of vertices over the surface of the face during animation – the team used a standard registration map to align scan data with the project topology consistently from one blendshape to the next.

Grossman also built a FACS Manager tool, enabling users to assemble rigs using different permutations of blendshapes, helping to identify combinations of shapes for which corrective sculpting would be required. “Blendshapes are additive, so if you're moving a cheek up across two shapes, you could get over-addition when the shapes activate together,” he points out.
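
A hedged sketch of the corrective math Grossman is describing: a combination shape stores the difference between the artist's sculpt of two units firing together and their naive additive sum, and is typically driven by the product of the two weights. The function names here are illustrative, not taken from the FACS Manager tool.

```python
# Standard combination-shape math, sketched in numpy; all arrays are (V, 3).
import numpy as np

def corrective_delta(neutral, delta_a, delta_b, sculpted_combo):
    """What must be added when units A and B fire together so the result
    matches the artist's sculpt instead of the over-added naive sum."""
    return sculpted_combo - (neutral + delta_a + delta_b)

def evaluate(neutral, delta_a, delta_b, corrective, w_a, w_b):
    # The corrective fades in only as both units activate together.
    return (neutral + w_a * delta_a + w_b * delta_b
            + (w_a * w_b) * corrective)
```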

When bilateral shapes – those with left/right variants – were taken into account, each facial rig consisted of 800 to 1,000 blendshapes. Rather than repeat the process manually for every character, the team reused existing data sets, using the relative positions of a set of facial landmarks to determine how to scale the data.
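
One plausible way to implement that landmark-based scaling, sketched in numpy; the exact method Santa Monica Studio used wasn't detailed in the talk.

```python
# A rough sketch of landmark-driven retargeting of blendshape deltas from
# one character to another, using a single global scale factor derived from
# matching facial landmarks.
import numpy as np

def landmark_scale(src_landmarks, dst_landmarks):
    """Landmarks are (L, 3) arrays of matching points (eye corners, mouth
    corners, ...). The ratio of mean spans gives a global scale factor."""
    def mean_span(p):
        return np.linalg.norm(p - p.mean(axis=0), axis=1).mean()
    return mean_span(dst_landmarks) / mean_span(src_landmarks)

def retarget_deltas(deltas, scale):
    """Scale every action unit's (V, 3) delta for the new face."""
    return {name: scale * d for name, d in deltas.items()}
```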

Facial skinning. Vertices on the low-res cage mesh (left) correspond to joints on the in-game geometry (right).

To skin characters' faces without the need to paint weight maps, Santa Monica Studio created a standard low-res cage mesh, generated a joint for each of its vertices, then bound the joints to the cage vertices one to one. The skin weights were then transferred from the low-resolution cage to the higher-resolution render mesh via the shared UV set, giving interpolated weights for the vertices that lie between cage points.
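
In maya.cmds terms, the setup might look something like the condensed sketch below; the mesh names are hypothetical, but the commands (skinCluster, skinPercent, and copySkinWeights with a UV-space association) are standard Maya.

```python
# A condensed sketch of the cage-skinning setup described above.
import maya.cmds as cmds

def build_cage_skinning(cage='face_cage', render='face_render'):
    # One joint per cage vertex, placed at the vertex position.
    joints = []
    for i in range(cmds.polyEvaluate(cage, vertex=True)):
        pos = cmds.pointPosition('{}.vtx[{}]'.format(cage, i), world=True)
        cmds.select(clear=True)  # avoid parenting joints into a chain
        joints.append(cmds.joint(position=pos, name='cageJnt_{}'.format(i)))

    # Bind the cage so each vertex is fully weighted to its own joint.
    cage_skin = cmds.skinCluster(joints, cage, maximumInfluences=1)[0]
    for i, jnt in enumerate(joints):
        cmds.skinPercent(cage_skin, '{}.vtx[{}]'.format(cage, i),
                         transformValue=[(jnt, 1.0)])

    # Bind the render mesh, then copy weights across via the shared UV set,
    # interpolating for vertices that fall between cage points.
    render_skin = cmds.skinCluster(joints, render)[0]
    cmds.copySkinWeights(sourceSkin=cage_skin, destinationSkin=render_skin,
                         surfaceAssociation='closestPoint',
                         uvSpace=('map1', 'map1'),
                         influenceAssociation='oneToOne')
```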

Pose-space deformation in action: the rig applies corrective shapes for Kratos's shoulder muscles based on the values of three joint angles for the clavicle.

Correcting artifacts with pose-space deformation

To adjust key parts of the body, like the clavicles, elbows, wrists, waist and sides of the neck, Santa Monica Studio used pose-space deformation. Grossman describes the workflow as being like Maya's Set Driven Key system, in which one attribute is used to drive the value of another, but “far more powerful,” enabling the team to associate “any number of driver values with an outputted pose.”

As well as moving vertices on the mesh to correct visible artifacts when a character is in an extreme pose, pose-space deformers can be used to move joints on the character rig: a workflow Grossman describes as being well-suited to armor, since it removed the need to create corrective blendshapes for each variant armor piece. “I wanted the armor to feel separate from the body,” he says. “I didn't want it to be something skinned into the base character, and which felt gummy or stretchy.”
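
For intuition, here is a small numpy sketch of the core pose-space idea: sculpted corrective deltas are stored against sample driver values (such as the three clavicle angles from the image above) and blended back with radial basis weights at runtime. This is a generic formulation, not the studio's solver.

```python
# A generic pose-space deformer: corrective deltas are interpolated from
# driver values using normalized Gaussian radial basis functions, so any
# number of drivers can be associated with an output pose.
import numpy as np

class PoseSpaceDeformer:
    def __init__(self, driver_poses, deltas, sigma=30.0):
        # driver_poses: (P, D) driver values per sample pose (e.g. angles);
        # deltas: (P, V, 3) corrective vertex offsets sculpted per pose.
        self.poses = np.asarray(driver_poses, dtype=float)
        self.deltas = np.asarray(deltas, dtype=float)
        self.sigma = sigma

    def evaluate(self, drivers):
        """Return the blended (V, 3) corrective for the current drivers."""
        d = np.linalg.norm(self.poses - np.asarray(drivers, dtype=float), axis=1)
        w = np.exp(-(d / self.sigma) ** 2)        # Gaussian RBF weights
        if w.sum() > 1e-8:
            w /= w.sum()                          # normalize the weights
        return np.tensordot(w, self.deltas, axes=1)
```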

You can watch the complete recording of this talk, as well as the sessions on character design and environment art, over on Gnomon's Livestream channel or via the links below.

Read more

Watch the full recording of Erica Pinto and Axel Grossman's talk
Check out the Gnomon Twitch channel
Discover Uncharted 4's environment art workflow
Five expert tips to improve your games environment art
Browse Gnomon's character rigging and animation courses


About Gnomon

Founded in 1997, Gnomon has trained thousands of students and professionals for careers in the entertainment industry. Find out more