Rhythm & Hues visual effects supervisor Bill Westenhofer talks to vfxblog about the challenges of The Chronicles of Narnia: The Lion, The Witch And The Wardrobe.
Interview by Ian Failes
‘Narnia’ sounds like a character animator’s dream. Was there some competition at Rhythm to get on this show?
Narnia was certainly a dream job for me as the story was one that I had cherished since childhood. Of course that is always a double-edged sword, because you end up putting yourself under even more pressure to live up to your own high ideals of how it should be. Fortunately, we were working for a director like Andrew Adamson, whose creative vision and high standards for the visual effects work were already a great target to strive for.
There was a great deal of competition for the visual effects work on Narnia. The production auditioned the studios by commissioning an animation test of Mr. Beaver. We were all handed a piece of dialogue and a collection of stills for backdrops, and told to make something of it. It was interesting to see the results in the end, as each facility brought different strengths to the table. Ultimately the test landed us the work. Ironically, after the shot count grew, the workload was split amongst ourselves, Sony Imageworks, and Industrial Light & Magic, and our part did not include the beaver.
What was involved in your role?
My role as Visual Effects Supervisor for Rhythm & Hues included both the on-set and post-production supervision of our portion of the work: Aslan the lion, and the large battle sequences featuring some 40 hero characters and armies simulated with the artificial intelligence package Massive. In all we produced 380 visual effects shots for the film.
I actually started with the production before they hired their overall supervisor Dean Wright, so I also helped with the technical aspects of creature design being done at WETA Workshop, and developed methodologies to accomplish characters like fauns and centaurs which would often have live action upper bodies and CGI creature halves.
What kind of planning and preparation did Rhythm undertake in terms of animation for ‘Narnia’. Did you produce any pre-viz or motion study tests? What real world references did you look to?
We did an extensive amount of reference gathering at the start. We were fortunate as well in that the production’s animal training facility, Gentle Jungle, in Frazier Park, CA, allowed us to spend a day in a cage with a lion, cheetah, leopard, eagle, bear, wolves, and several other creatures. We brought a hi-def motion-picture camera with us along with a host of still cameras to get fantastic close-ups of the animals we had to build. We were able to use this reference directly as we built and prelit the characters – doing side-by-side matches of specific walk cycles and cross-dissolves of the still frames.
To deal with mythical creations we did a number of animation studies with early rigs. I had done a survey of fauns and centaurs from other films and worked with Andrew to come up with a list of things that worked and didn’t work from those efforts. Using this as a guide, we worked out ideas on how an idealized centaur would move so that we could work out how we needed the live action humans shot on location to move in support of the to-be-added lower halves. We did several video tests with the prosthetics vendor KNB to see how these ideas panned out when the digital portions were added.
I also attended the tech location scouts in New Zealand. This gave us an idea of the terrain we would be dealing with, including tall grass that we would need to integrate our characters into. At that time we also had Paul Maurice’s Lidar Services LIDAR-scan the entire battlefield, Aslan’s Camp, and a few other small locations so that we’d have digital terrain to run our Massive armies over.
How closely did you refer to the previz or how far did you have to move away from it?
The previz for the film was handled by the production, under the direction of previz supervisor Rpin Suwannath. The previz for the battle had been in development long before we were even brought on the film. Ultimately, his team would previz the vast majority of the movie that would contain visual effects.
I would say, to a great degree, the film matches pretty closely to the scenes as they were prevized. This is largely due to the fact that Andrew spent a lot of time working out his shots in the previz stage to make sure it was faithful to his vision of a scene. Drama sequences tended to drift from the previz a bit more, but this is to be expected as dialogue changes or performance moments are created that merit an adjustment to the cut. The battle sequence, however, particularly the warm-up and charge, is extremely close in the final cut of the film. The fact that we adhered so closely to it was essential during principal photography, as we often had as many as six units filming over a huge expanse of terrain – some units would be literally miles away. Because we all knew exactly what Andrew was looking for, it allowed the necessary degree of autonomy to complete the filming on schedule.
What helped you make decisions about what should be motion-captured or key framed?
Massive required a lot of motion capture. The motion-captured results were stunning, but even if they weren’t, it would be nearly impossible for animators to create the hundreds and hundreds of little actions needed by the agents to perform within the package.
Initially we even toyed with the idea of motion capturing animals like a lion, leopard, and so on. Ultimately the success of early key-frame motion studies, and the huge practical difficulties of doing this kept it from happening. All of the big cats, including Aslan, were completely key-framed. Flying creatures like the gryphons and hawks were also key-framed. We were able to motion capture a horse, however. These cycles were used extensively in our animation including the Massive centaurs, and even as a basis for hero animation of both ‘freedom horses’ (the name given to actual – non centaurified horses in Aslan’s army) and the centaur horse bodies.
All of the humanoid characters were captured for Massive and for motion vignettes that could be used on mid-ground hero characters. Our motion capture director, Michelle Ladd, worked with Giant Studios to capture all of the creatures. Giant’s system was able to show the capture retargeted onto our rigs in real time, which was crucial to gauging the success of reverse legs like the goat legs on a faun.
One of the most challenging characters from a mocap standpoint was the centaur. The key to selling a centaur as a being is that the human and horse have to move with a single mind. We did a test where we combined the capture of a horse with that of its rider and the result looked like what you might expect in that the human felt like it was along for the ride as opposed to motivating the horse. The way we accomplished the centaur was to first make selections from our horse capture for the actions we wanted to include in the Massive agent. For each of these, we captured a human that performed stationary in a chair. We were then able to procedurally add the necessary overlapping action of the torso, limbs, and head in both the motion edit stage and within Massive itself. Careful timing of this overlap made the human feel like it was leading the horse which achieved that ‘single mind’ we were after.
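The ‘single mind’ overlap described above can be sketched as a simple time offset, with the stationary human capture sampled a few frames ahead of the horse cycle it is layered onto. Everything here – the curve, the frame counts, the function names – is an invented illustration of the idea, not R&H’s actual tooling.

```python
import math

def lead_offset(curve, frames, lead=3):
    """Evaluate an animation curve `lead` frames early so the sampled
    motion anticipates, rather than trails, the underlying action."""
    return [curve(f + lead) for f in frames]

# Toy torso-pitch curve (degrees) layered over a 48-frame gallop cycle.
torso_pitch = lambda f: 8.0 * math.sin(f / 12.0)
frames = range(48)
torso_leading = lead_offset(torso_pitch, frames, lead=3)
```

Sampling the torso ahead of the horse is what makes the human read as the one motivating the motion; shifting it the other way produces the ‘along for the ride’ feel the test rider capture had.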
How were your creatures modeled and rigged?
All characters were modelled using a combination of Maya and our proprietary tools. When available, we started with cyberscans of WETA maquettes. For the exotic animals we would either adapt an existing model, or find other sources such as taxidermy forms for the body and detailed areas like the teeth and gums.
Our lead Creature Supervisor, Wil Telford, employed our newly implemented ‘construction kits’ (CKs) to do all of the skeletal rigging for our characters. We had a set of CKs for bipeds and quadrupeds which could be rapidly instanced onto a model in a matter of hours. This not only made the initial rigging process fast, but it ensured that every rig was completely consistent in terms of naming conventions, rotation orders, etc., which allowed animators to easily jump from character to character.
The skeletal rigs were very fast, but the soft body deforms still took a bit of time. We used just about every trick in the book to do this, including full muscle systems, blend shapes, compression driven displacements, harmonics and dynamics driven skin and fat layers, and traditional deforms. Each were custom fit to the base model. Some characters like Aslan had a great deal of this complexity proceduralized so that it performed well even without direct animator input. For creatures that were more ‘one-offs’ we would include a similar complexity in the rig, but an animator would have to spend time hand adjusting the results.
With a limit on our rigging resources within our schedule we had to make decisions like this to put the most emphasis on things that would be amortized over large numbers of shots. Aslan’s setup included a unique post-animation process that overcame one of the former limitations of muscle systems with regard to the timing of muscle firings. Past systems were steady-state in that the amount of contraction exhibited by a muscle was just the result of the joint angles on a given frame. In reality, muscles fire and begin to bulge in anticipation of a motion and often relax before it completes. The post process was a script that analyzed an animator’s performance and would correctly fire muscles according to this more realistic timing. Our technical animation staff still had the ability to alter this afterwards, but in the majority of shots, this process produced the final results you see on screen.
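The anticipatory firing idea might be sketched roughly like this: a steady-state system derives contraction from the joint angle on the current frame, while the post-process peeks a few frames ahead of the animator’s curve so the bulge precedes the motion and relaxes early. The curve data and thresholds are made up for the example; this is an illustration of the concept, not the production script.

```python
def steady_state_contraction(angle_deg, max_angle=120.0):
    # Clamp a 0..max_angle joint flexion to a 0..1 contraction amount.
    return min(max(angle_deg / max_angle, 0.0), 1.0)

def anticipated_contraction(angles, frame, lead=4):
    # Fire according to where the joint *will* be `lead` frames from now.
    future = angles[min(frame + lead, len(angles) - 1)]
    return steady_state_contraction(future)

# Toy elbow-flexion curve in degrees, one value per frame.
angles = [0, 10, 30, 60, 90, 110, 120, 120, 100, 60, 20, 0]
fired = [anticipated_contraction(angles, f) for f in range(len(angles))]
```

With the look-ahead, the muscle reaches full contraction before the joint does and is already relaxing as the curve settles back to zero.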
Our facial rigs also varied in complexity and proceduralism based on the number of scenes a creature would be featured in. Aslan’s face was incredibly complex, in so far as it was a blend shape system built from an underlying muscle system, on top of which was applied another muscle system! But the miraculous thing was that this complexity was hidden from the animators and did its magic behind the scenes. A relatively small number of controls produced his facial performance.
The process of building his face started from reference. We pored over the vast collection of lion photographs we had and found examples in which the lion appeared to portray the expressions we wanted to include in our library. Starting this way guaranteed anatomical authenticity in the poses. A given facial pose was free sculpted by the modelling staff. This was fed into a full muscle rig which analyzed key facial landmarks to determine which muscles would have needed to fire to achieve that expression. The final blend shape pose was then reconstructed by the muscle system. This added subtle skin motion and interrelationships that were not sculpted in by the modellers. The final facial rig was a blend-shape system, but even with that we added back a subset of the final musculature to help interpolate correctly between poses.
As a final test for the rig before shot production, we chose frames of Gregory Peck from To Kill a Mockingbird as a performance target. At the time, Liam Neeson had not been cast, so Gregory’s character Atticus Finch seemed to have the right sublime quality we were after. We made sure the rig was capable of reproducing expressions for Aslan that mirrored those of the human actor.
What approach did you take to the final animation? Were your animators given individual characters to work on?
Our animation director, Richie Baneham, and our animation supervisors, Erik DeBoer, Matt Logue and John Goodman, were responsible for assigning animators to shots and characters within the shots. While we always like to keep animators on the same characters throughout a movie, the shortness of the schedule, and the order of shot turn-overs from production often make this impossible. To keep the pipeline moving, we must move animators from character to character, which is why innovations like the CKs were so important. That said, we did manage to keep animators who found a particular knack for Aslan facial performances on the majority of those shots, and tried to keep others who were very successful at fast action and impacts working on the battle scenes.
What tools did you use to animate the characters? How ‘final’ were you able to take the shots in animation?
All of our animation was conducted on our in-house animation package, Voodoo. Our animation work proceeded in four steps:
– Blocking: this would place the characters in frame and would employ simple animation, and/or motion cycles to get a sense of the motion;
– Rough Animation: this would be actual animation crafted for a scene to demonstrate our attempt to portray what Andrew was after;
– Animation Approval: this would include all of the detail necessary for Andrew to sign off on the performance;
– Cleanup: typically minor adjustments to foot placements for lighting, or tweaking a pose for the muscle systems.
After animation was complete, our technical animation staff would take over to polish the muscle systems, clean-up deforms, add cloth and hair simulations, and so on. Our character animators would often handle the sympathetic animation of armour, etc, but this too would optionally be passed to tech animation for certain shots.
Can you talk about the challenges of fur and feathers and any other dynamics or simulation issues you had?
Rhythm & Hues has been working with furry characters for a number of years now, and as a result our hair tools are fairly robust. Still, for this production, the number of characters that had hair and the realism required would prove quite challenging. Aslan’s lead pre-lighter, Greg Steele, utilized over 15 different hair types in the mane alone, each with a different density, colour, transparency, and degree of curliness. We also had to be able to have multiple ‘combs’ for each character to allow variations across, for example, the number of different minotaurs portrayed on screen.
One of the biggest challenges for our hair simulations was the wind that was ever-present in New Zealand. While on set, I would jokingly mumble that the title should be [???]. Our digital characters, then, had to match the effect that was evident in the live action plates. Our software programmers developed two layers of wind for the tech animation staff. The first, simply called ‘Dynamic Wind’, handled the macro motion of a strong breeze on the guide hairs, causing the hair to wrap around the contours of the body while maintaining a degree of collision detection to preserve overall volume. The second layer, dubbed ‘Pelt Wind’ for the ‘pelt system’ of dealing with hair types, would move individual hairs like wispy edges in a light breeze. The two could be combined and animated with noise forces to simulate the various conditions in which our CGI creatures had to be placed.
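The two layers could be sketched roughly as follows. The function names, formulas, and parameters here are illustrative guesses at the kind of layered approach described, not R&H’s actual tools (collision and volume preservation are omitted entirely).

```python
import math
import random

def dynamic_wind(pos, t, strength=1.0, direction=(1.0, 0.0, 0.0)):
    # Macro layer: a slow sinusoidal gust pushing a guide-hair point
    # along the wind direction.
    gust = strength * (0.5 + 0.5 * math.sin(0.7 * t))
    return tuple(p + gust * d for p, d in zip(pos, direction))

def pelt_wind(tip, t, seed, amplitude=0.05):
    # Fine layer: deterministic per-hair jitter so wispy edges flutter
    # independently in a light breeze.
    rng = random.Random(seed * 7919 + int(t * 10))
    return tuple(p + amplitude * (rng.random() - 0.5) for p in tip)

# Combine both layers on a single hair tip at time t.
tip = (0.0, 1.0, 0.0)
moved = pelt_wind(dynamic_wind(tip, t=2.0), t=2.0, seed=42)
```

Animating the strength and amplitude with noise over time is what would give the variety of conditions the interview mentions, from steady gusts to a light flutter.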
On the topic of feathers, we did have a challenge in the gryphon. The gryphon is a combination eagle and lion and flies by way of huge eagle wings. Our shots had to portray these feathers blowing in the wind and had to show them as the wings fold. Amazingly, the complex folding action of the flight feathers during wing fold was accomplished through careful rigging and procedural action. Some hand cleanup was required, but they performed very well out of the box. Animator controls handled the wind effects on the major flight feathers, while the rest were handled by our fur package and the same wind controls that dealt with Aslan’s mane.
How was Massive used by Rhythm for the show?
We used Massive to handle the mid and background characters in both Aslan’s and the White Witch’s armies. We would often have as many as 30,000 creatures in frame at one time. Combined across the 130 shots that employed Massive, the program handled the animation for over 450,000 characters according to our Massive supervisor Dan Smiczek.
It took about a year and a half to process the motion capture, build Massive brains, and integrate the tool into our pipeline. We had to develop a way to efficiently handle the volume of geometry that would be passed to the renderer. To do this we built multiple levels of detail and had to build skeletal rigs for each that would work consistently with the mocap data. After this lengthy setup, however, Massive was able to populate a shot with realistically behaving agents in anywhere from a day to a few weeks. Changes to performances could be turned around very quickly, and those individuals who chose to remain ‘non-conformist’ could be culled from the crowd with ease.
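The multiple-levels-of-detail idea can be illustrated with a toy distance-based selector that assigns each agent a cheaper rig and mesh the farther it sits from camera. The thresholds and level names are invented for the example.

```python
def lod_for_distance(dist_m):
    # Pick a representation by camera distance so tens of thousands of
    # agents stay renderable.
    if dist_m < 50.0:
        return "hero"   # full-resolution mesh and rig, closest to camera
    elif dist_m < 200.0:
        return "mid"    # reduced skeleton and geometry
    else:
        return "card"   # cheapest representation for the deep background

distances = [12.0, 75.0, 450.0]
levels = [lod_for_distance(d) for d in distances]
```

The constraint noted in the interview is the interesting part: every level’s skeleton has to consume the same mocap data consistently, so an agent can swap levels between shots (or as the camera moves) without its motion changing.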
I was actually surprised by how close we could bring Massive agents. In Narnia, we were the first team to put fur on Massive characters. This, coupled with the quality of the animation output from the package, brought them into the mid-ground of a frame. A packed battle scene would feature, at most, about 20 hero animated characters – the rest would be Massive agents.
How were your visual effects shots reviewed by the vfx supes and director?
We met with Andrew and Dean several times a week. We used both video conferencing and in person visits for animation which was supplied via QuickTime files uploaded onto each side’s respective Avid. Film was screened either at Rhythm & Hues or at the production’s screening facility depending on Andrew’s location. Some lighting work was reviewed via hi-res QuickTimes on a Mac, but the vast majority of lighting decisions were made on film.
Did you look to the work of the other vendors while completing your shots?
Not all that often. We did do a lot of sharing up front: we supplied ILM with all of our models, textures, prelighting turn-tables, and some motion-capture data for them to build their libraries. After that, it was really Dean and Andrew’s responsibility to ensure consistency within shots. This was possible because the designs for everything were pretty well established when ILM was brought into the picture.
One notable exception was a series of shots featuring Aslan pinning down one of Sony’s wolves. This did require careful back and forth on both sides as we animated each separately and combined the two at the various stages (blocking, rough animation, final animation). For a given shot, we decided a priori which character would be the lead and which would act sympathetic to that. This allowed one side to establish the blocking that the other could play off of. Interestingly, these shots matured later in the production, so neither of us saw the other’s finished render until a few weeks before the end of production.
Is there one particular shot or sequence you could break down, and talk about the elements that made up the shot?
One series of shots that encompass all of the challenges we faced involved the ones of Peter on his unicorn in front of his army waiting to strike. In the shot, Peter, his unicorn, Oreius, and the front row of centaurs were all live action. The gryphon was a full CGI hero character, and the entire rest of the army was handled by Massive.
– The shot was filmed on location at the battlefield. All of the centaur performers wore green tights and were placed on 14 inch platforms so they would be at the right heights for the horse bodies to be added underneath.
– The next step is to matchmove the camera. We had LIDAR of the location, so the camera track could be fit to the actual terrain in frame. Having a 3D representation of the ground and rock formations was crucial to allow proper placement of the Massive characters.
– Next we have to track the centaur actors. These tracks were hand-fit to all of the live action upper bodies. Doing this allowed the horse bodies to be attached and pick up any rocking or swaying that the actors performed.
– Our hero animator for the shot then began animating the gryphon. In one of the shots it has to fly in, land, furl up its wings and deliver a line of dialogue. The animator also animated the wind blowing through the flight feathers to match the strong wind visible in the plate.
– Our leg animation team then works on all of the horse bodies. It is a sort of reverse-engineering process, where they must work up horse animation that supports what the actors are doing. Adding touches like the occasional foot twitch and so on keeps the lower half alive. Careful attention must be paid through all this to ensure that the blend area remains consistent. They have controls to select the amount to blend the hip rotation of the actor into the front of the horse, for example.
– Massive simulations were then set up and run. A great deal of time was spent blocking out the location of various formations within the army – for example, two lines of centaurs, followed by squads of fauns & satyrs, followed by more sporadic mixtures of all the various creatures in Aslan’s army. Even the first pass of Massive was very successful – foreground fauns have believable agitation and shift from foot to foot. More time was then spent adding background characters that are moving about as if to find their final positions, and so on.
– Tech Anim now steps in to animate the chainmail hanging down off the centaurs’ bodies. The wind also has to blow across the gryphon’s body hair and fine wing feathers. Tech Anim also adds motion to the various battle standards and flags being held by the Massive agents.
– Finally the characters are lit and rendered, and compositing adds details like ground shadows, dust from the landing gryphon and so on.
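The hip-rotation blend control mentioned in the horse-body step can be illustrated as a simple linear mix of the actor’s tracked rotation into the front of the CG horse. The parameter names and values here are hypothetical.

```python
def blend_hip(actor_hip_deg, horse_hip_deg, blend=0.4):
    # blend=0.0 keeps the pure horse pose; blend=1.0 follows the actor
    # completely. Intermediate values keep the seam consistent while
    # letting the actor's sway carry into the horse body.
    return (1.0 - blend) * horse_hip_deg + blend * actor_hip_deg

seam_rotation = blend_hip(actor_hip_deg=12.0, horse_hip_deg=4.0, blend=0.5)
```

Exposing the blend as a per-shot dial is what lets the leg animators dial in how much of the actor’s rocking propagates into the reverse-engineered horse animation underneath.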
[VFXblog]