By TREVOR HOGG
Images courtesy of Universal Pictures and DreamWorks Animation.
Storyboard of Puss in Boots flying into action.
Rough animation to figure out the environmental details and placement.
The final animation is executed.
Lighting completes the final shot.
After making his debut in Shrek 2, Puss in Boots proved to be so popular that he went from being a supporting character to the lead in the self-titled solo outing released in 2011. Eleven years later, DreamWorks Animation and actor Antonio Banderas revisit the sword-wielding feline outlaw who has used up eight of his nine lives and seeks out the Wishing Star to rectify the situation while pursued by Goldilocks and the Three Bears Crime Family and the Wolf. Puss in Boots: The Last Wish was directed by Joel Crawford, co-directed by Januel Mercado, and features a voice cast of Banderas, Salma Hayek, Florence Pugh, Olivia Colman, Ray Winstone and John Mulaney. The adventure-comedy sequel won Outstanding Effects Simulations in an Animated Feature at the 2023 Visual Effects Society Awards and was nominated for Best Animated Feature Film at the 95th Academy Awards and 2023 BAFTA Awards.
“Effects were a mix of the simulation and particular volumetric passes that are turned into meshes that can then be effectively sculpted out so we can get negative spaces. We always called it the scalping effect, which is that 2D hand-drawn scalping look, so we could get a hard edge along with the softness of the volumetric.”
—Mark Edwards, Visual Effects Supervisor
An interesting challenge was to get the poncho of Wolf to move believably within the painterly aesthetic of the animation.
Top priority was figuring out the look of Puss in Boots. “He is an established iconic character, so we wanted to maintain a lot of that, but there were some elements we wanted to push and update like his costume,” states Visual Effects Supervisor Mark Edwards. “In the first scene of Shrek 2, Puss in Boots had a cape for a minute and then threw it away because capes are hard! We wanted to have a lot of fun with the cape and a bigger feather. Then it was trying to figure out the stylization in the fur. For story points, we needed detailed fur, like hair standing on end, but then we also wanted to simplify it in a lot of ways. A secondary challenge with that was bringing the environment to them, because it’s easy to do one treatment on the characters and another on the environment and they don’t relate, and all of a sudden you have this character that doesn’t fit in the world. The biggest balancing act was trying to make sure we could push far and go abstract, but it needed to have the right detail level to match the characters.”
“One of the technologies that we built was a tool called the Stamp Map. It effectively let us take any sort of image maps, like a brushstroke or texture, and with a point cloud project it and stick it to surfaces or, in effects’ case, volumes. Instead of getting that soft volumetric cloud feel, we could get some texture in there, which was important. The magic secret sauce on top was hand-drawn elements. We were fortunate to lean into some of the tools that The Bad Guys had been writing and using. Our effects team could add extra lines or anything we wanted to get that 2D effect.”
—Mark Edwards, Visual Effects Supervisor
Geometric shapes rather than particles were incorporated to give simulations a painterly aesthetic.
For the grooms of characters, DreamWorks Animation utilized a proprietary tool called Willow. “For the newer characters like Perrito and the Bears we wanted to push those clumps, and how the hair looked in the silhouette edges made it feel more stylized,” Edwards remarks. “A lot of controls were added to be able to add color hues not just to our normal color maps for hair, but also for clumps or sub-clumps. We could get these bigger, broader color blotches that look like paint strokes. We also mapped transparency to our thicker guide curves, and that meant we could get little breakups between a single fifth hair. It felt a lot more like what an illustrator might draw with a couple of gaps. We ended up using that a lot to get broken edges.” When Puss in Boots drinks a cup of espresso, his eyes become extremely expressive. Observes Edwards, “In terms of character rigging, we try to lean into that so animation can have the controls, because that way they can drive the timing of the expressions. Because we wanted to have some asymmetry with the eyes, we could do all kinds of things with the pupils and iris and actually get shapes.”
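Willow itself is proprietary, but the two ideas Edwards describes – broad per-clump color offsets and transparency mapped along the thicker guide curves – can be sketched generically. The Python fragment below is a hypothetical illustration of that stylization pass; the data layout, function name and parameters are invented for clarity and are not DreamWorks code.

```python
import random

def stylize_guide_curve(points, clump_id, base_color,
                        clump_hue_shift=0.08, gap_chance=0.25, seed=0):
    """Toy stylization of one thick guide hair:
    - give the whole clump a broad color offset so clumps read as paint strokes
    - map transparency along the curve, with occasional gaps so the hair
      breaks up the way an illustrator's stroke would."""
    rng = random.Random(hash((seed, clump_id)))

    # One broad offset per clump, not per hair, so the color blotch stays big.
    offset = rng.uniform(-clump_hue_shift, clump_hue_shift)
    clump_color = tuple(min(1.0, max(0.0, c + offset)) for c in base_color)

    samples = []
    for i, p in enumerate(points):
        t = i / max(1, len(points) - 1)      # 0 at the root, 1 at the tip
        alpha = 1.0 - 0.6 * t                # hairs thin out toward the tip
        if rng.random() < gap_chance * t:    # more breakups near the tip
            alpha = 0.0                      # a gap in the stroke
        samples.append({"position": p, "color": clump_color, "alpha": alpha})
    return samples

# Example: a straight ten-point guide hair in a reddish clump.
curve = [(0.0, 0.0, z * 0.1) for z in range(10)]
print(stylize_guide_curve(curve, clump_id=3, base_color=(0.8, 0.45, 0.2))[:2])
```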
Three kinds of beards were created for Puss in Boots, with a scruffy one appearing when he is under the care of Mama Luna.
“Nate Wragg [Production Designer] and I would go over every step to make sure it had the right composition and feeling. It was controlled by the artists. That’s something we tried to empower our team, which is to push things on their own because they have great ideas too. The initial challenge was to make them take the step far enough.”
—Mark Edwards, Visual Effects Supervisor
Initially, a small team was put together to develop the looks of the effects. “Effects covers all of these different natural phenomena, including volumetrics,” Edwards notes. “Effects were a mix of the simulation and particular volumetric passes that are turned into meshes that can then be effectively sculpted out so we can get negative spaces. We always called it the scalping effect, which is that 2D hand-drawn scalping look, so we could get a hard edge along with the softness of the volumetric. One of the technologies that we built was a tool called the Stamp Map. It effectively let us take any sort of image maps, like a brushstroke or texture, and with a point cloud project it and stick it to surfaces or, in effects’ case, volumes. Instead of getting that soft volumetric cloud feel, we could get some texture in there, which was important. The magic secret sauce on top was hand-drawn elements. We were fortunate to lean into some of the tools that The Bad Guys had been writing and using. Our effects team could add extra lines or anything we wanted to get that 2D effect.”
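DreamWorks has not published the Stamp Map tool, but the core operation Edwards describes – projecting a brushstroke image through a camera and sticking the result to points so it travels with a surface or a volume – can be approximated in a short, self-contained sketch. Everything below (the function name, the pinhole camera model, the parameters) is an assumption made for illustration only, not the studio's implementation.

```python
import numpy as np

def stamp_points(points, brush, cam_pos, cam_dir, cam_up, fov_deg=45.0):
    """Toy 'stamp map': project a 2D brush image onto a 3D point cloud so
    each point carries a brushstroke value it keeps as it moves.
    points is (N, 3); brush is a 2D array of values in [0, 1]."""
    # Build a simple look-at camera basis.
    fwd = cam_dir / np.linalg.norm(cam_dir)
    right = np.cross(fwd, cam_up); right /= np.linalg.norm(right)
    up = np.cross(right, fwd)

    rel = points - cam_pos
    x, y, z = rel @ right, rel @ up, rel @ fwd        # z = depth along view axis

    # Perspective divide into normalized coordinates, then into brush pixels.
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    u = (f * x / np.maximum(z, 1e-6) * 0.5 + 0.5) * (brush.shape[1] - 1)
    v = (f * y / np.maximum(z, 1e-6) * 0.5 + 0.5) * (brush.shape[0] - 1)

    # Nearest-neighbor lookup; points behind or outside the brush get zero.
    ui = np.clip(u.round().astype(int), 0, brush.shape[1] - 1)
    vi = np.clip(v.round().astype(int), 0, brush.shape[0] - 1)
    values = brush[vi, ui].astype(float)
    outside = (z <= 0) | (u < 0) | (u > brush.shape[1] - 1) | \
              (v < 0) | (v > brush.shape[0] - 1)
    values[outside] = 0.0
    return values   # stored per point and carried through the simulation

# Example: stamp a random stand-in brush onto a loose cloud of points.
pts = np.random.rand(1000, 3) * 4 - 2
vals = stamp_points(pts, np.random.rand(256, 256),
                    cam_pos=np.array([0.0, 0.0, -5.0]),
                    cam_dir=np.array([0.0, 0.0, 1.0]),
                    cam_up=np.array([0.0, 1.0, 0.0]))
```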
The painterly stylization meant that animators were not limited by naturalism but could push everything further, including the color palette.
There was a constant blending of 3D with 2D, such as with the milk, which was a combination of 3D simulation and 2D splash effects. “One of the fun things with the Wolf, in particular, [occurred when] we were trying to get graphic lines in the poncho and a lot of the simulations would either be too detailed or soft,” states Edwards. “We developed a shotgun technique that let us keep the bigger lines, and those could be blended as well. We had some fun ways to get shapes that we liked that weren’t too complex that might feel like a heavy simulation, but it is still underlying the motion.” The seaside town of Del Mar was its own challenge, according to Edwards. “That was a tricky thing to simplify when you have something complex in the background and want to focus on the characters. We did a lot of lighting compositing to make sure we could abstract detail and get almost all the way into the painterly realm to simplify composition when needed. The graphic simulations might be easier to deal with, and then with the city we would have to do more treatment to get that same feeling.”
Every time the magic map is touched the surrounding world gets transformed.
Art direction was favored over a procedural methodology. “Nate Wragg [Production Designer] and I would go over every step to make sure it had the right composition and feeling,” remarks Edwards. “It was controlled by the artists. That’s something we tried to empower our team, which is to push things on their own because they have great ideas too. The initial challenge was to make them take the step far enough.” Serving as the MacGuffin is the Wishing Star. “The Wishing Star was a big concern early on because we spend almost all of act three there and, normally, you’re not in one environment for that long. It had to feel complex, evolve and help to drive the story and the action that was happening. The art department did a nice color script of the Wishing Star evolution and our Head of Look Development, Baptiste Van Opstal, was a genius in a lot of ways in helping to drive the stylization, and I gave him the Wishing Star to look cool. He ran with it, and with a mix of procedural geometry and surfacing modeling was able to get a nice balance for the Wishing Star so it has depth, color and hue shifting.”
For the newer characters such as the Three Bears, Dreamworks Animation wanted to push how the hair looked in the silhouette edges so it felt more stylized.
“The Wishing Star was a big concern early on because we spend almost all of act three there and, normally, you’re not in one environment for that long. It had to feel complex, evolve and help to drive the story and the action that was happening. The art department did a nice color script of the Wishing Star evolution and our Head of Look Development, Baptiste Van Opstal, was a genius in a lot of ways in helping to drive the stylization, and I gave him the Wishing Star to look cool. He ran with it, and with a mix of procedural geometry and surfacing modeling was able to get a nice balance for the Wishing Star so it has depth, color and hue shifting.”
—Mark Edwards, Visual Effects Supervisor
The Last Wish has fun playing with the cape of Puss in Boots and gave him an even bigger feather than the previous incarnation.
The transformation shots with the magic map were tricky. “When the characters touch the map, then the world transforms around them. It’s going from the initial Dark Forest with trees and pink flowers to all of a sudden lava, everything burned and fire all around,” Edwards explains. “There are several transformations, even Goldilocks and the Three Bears, their cabin splitting in half; that destruction was fun. The transformations had to serve the story points and be believable enough, but were magical, so it also had to be fun and fast. The audience needed to get the ideas quickly. We spent a lot of time on those initial transformation shots.” The step animation pipeline was overhauled to allow animators at any time to go into any step animation they wanted. “The giant was its own challenge for everybody because he’s a walking environment,” Edwards adds. “We used a lot of our environment tools for sprinkling foliage everywhere, but it had to move. We had to build a bunch of per-frame-basis baking techniques to get all of the data for that to work. Lots of shader work for surfacing and lighting. In Nuke you might have the glow gizmo that gives you a traditional Gaussian blur glow, but we wanted it to look more artistic and have texture. We wrote a special tool for that. Volumetrics had brushstrokes in them. For every facet we could think of, we asked, ‘Can we make it feel like it fits in this fairy-tale world and not just rely on traditional tools?’”
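The textured glow tool Edwards mentions is likewise proprietary, but the idea he contrasts with Nuke's stock glow – a Gaussian-blurred bright pass broken up by a brushstroke texture before it is added back – can be sketched in a few lines. The snippet below is a minimal illustration assuming NumPy and SciPy are available; it is not the production tool.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def textured_glow(image, brush, threshold=0.8, sigma=12.0, strength=0.6):
    """Minimal 'painterly glow': like a standard Gaussian glow, except the
    blurred bright pass is modulated by a tiled brushstroke texture so the
    halo carries visible texture instead of a perfectly smooth falloff.
    image is (H, W) or (H, W, 3) float in [0, 1]; brush is (h, w) float."""
    luma = image if image.ndim == 2 else image.mean(axis=-1)

    bright = np.clip(luma - threshold, 0.0, None)     # bright pass
    halo = gaussian_filter(bright, sigma=sigma)       # soft glow core

    # Tile the brush over the frame and use it to break up the halo.
    reps = (int(np.ceil(luma.shape[0] / brush.shape[0])),
            int(np.ceil(luma.shape[1] / brush.shape[1])))
    tiled = np.tile(brush, reps)[:luma.shape[0], :luma.shape[1]]
    halo *= 0.5 + 0.5 * tiled                         # keep some smooth base glow

    if image.ndim == 3:
        halo = halo[..., None]
    return np.clip(image + strength * halo, 0.0, 1.0)
```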
By TREVOR HOGG
Images courtesy of A24.
Six digital artists led by Zak Stoltz, who doubled as a VFX Supervisor and VFX Producer, created nearly 500 shots.
If you ever wanted to imagine what it would be like to do a multiverse narrative on an indie budget of $25 million and go on to earn four times that at the box office – and seven Oscars – just ask Daniel Kwan and Daniel Scheinert, otherwise known as the Daniels, as they have created the biggest hit for A24, one that earned 11 Oscar nominations and reignited the career of a former child actor. That movie is Everything Everywhere All at Once. Unlike a Marvel Studios production, which has a multitude of visual effects companies and several hundred digital artists to produce thousands of shots, a total of six digital artists led by Zak Stoltz, who doubled as supervisor and producer, created nearly 500 shots. At the heart of the story is a broken mother and daughter relationship, where Evelyn Quan Wang (Michelle Yeoh) must tap into the experiences of her multidimensional selves, guided by her husband Waymond (Ke Huy Quan), to prevent her offspring, Joy (Stephanie Hsu), from destroying the entire multiverse.
Stoltz was supported by Lead Visual Effects Artist and UI designer Ethan Feldbau, as well as Benjamin Brewer, Jeff Desom, Evan Halleck, Kirsten Lepore and Matthew Wauhkonen. In line with the Daniels’ wild imagination is the matter-consuming bagel created by the evil version of Joy known as Jobu Tupaki. “The bagel was difficult because the bagel did a couple of little things in the script, but there were a lot of things that we ended up testing out early on and then coming to,” explains Stoltz. “Then it was a process of shaping it from start to finish.”
“We all have a background in low-budget music videos and doing effects ourselves, and we’re directors as well. The Daniels wanted to find a way to scale up that process and essentially make a feature illustrating all of the tricks we have developed. Our film has live-action cartoon qualities [like Who Framed Roger Rabbit] to it, so we could get away with a painterly approach of just highlighting drop shadows and doing these topcoat adjustments to our effects work to further meld it into 3D work without needing a render farm.”
—Ethan Feldbau, Lead Visual Effects Artist
One of the hardest visual effects to conceptualize was the matter-consuming bagel created by the evil version of Joy known as Jobu Tupaki.
“We had our own shot tracker system that we set up, and our own principle was to keep everything open,” Stoltz explains. “Everyone had access to everything at all times. We all had special things that we brought, otherwise such a small team wouldn’t have worked because we had to figure out how these puzzle pieces fit together in terms of the way it worked in the scope that we had.” No Nuke was used on the project. “We all had to do the doughnut tutorial for Blender,” laughs Stoltz. “Building out the workflow was a lot of testing that I did with Aashish D’Mello, who was the visual effects assistant editor. Originally, we weren’t meant to be doing a remote workflow, but it ended up that way because of COVID-19. It was figuring out how to build a virtual NAS system and be able to be working off of the same stuff and communicating. I used Airtable to organize everything.”
“We did a test shoot in the Daniels’ garage. Ethan [Feldbau] and I were there, and it was an actual bagel sprayed black on a string. Then we started to experiment with other random configurations that the bagel could be. It was a freeform R&D. After that, we landed on a CG bagel that we made to look as close to the bagel we shot on the string in the garage.”
—Zak Stoltz, VFX Producer/VFX Supervisor
2D solutions were favored. “We all have a background in low-budget music videos and doing effects ourselves, and we’re directors as well,” Feldbau notes. “The Daniels wanted to find a way to scale up that process and essentially make a feature illustrating all of the tricks we have developed. Our film has live-action cartoon qualities [like Who Framed Roger Rabbit] to it, so we could get away with a painterly approach of just highlighting drop shadows and doing these topcoat adjustments to our effects work to further meld it into 3D work without needing a render farm.” The juggling of vegetables by the hibachi chef involved keyframe animation and hand-drawn elements. “The plate was of him pantomiming as if there were all of these vegetables flying around and he was chopping them up,” recalls Desom. “My job was to retroactively figure out what was flying there and fill in the void. It was the first shot that I did! I used the After Effects brush with my mouse, drew a tomato and gave it some highlights, very rough, and then I started animating with a path I drew out and gave it some motion blur. And with a simple bevel and emboss with a global light on it so the highlights would stay in place when it turned, and the same for the shadows. Somehow it worked!”
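Desom's bevel-and-emboss trick works because the layer style shades the sprite's edges against a light direction that stays fixed in screen space, so the highlight keeps facing the light however the drawn tomato spins. As a rough illustration of that principle (not After Effects' actual internals), the sketch below derives edge gradients from a sprite's alpha channel and shades them with a fixed "global light"; the function and parameter names are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def emboss_with_global_light(alpha, light_angle_deg=120.0, softness=2.0):
    """Toy bevel/emboss: derive edge gradients from the sprite's alpha channel
    and shade them against a light direction fixed in screen space. Because
    the light never rotates with the layer, highlights stay on the lit side
    of the shape as it spins. alpha is a 2D float array in [0, 1]."""
    soft = gaussian_filter(alpha, sigma=softness)   # soften the edge profile
    gx = sobel(soft, axis=1)                        # horizontal gradient
    gy = sobel(soft, axis=0)                        # vertical gradient

    theta = np.radians(light_angle_deg)
    lx, ly = np.cos(theta), -np.sin(theta)          # screen-space light direction

    shade = gx * lx + gy * ly                       # positive faces the light
    highlight = np.clip(shade, 0.0, None)           # composited over the fill
    shadow = np.clip(-shade, 0.0, None)             # and the same for the shadows
    return highlight, shadow
```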
The hibachi chef scene involved the actor pantomiming juggling and cutting vegetables that were created from keyframe animation and hand-drawn elements.
Working smart within the time constraints was the mandate. “We’re not trying to make the best, most photoreal visual effects that you’ve ever seen,” observes Stoltz. “We’re trying to help tell the story.” The Daniels also create visual effects. “They’re not going into the edit without all of these things in mind; that helps a lot because the Daniels are in the room with the editor Paul Rogers, who is a friend of ours.” An example of the directing duo’s expertise with visual effects is a high-speed multiverse jumping shot of Evelyn. “Daniel Kwan would shoot most of those plates on a 4K pocket camera while he was on vacation or visiting anywhere. They were prepping for it months before production began. That was also a pickup shoot where Michelle Yeoh was in Paris. That was a shot that bounced around a few people.”
Footage was captured by frequent Daniels’ collaborator Larkin Seiple. Feldbau states, “Larkin would bend and be comfortable with slight adjustments, or working with us in a way that a DP might otherwise be a little too proud to do.” Editorial turnover was immediate. “We would get early drafts so we could do spotting sessions, but we also got shots that we knew a version of was going to be in the film, and Aashish had access to all of the raw files and would make plates on demand for us,” Stoltz remarks. “We were doing final visual effects shots halfway through the edit. Some of those shots got cut or we didn’t end up doing, but for the most part we made sure that temp effects were usually done on the full plate when we could.” Matte paintings were utilized to create full buildings of the laundromat and IRS building locations, with sets built in the garage of the latter, such as the apartment and Alphaverse RV interior.
Proprietary work was done on the UI that allowed for the animation to have a specific sheen.
“The plate was of [the hibachi chef] pantomiming as if there were all of these vegetables flying around and he was chopping them up. My job was to retroactively figure out what was flying there and fill in the void. It was the first shot that I did! I used the After Effects brush with my mouse, drew a tomato and gave it some highlights, very rough, and then I started animating with a path I drew out and gave it some motion blur. And with a simple bevel and emboss with a global light on it so the highlights would stay in place when it turned, and the same for the shadows. Somehow it worked!”
—Jeff Desom, Visual Effects Artist
Visual aesthetics drove the UI design, which was produced with Adobe Photoshop and After Effects, with the layout done in Illustrator. “For the Alphaverse UI, multiverse maps, phone apps and computer screens, we came up with a working process where Zak would talk to the Daniels about what the basic needs were and then do simple vector line art to do basic timing passes with them, to plot out what had to be there,” Feldbau reveals. “Then it was passed to me to come up with what the aesthetic should be. Fortunately, this film is incredibly anti-style, and the Daniels weren’t interested in following trends. I dove right in to make some of the most hideous retro computer graphics! There is a logic and thought process to it. What would your mother make if she was trying to design a computer program, as opposed to the apocalyptic universe where you could only cobble together Z80 computer chips and 8-bit hardware? We did a lot of proprietary work, so all of that sheen happening on the animation was something that I built. We did not use a lot of off-the-shelf plugins on this movie.”
2D solutions were favored, such as utilizing the confetti effect from Adobe After Effects.
“Usually, a director prepares the reference material and says, ‘Copy.’ But this film was allowed to arrive on its own visual style through a logical thought process of figuring out how to communicate the story and what it would be. That’s why Everything Everywhere All at Once has such a unique look to it.”
—Ethan Feldbau, Lead Visual Effects Artist
Unique visual effects styles were established to distinguish the universes from each other. “There was a visual language to the film that we had to adhere to,” Stoltz notes. “The glass-shattering effect and the transformation of Jobu Tupaki is very jump-cutty. The Daniels said early on that they wanted to have an idea of the destructiveness around the bagel. We took a lot of those things that we knew were important to them, and it gave us a structure for the type of visual effects, so there was never a moment where we would look at an effects shot and say, ‘We have to create something out of nothing here.’”
For the verse jumping, Daniel Kwan shot plates on a 4K pocket camera while on vacation, and later on there was a pickup shoot with Michelle Yeoh, who was in Paris.
There was no look book to serve as a visual guide. “Usually, a director prepares the reference material and says, ‘Copy,’” remarks Feldbau. “But this film was allowed to arrive on its own visual style through a logical thought process of figuring out how to communicate the story and what it would be. That’s why Everything Everywhere All at Once has such a unique look to it.”
By OLIVER WEBB
Images courtesy of The Yard VFX.
The Yard VFX added morning smoke over the water to add to the atmospheric grayness of Victorian London.
Following the events of the first film, Enola Holmes 2 follows Enola on her quest to unravel the disappearance of a missing girl, assisted by her close associates and her older brother, Sherlock. Spin VFX and The Yard VFX worked together to provide the majority of the visual effects for the film, with around 550 VFX shots required overall. “Working directly with the movie VFX supervisor, we had to deliver photorealistic period images and scenery, staying true to the spirit and tone of Enola Holmes as laid out in the first installment,” says Edward Taylor, VFX Supervisor for Spin VFX on the film.
“The conversation started with Helen Judd, Mike Ellis and myself around two years ago,” remarks Laurens Ehrmann, Senior VFX Supervisor and Founder of The Yard VFX. “They asked me if The Yard would like to work on Enola Holmes 2. Of course, it was a great opportunity for us,” Ehrmann explains. “They showed me some concept art and explained the project, and we started from there. In the same period, I had some conversations with Aymeric Auté, and I proposed he join us for the show. Then we started sharing our thoughts with Mike, regarding the different environments that we had to craft, especially the outside of the matchstick factory and this big aerial shot over the Thames, where the camera is flying over the dock to reach Enola in the street.” According to Ehrmann, there weren’t a lot of images for the time period. “We tried to build a bunch of references. The main references from Mike were Jack the Ripper and murky London with lots of pollution, as well as Peaky Blinders,” he says.
The Yard VFX relied on images of Victorian London for reference when it came to constructing the concept of the dockyards. A gray sky lingers over the Big Smoke.
“There weren’t many pictures, as it was the end of the 19th century and beginning of the 20th century,” adds Auté, who served as VFX Supervisor. “We tried to go a little further in time, around 1930, as there are more pictures, but we had to take care as it was more industrial in this time period. We tried to get as many references as possible, and we created a mood board with Mike. They sent us what they wanted architecture-wise and details of windows, for example, to really fit with the time period.”
“There weren’t many pictures, as it was the end of the 19th century and beginning of the 20th century. We tried to go a little further in time, around 1930, as there are more pictures, but we had to take care as it was more industrial in this time period. We tried to get as many references as possible, and we created a mood board with Mike. They sent us what they wanted architecture-wise and details of windows, for example, to really fit with the time period.”
—Aymeric Auté, VFX Supervisor, The Yard VFX
The Spin VFX team was meticulous when it came to the final details. “The harsh smoke-filled realities of Victorian England were present but moderately gentrified in order to create an accessible vision in which the story could be presented,” Taylor says. “Period correct was always on our minds as we scoured through Lost London: 1870-1945, London’s Lost Riverscape: A Photographic Panorama, as well as Spitalfieldslife.com, nationalarchives.gov.uk and numerous other websites. The devil was in the details much of the time, as any new photography is littered with television cables, security alarms and cameras, power boxes/junctions, etc. All of these had to be digitally removed in order to sell the turn of the [19th to 20th] century time frame.”
The Yard started with their own concepts for every shot “with a big mood board with lots of pictures we had gathered, just to give them our feelings and thoughts regarding the look of the shot,” Ehrmann details. “After that we spoke about how we would craft and build the shots. For example, for the aerial drone shot, the initial on-location plate over the river with the dock, we were supposed to keep the water as it is, but to ease the connection between our CG environment and the water, we decided to recreate everything, including the water. With the CG water we were able to manage the connection with the wooden docks, to add this kind of morning smoke over the water. We were asked to connect the street with the water with a drain, but an inclined plane, where the water is flowing over the bricks. So, the idea to recreate everything was a good one, as it gave us the ability to connect everything. We added a few extras, which were shot onstage in front of greenscreen, just to add people to the background. A lot of simulation for all of the smoke coming out of the chimneys.”
The Yard VFX created 63 shots for the film.
Overall, The Yard created 63 shots for the film, notes Ehrmann, “18 outside the matchstick factory and 45 shots inside the matchstick factory. They shot a room, which we duplicated in depth. Sebastien Fauchère, Comp Supervisor, created a setup to recreate this second room and to be able to craft every shot really fast. We didn’t have to go to the CG render, everything was inside Nuke. It was pretty fast to develop on every shot.”
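The article doesn't detail Fauchère's template, but a duplicate-the-room-in-depth setup of this kind can be roughed out entirely in Nuke's Python API: scale a copy of the plate toward the vanishing point, knock it back and soften it slightly, then merge the original plate over it. The node graph below is only a guess at the general shape of such a setup, with placeholder file names and values, not The Yard's actual script.

```python
# Hypothetical Nuke script: duplicate a photographed room "in depth."
import nuke

plate = nuke.nodes.Read()
plate['file'].setValue('factory_room_plate.%04d.exr')   # placeholder path

# A copy of the room pushed deeper into frame.
far_room = nuke.nodes.Transform(inputs=[plate])
far_room['scale'].setValue(0.55)            # smaller = further away
far_room['center'].setValue([960, 610])     # assumed vanishing point, in pixels

haze = nuke.nodes.Grade(inputs=[far_room])
haze['multiply'].setValue(0.8)              # knock the duplicate back a touch
soften = nuke.nodes.Blur(inputs=[haze])
soften['size'].setValue(1.5)                # slight defocus with distance

# Original plate composited over the duplicated, deeper room.
comp = nuke.nodes.Merge2(inputs=[soften, plate])
comp['operation'].setValue('over')

out = nuke.nodes.Write(inputs=[comp])
out['file'].setValue('factory_room_extended.%04d.exr')
```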
“The harsh smoke-filled realities of Victorian England were present but moderately gentrified in order to create an accessible vision in which the story could be presented. Period correct was always on our minds as we scoured through Lost London: 1870-1945, London’s Lost Riverscape: A Photographic Panorama as well as Spitalfieldslife.com, nationalarchives.gov.uk and numerous other websites. The devil was in the details much of the time, as any new photography is littered with television cables, security alarms and cameras, power boxes/junctions, etc. All of these had to be digitally removed in order to sell the turn of the [19th to 20th] century time frame.”
—Edward Taylor, VFX Supervisor, Spin VFX
Workers outside the matchstick factory, with the fog over the Thames in the background. The Yard VFX worked hard to recreate old London, paying close attention to the architecture of the period.
The establishing shot proved to be particularly challenging for The Yard to create. “It was a really big environment,” Auté says. “Because it’s a drone shot, we can’t do this on matte painting. We have to really build it in 3D to manage all the parallax to recreate the mood of old London with this kind of move. We started from really wide, and we have to go to almost close-up inside the streets, with the water on the grounds. It was the most challenging in terms of 3D.”
“[F]or the aerial drone shot, the initial on-location plate over the river with the dock, we were supposed to keep the water as it is, but to ease the connection between our CG environment and the water, we decided to recreate everything, including the water. With the CG water we were able to manage the connection with the wooden docks, to add this kind of morning smoke over the water. We were asked to connect the street with the water with a drain, but an inclined plane, where the water is flowing over the bricks. So, the idea to recreate everything was a good one, as it gave us the ability to connect everything.”
—Laurens Ehrmann, Senior VFX Supervisor, The Yard VFX
“We started from wider and then went close to the actress, meaning that every single aspect needed to be really defined. We did have time to craft it because the project’s duration was nearly six months. Six months for 63 shots is a lot of time to develop and build all of the assets and do the layouts. It was challenging but not exhausting,” adds Ehrmann.
The Yard VFX decided to recreate everything to ease the connection between the CG environment and the water. Enola leads the crowd of workers along the South Bank.
The trickiest locations to capture for Spin VFX usually involved cameras that were traversing the area. As Taylor explains, “The theatre stage interior and the rooftops and their scope, blending CG with practical, were generally the most challenging. The cameras and created geometry have to be in complete sync to achieve the desired illusion, not to mention lighting and any fluid dynamic elements such as water and smoke. Trying to make seamless transitions for such large and varied vistas was truly a challenge.”
In terms of the most difficult effect to realize, Taylor notes that, “As happens from time to time, creating invisible effects can be perceived as simple, but can often be challenging. The carriage explosion was exceedingly tricky, being nestled in the branches of trees, and blending practical elements alongside CG, as well as the various panoramas of London from a hillside and out a window.”
Millie Bobby Brown as Enola Holmes.
“The theatre stage interior and the rooftops and their scope, blending CG with practical, were generally the most challenging. The cameras and created geometry have to be in complete sync to achieve the desired illusion, not to mention lighting and any fluid dynamic elements such as water and smoke. Trying to make seamless transitions for such large and varied vistas was truly a challenge.”
—Edward Taylor, VFX Supervisor, Spin VFX
“For the whole project, we had to almost set up in parallel, as a lot of shots were inside the factory and they were more 2D centric,” Auté says. “We also had a few shots where we had to build specific assets, such as cranes or the docks building. We started to build these assets in 3D, but at the same time create the layout to place the camera and see where we need to add detail and what we can render in 2D. When we had all this established, we started to refine the texture of the lighting. Or, in some cases we only did matte painting to extend a street. We tried to spread the work between the 3D and 2D team to have everything go together at the end. They have a kind of ownership of the shots, and sometimes it is good to have someone work on one piece. It’s creating a kind of energy, and people are more involved.”
45 of the visual effects shots provided by The Yard VFX were inside the matchstick factory. A room was shot, which The Yard then duplicated in depth.
Concludes Taylor, “Our VFX Producer, Brandy Drew [with Spin VFX], created an environment where daily conversations with artists kept collaboration moving forward and information flowing, no question too small, no concern to go unrecognized. Organized by level of difficulty and availability of resources, the plan was created and the workload balanced. We delivered on time and on budget.”
By TREVOR HOGG
Images courtesy of Paramount Pictures.
Along with replacing the environments, the actual jets had to be digitally reskinned to represent the proper aircraft.
Getting declassified in time for the Oscars are the 2,400 visual effects shots found in Top Gun: Maverick, which were the responsibility of Ryan Tudhope and the artists at Method Studios, MPC, Lola VFX and Blind. The goal was to adhere to the imperfections that go along with shooting live-action aerial photography and to produce photorealistic CG that enhanced the storytelling and believability, rather than drew attention to itself. Before any of the scanning information for the F-18s was released to the vendors, the U.S. Navy had to give its approval.
“The L-39 is a much smaller and less capable aircraft than the Su-57, which is a fifth-generation fighter that has thrust vectoring, so it can do these maneuvers that we feature in the film where it literally goes up on its nose and turns around in midair and comes back down. Those things were not possible with anything that we had at our disposal. In those particular situations our animators under the leadership of Marc Chu [Animation Supervisor at Method Studios] pushed to get all of the flaps to do what they were supposed to do and take the shot to the next level, so it was interesting from an audience standpoint.”
—Ryan Tudhope, Visual Effects Supervisor
“That extended to the choices we were making as filmmakers in terms of how the jets moved,” Tudhope explains. “The U.S. Navy was there every step of the way. The authenticity, time and effort our team put into getting all of those things perfect were driven by both [director] Joseph Kosinski’s and Tom Cruise’s desire to get all of those details right, and that was accomplished through our friends in the U.S. Navy.”
Armaments like missiles were digitally added so as not to hinder the actual performances of the jets.
An area that does not get enough credit is the graphic work produced by Blind. “They were responsible for all of the heads-up displays that you see from the F-18 or Tomcat point of view and a lot of the story-driven graphics that are done throughout the film in the aircraft carrier.”
“There are many sequences that feature four jets, two teams of two, that are flying into the final mission or various training missions. Typically, we shot those with one or two F-18s and added the other F-18s in those formations. It gave us the ability to get more material practically, and since we always had a real jet in there as reference it was a huge help in matching the look and lighting. … [W]e would have a real jet doing a maneuver and add a CG jet doing a similar maneuver following behind, or in the shot where they all come through the valley and the vapor trails are going off and rush under the camera. That was one jet, and we added multiple jets doing the same thing. What is fun about that is you have this perfect thing that you’re trying to match.”
—Ryan Tudhope, Visual Effects Supervisor
Some old-fashioned techniques assisted in choreographing aerial sequences for key story beats. “Through all of that we had these F-18s on sticks,” Tudhope states. “That process was more at the front end where we were trying to translate what the Naval pilots were trying to explain to us about how things would work, as we filmed those in the hallways to try to capture their notes as to what would essentially happen, and go from there. Once we had a sense of Joe’s vision for these sequences, taking into account all of this information we were getting from the pilots, then it became a process of how to execute it.” The imperfections of the aerial photography were retained. “The difficulty of capturing that material is a fingerprint that carries all the way through to the end of the work,” Tudhope adds.
The cockpit of the full-scale Darkstar came off and went onto a gimbal that was put onstage in order to get the desired plasma effect of being in the stratosphere.
A one-to-one replacement was not possible for the aircraft. “The L-39 is a much smaller and less capable aircraft than the Su-57, which is a fifth-generation fighter that has thrust vectoring, so it can do these maneuvers that we feature in the film where it literally goes up on its nose and turns around in midair and comes back down,” Tudhope remarks. “Those things were not possible with anything that we had at our disposal. In those particular situations, our animators under the leadership of Marc Chu [Animation Supervisor at Method Studios] pushed to get all of the flaps to do what they were supposed to do and take the shot to the next level, so it was interesting from an audience standpoint.”
The blue environment of the stratosphere was influenced by aerial photography taken by weather balloons and SR-71 flights.
The imperfections of the canopy glass had to be matched in the CG versions. “There might be a situation where we had an explosion in the distance that wasn’t there and the way that those bright highlights are used through the canopy glass, which has almost like scratches on it, but in a swirling motion; it was important for us to get all of that swirling,” Tudhope explains. “We also added in a ton of armaments across the film. But once you add training munitions or bombs to the wings, it lowers the performance characteristics of the jet, and you want the jets to be doing the full-on performance. We were able to add a lot of those armaments and deal with the continuity across all of the different sequences and take that off of the requirements of the Navy to find all of that stuff for us. But these wings are alive. The wings are constantly fluctuating from the air pressure, and the flaps are moving, and there is complicated lighting moving across.”
“There are also moments where we shot real F-18s doing those taxiing maneuvers and takeoffs. We had all of our camera mounts inside the F-18, so in one or two sequences where Maverick is literally taking off and you see the world receding behind him, we shot those back plates on the F-18’s internal cameras without someone sitting in the seat. The cockpit component of our full-scale jet came off and went onto one of [Special Effects Coordinator] Scott Fisher and his team’s gimbals, which we were able to put onstage.”
—Ryan Tudhope, Visual Effects Supervisor
Reskinning of the jets was reliant upon the original proxy version captured during principal photography. “We had a lighting reference in the case of the Navy jets. They are matte grey, which is perfect for us, so we were able to see what the lighting characteristics were,” notes Tudhope, who used a combination of tracking markers, GPS from the camera aircraft and corresponding USGS data to get the lighting correct. Another major process was constructing the digital versions of the jets. “We get up close especially to the F-18s where they have all kinds of little dents, imperfections, bolts and rivets, been painted over a couple of times – there is a lot of detail that you want to try to capture. We were able to have a real turntable of an F-18, and we put our CG turntables right next to that. We were able to make sure that they were matching.”
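One concrete piece of that lighting puzzle – placing the CG sun correctly for a given plate – can be derived from the camera aircraft's GPS position and the shot's timestamp alone. The sketch below uses the open-source pysolar library as an illustration; the library choice, coordinates and timestamp are assumptions for the example, not production detail.

```python
from datetime import datetime, timezone
from pysolar.solar import get_altitude, get_azimuth

def sun_direction_for_plate(lat_deg, lon_deg, when_utc):
    """Return the sun's altitude and azimuth in degrees for a plate's GPS
    position and timestamp, so a CG sun can be placed to match the plate."""
    return (get_altitude(lat_deg, lon_deg, when_utc),
            get_azimuth(lat_deg, lon_deg, when_utc))

# Hypothetical plate metadata: a camera jet over the Cascades, mid-morning.
alt, az = sun_direction_for_plate(
    48.35, -120.70, datetime(2019, 3, 4, 18, 30, tzinfo=timezone.utc))
print(f"sun altitude {alt:.1f} deg, azimuth {az:.1f} deg")
```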
The VFX team made simulations from lighting and atmospheric standpoints and match-moving the real jet relative to the terrain so the digital jets were moving at the same rate of speed.
One way to alter aerial missions was by adding digital aircraft into shots. “There are many sequences that feature four jets, two teams of two, that are flying into the final mission or various training missions,” Tudhope states. “Typically, we shot those with one or two F-18s and added the other F-18s in those formations. It gave us the ability to get more material practically, and since we always had a real jet in there as reference it was a huge help in matching the look and lighting. It was really fun and nerdy because we would have a real jet doing a maneuver and add a CG jet doing a similar maneuver following behind, or in the shot where they all come through the valley and the vapor trails are going off and rush under the camera. That was one jet, and we added multiple jets doing the same thing. What is fun about that is you have this perfect thing that you’re trying to match.”
Appearing in the opening is a fictional stealth aircraft inspired by the hypersonic strategic reconnaissance UAV (Unmanned Aerial Vehicle) Lockheed Martin SR-72. “For most of the scenes on the ground we filmed the practical Darkstar and removed the towing vehicle digitally, and added all of the heat, haze and exhaust as if it was moving under its own power,” Tudhope remarks. “There are also moments where we shot real F-18s doing those taxiing maneuvers and takeoffs. We had all of our camera mounts inside the F-18, so in one or two sequences where Maverick is literally taking off and you see the world receding behind him, we shot those back plates on the F-18’s internal cameras without someone sitting in the seat. The cockpit component of our full-scale jet came off and went onto one of [Special Effects Coordinator] Scott Fisher and his team’s gimbals, which we were able to put onstage.”
One of the trickiest elements to recreate was the canopy glass with all of its imperfections.
The stratosphere had to be recreated, explains Tudhope. “We were able to find amazing reference of weather balloons and SR-71 flights where cameras had been taken to those altitudes. All of this had to be created as a digital environment.” An emerging technology is a pivotal part of the Darkstar narrative. “As you’re seeing our sequence unfold,” he continues, “the altitude that we’re conveying and the things that occur, the look of all that from a physical standpoint is based on the data that Lockheed Martin [gave us on the scramjet engine].” A certain amount of disbelief was required when it came to the camera mounts. “When you get to the training missions and the exterior camera mounts that [Cinematographer] Claudio Miranda engineered with the Navy to place real cameras on F-18s, that process of placing of real cameras on real aircraft was extended early on in the film to Darkstar, even though it was digital, and also later into the final battle – that was the DNA to what we were doing.”
Digital jets were added for safety reasons and to get the desired formation and shot composition.
Leading the way were the cameras and lenses. “Rather than design shots that we would have to modify the mounts or change lenses,” Tudhope reveals, “we determined where the mounts were going to be and what lenses would be on those particular frames and create a composition that Joe was after. We took creative liberty where we were putting that camera on the digital jet versus a real jet. We were given a large toolbox of mounts and camera platforms to try to create shots with, and the process was, ‘What is the best way to film a plate to do this shot?’ Sometimes it was a one-to-one match and other times we would modify what we filmed in order to accomplish the shot.”
“When you get to the training missions and the exterior camera mounts that [Cinematographer] Claudio Miranda engineered with the Navy to place real cameras on F-18s, that process of placing of real cameras on real aircraft was extended early on in the film to Darkstar, even though it was digital, and also later into the final battle – that was the DNA to what we were doing.”
—Ryan Tudhope, Visual Effects Supervisor
Locations had to be digitally augmented, especially for the third act battle. Comments Tudhope, “We spent a lot of time scouting up in the Cascade Mountain Range in Washington State for this snowy environment and worked out of Naval Air Station Whidbey Island. In the film, there is an enemy base situated at the bottom of a bowl. We found half of what we wanted and augmented real footage to get what we needed. We had an amazing locations team and pilots from the Naval Air Station who would go out with GoPros in jets and fly some of these runs for us and show us what it might look like.”
Actress Monica Barbaro and Tom Cruise on set with a special camera rig developed by Cinematographer Claudio Miranda and the U.S. Navy.
One of the essential collaborators was editor Eddie Hamilton and his team. “The editing and going through all of this footage to try to put these shots together was a huge component. We would come across shots that we had nothing for, and it might be just a storyboard. In those situations, I would work with our Visual Effects Editor Latham Robertson and pore through the material that we had captured and find different options for shots and background plates, get Joe to sign off on that or get him to choose what he wanted and go from there. We went through an extensive postvis process, so we worked very loose and fast. What missiles they have remaining was a big thing, especially on the Tomcat because there are two Sidewinders. That stuff was all tracked. Once the cut started to settle down and we felt that we’ve got this sequence coming together, then we would turn over the shots to Method Studios or MPC, and they would execute all of the beautiful work that they did.”
By MATT HURWITZ
Images courtesy of Warner Bros.
Dwayne Johnson “floating” out of the Rock of Eternity on set at Trilith Studios in Atlanta, lifted by one of the special effects department’s robotic arms.
Watching director Jaume Collet-Serra’s Black Adam, audiences are easily convinced that the Warner Bros./HBO Max saga was shot in a Middle Eastern city, nowhere near the Atlanta, Georgia set on which it was filmed. “We’re always most proud of things no one ever thinks are visual effects,” notes Oscar-winning Visual Effects Supervisor Bill Westenhofer (Life of Pi). “The goal is to work yourself out of any recognition.”
The film was lensed by DP Lawrence Sher (Joker) with production design by Tom Meyer. Its primary visual effects vendor was Wētā FX under the production supervision of Westenhofer in tandem with Wētā VFX Supervisor Sheldon Stopsack. Additional VFX work was provided by Digital Domain, Scanline VFX, DNEG, Rodeo FX, Weta Digital, Lola Visual Effects, Tippett Studio, Clear Angle Studios, Effetti Digitali Italiani (EDI) and UPP. Special Effects Supervisors were Lindsay MacGowan and Shane Mahan for Legacy Effects.
Dwayne Johnson battling with Aldis Hodge’s Hawkman in the Sunken City exterior set at Trilith Studios. The athletic Hodge was suspended by wires, his wings – as well as the set extensions beyond the ground-level set – added later by Wētā FX.
The story takes place in fictional Shiruta, the modern-day version of Kahndaq, where 5,000 years prior, Teth-Adam (Dwayne “The Rock” Johnson), the people’s hero with great superpowers, had been imprisoned in the Rock of Eternity for apparently misusing his powers. He is released by a rebel (by utterance of the word “Shazam”) and brought back to battle the people’s modern-day oppressors, the Intergang. While he initially also battles the four members of the Justice Society of America – Doctor Fate, Hawkman, Atom Smasher and Cyclone – they end up fighting Intergang together, eliminating the threat posed by not only that group, but Sabbac, who also arises from the darkness of old Kahndaq to attempt to claim his throne. By the end of the film, he has succeeded in eliminating the threat, and the hero is renamed Black Adam.
“[S]ince [director] Jaume [Collet-Serra] and [DP] Larry [Sher] were actively participating in creating the previs, they felt ownership. So, when we got to set, they knew that the previs was theirs and that was the path they were going to follow, as opposed to getting there and going, ‘Oh, there’s that previs – forget that, we’re gonna do our own thing.’ It’s amazing – you can look at the previs and look at the shots and they’re incredibly close.”
—Bill Westenhofer, Production Visual Effects Supervisor
Development of Black Adam began in 2019, with Westenhofer being brought on not long after Collet-Serra came on to helm the project. By that time, the director had worked with storyboard artists to flesh out his ideas. Then, they met to decide the best state-of-the-art methods to create the imagery the director had in mind. “LED walls were hot at the time, as was volume capture, and we ended up dabbling in all of them,” Westenhofer remarks. The production was intended as a full-scale virtual production, developed initially by Tom Meyer in ZBrush. “We had a motion capture stage setup and had motion capture performers, and we had real-time controls,” Westenhofer adds. “We were due to start on March 17, 2020 – and then the world closed down.”
Johnson in a completed “flying” shot. He was first captured lying flat in an Eyeline Studios volume capture stage with the rig later removed and extensive background animation added.
Over the pandemic hiatus and through Fall 2020, L.A.-based Day For Nite continued work creating previs for the scenes, importing the Maya storyboard files into Unreal Engine. “Right away, we can see things that are working and ones that are not,” Westenhofer states. What read in the script as “He comes out, they fight, he flips over a tank” was soon developed into fully-realized scenes.
At the same time, DP Sher began setting cameras and lighting, working with Day For Nite and Collet-Serra via Zoom. “It was great because since Jaume and Larry were actively participating in creating the previs, they felt ownership. So, when we got to set, they knew that the previs was theirs and that was the path they were going to follow, as opposed to getting there and going, ‘Oh, there’s that previs – forget that, we’re gonna do our own thing.’ It’s amazing – you can look at the previs and look at the shots and they’re incredibly close.”
Johnson “floats” down the stairs of an apartment set, standing on the small platform of an industrial automobile robotic rig.
Deciding which locations seen in the previs would be practical sets and which would be CGI was an important step. “You can look at the previs,” Westenhofer explains, “and you can see if Jaume wants to be looking in a specific direction most of the time, in which case we would build that part as a set. But as soon as that set has more than one story to it, construction costs start to go up. So, for things like the city, Shiruta, I told Tom just to focus on, say, the first story, store level, and we’ll take care of the rest.” The same goes for which characters would be digital and which would be captured in-camera. “I always try to favor scenes where there are people – humans not flying around and who aren’t superheroes. But we have a movie where there are five superheroes and four of them fly in some form. So, they’re going to be mostly digital,” Westenhofer declares.
Building a City
When filming finally began, Meyer constructed the ground-level set of Shiruta on the back lot at Trilith Studios in Atlanta, notably its Central Market or the “Sunken City” where a great amount of action in the film takes place. “It actually doubles for many places in the city,” Westenhofer explains. “We had a roundabout area and several cross-streets, and if you look in one direction, that would be the area around Adriana’s (Sarah Shahi) apartment, and if you look the other way, it was where the palace would be. And when they’re seemingly driving through town, they’re really going in circles, but by changing the set extension it felt like you were traveling through the city.”
Wētā’s Assets Department, which includes its Art Department as well as modelers, texturers and shading specialists, were responsible for crafting the city, rooted in Meyer’s design for the practical set. “Tom did a magnificent job of fleshing out the tone and feel of Shiruta,” Stopsack states. “So, a lot of the groundwork was done already. We engaged with Tom quite early. Then we spent a fair amount of time designing the architecture and the whole feeling of the city square.”
A floating Dwayne Johnson, suspended by an industrial automobile robotic arm, does away with two bad guys.
“[Black Adam] not only flies, he floats. In the comic books, he says he doesn’t want to share the ground with lesser beings. So, he feels like he should float. But we wanted Dwayne to be in the scene, and we didn’t want to have him always be bluescreen, having to shoot him looking at tennis balls. [Director] Jaume [Collet-Serra] wanted it to be super smooth, not having to expend any effort, just floating.”
—Bill Westenhofer, Production Visual Effects Supervisor
The look established by Meyer, Stopsack notes, “as he often described to us, was like Middle East meets Hong Kong. It needed to be dry, somewhat monochromatic and reasonably high. A lot of chaotic streetlamps, wires and air conditioning units everywhere.” Much of that was introduced to Wētā early in concept art, and then it was up to them to flesh out the environment. Adds Stopsack, “We had the luxury of photography of Tom’s set on the backlot, which gave us a starting point via plate photography, which we then extended.” That task required extension to show the entire city of Shiruta, to allow creation of high wide shots, which would include the Central Market and Palace in the extended terrain. “We knew whatever we would start building around the Sunken Street ultimately would be utilized for propagating the wider city.”
The Asset Team’s approach was to essentially create modular building blocks to create different architectural levels and stories of each building. “We had to interchange them, dress them differently and stack the buildings up to a height that Tom deemed necessary,” Stopsack explains. “For each building we would ask, ‘Would you like this to be 15 stories high? 10 stories? Is this a round building? Do we see filigree here?’ So, we had a lot of engagement with him to make sure that the look and feel was what he envisioned.” They took advantage of any assets the Art Department had available, including packages of art, signage and other items, to lean into the same language Meyer’s teams had developed. “It’s an endless chase of getting the level of detail that you’re after.”
Wētā’s attention to detail translated to a construct that looks like a true city and not a visual effect. At the same time, constructing the entire city digitally – including the entirety of Meyer’s Sunken City area sets – gave Wētā valuable flexibility for creating scenes of mayhem which otherwise would have required destruction of the practical set. “The beauty of approaching it that way,” Stopsack observes, “is that we were left with an all-digital representation of the practical set pieces that were built. So, in the fight between Black Adam and Hawkman, if Black Adam is punched and smashes down the side of the building, those shots could be created fully digital. The entire environment was fleshed out, so we could inject these all-digital shots in between.”
In order to develop a true city grid seen in high wides, Wētā’s layout team utilized OpenStreetMap data, accessing real-world locations as the basis for Shiruta’s street layout. Comments Stopsack, “We looked at Middle Eastern cities around the globe to look at each city’s grid to study the general density of population and buildings and the buildings’ heights. A lot of data can be sourced, and we used that to lay a foundation for what Shiruta became.”
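As a rough illustration of how street-network data of that kind can be pulled and measured, the snippet below uses the open-source osmnx package to fetch a real city's drivable grid and report the sort of statistics a layout team might study. The city choice is a placeholder and this is not Wētā's pipeline.

```python
import osmnx as ox

# Fetch the drivable street network of a real Middle Eastern city.
graph = ox.graph_from_place("Amman, Jordan", network_type="drive")

# Report the kinds of numbers a layout team might study before laying out a
# fictional grid: intersection count, average street-segment length, etc.
stats = ox.stats.basic_stats(graph)
print("intersections:", stats["intersection_count"])
print("average street segment length (m):", round(stats["street_length_avg"], 1))

# The nodes and edges can then be exported to drive procedural placement of
# modular building blocks along each street.
nodes, edges = ox.graph_to_gdfs(graph)
print(edges[["highway", "length"]].head())
```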
Pierce Brosnan on set wearing his mocap suit with optical markers, holding his helmet. The remainder of his costume was created digitally, as seen in the final shot.
“[The look established by Tom Meyer] as he often described to us, was like Middle East meets Hong Kong. It needed to be dry, somewhat monochromatic and reasonably high. A lot of chaotic streetlamps, wires and air conditioning units everywhere. We had the luxury of photography of Tom’s set on the backlot, which gave us a starting point via plate photography, which we then extended. We knew whatever we would start building around the Sunken Street ultimately would be utilized for propagating the wider city.”
—Sheldon Stopsack, VFX Supervisor, Wētā FX
Moving Black Adam
As lead effects vendor, it fell to Wētā to develop the character animation models and movement, which were then shared with the other vendors for creation of their own scenes. “We were engaged fairly early on, when Bill asked us to start doing motion studies – even before building any of the Shiruta environments,” Stopsack explains. “These were done, in part, to inform how they would be shot on the practical set, like how they engaged in flying action or how Hawkman would land.”
Hawkman actor Aldis Hodge did quite a few stunts himself, such as his dives into the Central Market on a wire, touching down. “We had him rehearse with counterweights attached to his costume to give him a sense of what the wings would feel like, informing his performance,” Westenhofer notes. He was also given lightweight cloth cutouts of the wings to allow the set team to get an understanding of their size and how they would articulate, and to allow DP Sher to plan space for them in his frame so the future digital wings would have a home.
The motion studies also helped Wētā work with Costume Design and the Art Department to nail down costume design and motion. Says Stopsack, “Some characters that were not completely digital had costumes that needed to be practically built, such as Hawkman and Black Adam – Hawkman’s wing design, for instance, looking at their mechanics, how do the wings unfold? Things like that.” Other designs, like Dr. Fate’s costume, were completely digital, requiring more creative input from Wētā.
Director Jaume Collet-Serra, left, discusses a scene on set at Trilith Studios in Atlanta.
A key part of Black Adam’s motion involves his simple floating movement within a scene. “He not only flies, he floats,” Westenhofer explains. “In the comic books, he says he doesn’t want to share the ground with lesser beings. So, he feels like he should float. But we wanted Dwayne to be in the scene, and we didn’t want to have him always be bluescreen, having to shoot him looking at tennis balls. Jaume wanted it to be super smooth, not having to expend any effort, just floating.”
Special Effects Supervisor J.D. Schwalm was tasked with offering practical methods to accomplish the float. The main mechanism was provided by an industrial automobile robot used in car manufacturing. “That was the coolest one,” continues Westenhofer, “which we had mounted on the set and could be programmed to pick him up and float him and move him around,” with Johnson standing on the rig’s platform, his legs being replaced later and the rig removed. “It allowed him to act. When he’s floating down the stairs, passing the kid, he could do back and forth banter and actually be in the scenes with other characters. That was really important.”
For simpler shots, Schwalm provided a small robotic cart about 2½ feet by 2½ feet, with a robotic hydraulic arm containing a saddle and a small foot platform, allowing Johnson to be raised or lowered up to four feet versus the industrial robot, which could lift him as high as 15 feet. “These sorts of things could also be done using wires, but Dwayne found this really comfortable, and it allowed him to interact naturally,” Westenhofer notes.
“[The Central Market on the set of Shiruta] actually doubles for many places in the city. We had a roundabout area and several cross-streets, and if you look in one direction, that would be the area around Adriana’s (Sarah Shahi) apartment, and if you look the other way, it was where the palace would be. And when they’re seemingly driving through town, they’re really going in circles, but by changing the set extension it felt like you were traveling through the city.”
—Bill Westenhofer, Production Visual Effects Supervisor
Dwayne Johnson in the Sunken City set during a fight scene. Production Designer Tom Meyer’s Central Market was replicated by Wētā FX in set extensions.
For his flying sequences, the VFX team made use of a volume capture system provided by Eyeline Studios, a division of Scanline VFX. The system once again allowed Johnson’s performance to be captured in a method quite a bit different from motion capture. The actor would lie flat on the rig, surrounded by an array of hi-res video cameras (versus infrared, as would be used in mocap). Eyeline then processes the data in its proprietary system and provides an extrapolated mesh and a set of textures. Explains Stopsack, “When the data comes to us, we then have the geometry of his performance, of his head, and we have the texture that maps onto it.”
“[The industrial automobile robot was] mounted on the set and could be programmed to pick [Johnson] up and float him and move him around [with Johnson standing on the rig’s platform]. It allowed him to act. When he’s floating down the stairs, passing the kid, he could do back and forth banter and actually be in the scenes with other characters. That was really important.”
—Bill Westenhofer, Production Visual Effects Supervisor
Wētā took the process a step further to retain Johnson’s natural head motion. “We took Eyeline’s mesh and tried to incorporate that into our full-blown Black Adam digital double,” Stopsack remarks. “We could then take their head motion data and combine that onto our puppet so that the head motion would track perfectly with our digital asset, with our digital head motion. But volume capture gives you limited body motion. If you have pretty intricate body motion, your head motion can quickly go off what the volume capture would allow, such as if the head goes backwards and you want an extreme that it won’t permit. So, our animators would then see those constraints and work within them to see how far we could bend the head back without going beyond what volume capture could support,” preventing the bend from appearing rubbery and unlike a real person’s movement. Stopsack adds, “We used the technology for a small number of shots, but it was great when you needed the unmistakable likeness of the actor.”
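As a loose illustration of working within those constraints, the sketch below clamps a captured head rotation to a puppet’s allowable range before it is applied; the joint limits are invented for the example and are not Wētā’s rig values.

```python
# Rough sketch of constraining captured head rotation to a rig's limits.
# The ranges below are invented for illustration only.
import numpy as np

# Allowable head rotation in degrees (pitch, yaw, roll) - hypothetical values.
HEAD_LIMITS = {"pitch": (-45.0, 30.0), "yaw": (-70.0, 70.0), "roll": (-25.0, 25.0)}

def clamp_head_rotation(pitch, yaw, roll):
    """Clamp a captured head orientation to the supported range."""
    return (
        float(np.clip(pitch, *HEAD_LIMITS["pitch"])),
        float(np.clip(yaw, *HEAD_LIMITS["yaw"])),
        float(np.clip(roll, *HEAD_LIMITS["roll"])),
    )

# A captured head-back extreme gets pulled into range so the neck never
# bends further than the volume capture can support.
print(clamp_head_rotation(-60.0, 12.0, 5.0))  # -> (-45.0, 12.0, 5.0)
```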
In addition to Eyeline’s cameras, the actor was surrounded by LED walls playing back material created in Unreal, working early on with Scanline and Digital Domain, which provided interactive lighting on Johnson’s costume. The backgrounds, of course, were replaced later. LED walls came in handy for other sequences, such as filming the cockpit scenes in the Hawk Cruiser as it crashes into the Central Market. “The cockpit set was too big to place it on a gimbal,” Westenhofer reveals. “Instead, we had the content playing back on the LED screen, which was designed as being from the point of view of the cockpit so they could see themselves flying through space and crashing, and it gave them enough inspiration to sway and move as the craft was bucking in space.” For lighting, he says, “It worked really well inside the cockpits. We did replace some backgrounds, but the interactive light worked really well.”
Aldis Hodge as Hawkman. The character’s wings were added digitally, though Hodge was provided lightweight cloth cutouts to allow the actor and on-set teams an idea of the space that would be taken up by the finished digital product.
Using LED walls is not something to do frivolously, Westenhofer notes. “A lot of people come to this and hope to get what they call ‘final pixel,’ meaning you film it and the shot is done. There needs to be a fair bit more work done to get LEDs to the point where that’s really successful. You need a lot more time in prep to build these CG backgrounds, but then no one can change their mind afterwards. If you do that, it’s baked into the sauce.”
Towards the end of the film, we see Teth-Adam’s backstory in a flashback revealing the death of his son, Hurut, before he became the “The Rock”-sized superhero. For those scenes, a much slimmer double (Benjamin Patterson) was used onto which Johnson’s face was later applied. “We’d have Dwayne do a pass just to get the performance, and then the double would come in and repeat the same timing and performance. So it would be his body,” Westenhofer explains.
Later, after the scene was cut together, Johnson’s head and face were captured by Lola Visual Effects using their “Egg” setup, a system somewhat similar to volume capture. “Dwayne would sit down in a chair surrounded by several cameras,” Westenhofer describes. “Lola had studied our footage and set up lighting timed to replicate interactively the way the light on set interacted with the double throughout the shot, using colored LEDs. They could tell Dwayne to ‘Look this way’ or ‘Get ready to turn your head over here,’ and they would time the playback so he’d give the performance and move his head, give the dialogue matching what we captured on set from the other actor. Then, that head is projected onto a 3D model and locked into the shot itself, so you have Dwayne’s head and the skinny actor’s body.”
Before and after shots of a battle sequence in the Sunken City show the extent of Wētā FX’s detailed design work in set extensions and effects animation.
For Pierce Brosnan’s character, Dr. Fate, it was the opposite case. Brosnan’s own performance was filmed on the set and his body was replaced. “When he’s flying, it’s all CGI,” says Westenhofer. “But when he’s on the ground interacting with other characters, his costume has more life than a practical costume would have, so the costume is digital.”
Instead of using motion capture where Brosnan would have been filmed alone on a mocap stage, a “faux cap” system was used. Brosnan appeared on set in a simple gray tracking suit. Explains Stopsack, “It doesn’t have a full-blown active marker setup as a motion capture setup would have. The suit is simply peppered with optical markers, which are not engaged with any computer system but simply photographed with witness cameras. Our Match Move Department then uses some clever algorithms to triangulate their location and extract motion. We needed to see Pierce’s performance, his persona as an actor on set engaging with all of these characters. Then the superhero suit followed after.”
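The triangulation Stopsack describes is conceptually close to standard multi-view reconstruction. As a self-contained sketch, not Wētā’s proprietary match-move tools, recovering one marker’s 3D position from two calibrated witness cameras could look like this, with placeholder camera matrices:

```python
# Conceptual sketch of triangulating an optical marker seen by two calibrated
# witness cameras, in the spirit of the "faux cap" approach described above.
# Camera matrices and marker positions are illustrative placeholders.
import numpy as np
import cv2

# 3x4 projection matrices for two witness cameras (intrinsics * extrinsics).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# The same marker observed in each camera (normalized image coordinates),
# shaped 2xN as cv2.triangulatePoints expects.
marker_cam1 = np.array([[0.21], [0.34]])
marker_cam2 = np.array([[0.19], [0.34]])

# Triangulate to homogeneous 4x1 coordinates, then normalize to 3D.
point_h = cv2.triangulatePoints(P1, P2, marker_cam1, marker_cam2)
point_3d = (point_h[:3] / point_h[3]).ravel()
print("Reconstructed marker position:", point_3d)
```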
I’ve been asked which of these characteristics that describe me (disabled with spinal muscular atrophy, Chinese-American, woman) have posed the biggest challenge in moving forward, and I’d say being a woman in this business. Even to this day, when I show up to set as a VFX supervisor, the first question I’m asked is “who are you here visiting?” It’s an everyday thing that will change with time. The more women are seen and empowered in senior roles, the less these trivial questions will come up. I took a leap of faith in starting my own company, and I am committed to achieving greater equity and opportunity for everyone in VFX.
I was a single working parent early in my career, and the issue of balancing a career and family is highly personal. I was able to figure out a way where I did not have to sacrifice one for the other – but so many parents, particularly women, feel backed into making that tough choice. Women in Animation is focused on the enormous need to provide job flexibility and more support for working parents and caregivers. The number of women who have had to walk away from their jobs because of the lack of childcare, its staggering cost and not enough options for hybrid work schedules is startling, and that has all been exacerbated by COVID. We need to do better and lift up this advocacy movement.
Growing up amidst war in the Democratic Republic of the Congo in central Africa, I made a decision to pursue art to inject life into something I drew with my own hands and give it back to the people. I created The Third Pole initiative, a CG education program, to work with youth in my home country and give them the tools and the mentorship to be powerful visual storytellers. We know how Western and Asian cultures tell their stories, but not as much how Africa would tell theirs and contribute to our collective global storytelling. It’s so important to be able to preserve our oral histories; our legends are vanishing in our own time.
The lack of female visual effects supervisors is definitely the result of a lack of opportunity and unconscious bias – and that is fixable. Earlier in my career I was told that the goal was to promote the male supervisors, and I watched as guys who had worked under my VFX supervision were promoted up the ranks and given opportunities on large VFX shows. It never occurred to me that my gender would hold me back, and I was always surprised when it did. I am a strong believer in diversity and inclusion, not just because I am a bi-racial woman, but because I believe that greater diversity leads to freer thinking and greater creativity.
Creating my film Mila was a life-changing experience, inspired by the stories my mother told me about how she felt as a child during the bombings of Trento in World War II. I fully embrace the power of animation. Hollywood might applaud socially relevant features, but it still views animation as essentially little more than “entertainment.” It has enormous potential to affect fundamental change in how we approach each other and how we deal with societal challenges. I believe that stories told through the magic of animation can move people and influence our future generations like nothing else can.
Join us for our series of interactive webinars with visual effects professionals.
Ask your questions, learn about the industry and glean inspiration for your career path.
Register today at
VESGlobal.org/AMA
By TREVOR HOGG
Images courtesy of HBO.
Having the 10 scripts essentially written before shooting commenced assisted in deciding which sets needed to be built practically or digitally.
Unlike Game of Thrones, the prequel House of the Dragon, which revolves around the decline of Targaryen rule, had to deal with the expectations set by its predecessor while pushing the boundaries of high-end episodic visual effects to achieve filmic quality. The first season, consisting of 10 episodes, was able to take advantage of the new virtual production stage at Warner Bros. Leavesden Studios, with showrunners Ryan Condal and Miguel Sapochnik collaborating with Visual Effects Supervisor Angus Bickerton (The King’s Man) to achieve the necessary size and scope and many more dragons for the epic fantasy HBO series. (Sapochnik has since moved to an executive producer role.)
The goal was to create a dirtier, grungier and dustier environment than Game of Thrones, which occurs 130 years later.
Bickerton joined the project back in September 2020, and at that point the scripts for the 10 episodes were essentially written. “That’s an important thing to say because as we know all TV productions are still evolving as they’re going along. You need to have settled scripts in order to say, ‘These sequences are going to be done in the volume.’ If we wanted to shoot the interior of Storm’s End Castle in Episode 110, instead of 12 weeks in post to do that environment, we needed 12 weeks prior to shooting to build it in Unreal Engine for texturing, lighting, doing test plays in the volume, to make sure it was coming out right, and working with the DPs and art department to decide which bits we were going to put on the screens and what would be sets.”
Around 2,800 visual effects shots were created for the 10 episodes, ranging from tiny birds in the frame to dragons.
A key principle for dragons is that they keep growing.
Some of the street scenes were captured in Spanish and Portuguese locations, but the rest were shot either on the virtual production stage or in the backlot at Leavesden Studios. “We had an oval space with a 270-degree wraparound screen, and it’s about 65 to 70 feet wide by 85 feet deep,” Bickerton explains. “We hung doors to block off the rest of the oval so we could make an almost 360-degree volume. Above that, we have our ceiling, which was on panels so we could raise and lower them. Normally, you drop that ceiling just inside the wall height. Our screen was 25 feet high. When you’re inside and look up, the ceiling blends into the wall. It’s a balancing act. You have to find a position where it’s slightly inside the wall height, but the 40 tracking cameras arranged around the screen still need to be able to get a view of the camera to track it in real time and create the interactive environment on the screen.”
“Once you’ve built this beautiful cathedral, the last thing you want is to start blowing smoke and have hot flames melt the LED panels. But we wanted candles, flame bars, driving rain and smoke. The first thing that we did was to concede some of the screens to create ventilation space for smoke. The screen was lifted above the flame bar element to get it further away from the flame.”
—Angus Bickerton, Visual Effects Supervisor
As with Game of Thrones, House of the Dragon features extensive smoke, fire and rain, which meant that special effects had to occur within the virtual production stage. “Once you’ve built this beautiful cathedral, the last thing you want is to start blowing smoke and have hot flames melt the LED panels,” Bickerton notes. “But we wanted candles, flame bars, driving rain and smoke. The first thing that we did was to concede some of the screens to create ventilation space for smoke.” Additional ventilation was placed under the screens so the air was constantly moving. “The screen was lifted above the flame bar element to get it further away from the flame,” Bickerton adds. “When it came to storm sequences, we had to figure out the orientation of our motion base so we could blow the smoke and rain atmosphere past the actors and it would go across the screen. We could have separate fans blowing it away from the screen as well as off-camera.”
Sunrise and sunsets can be shot over the course of days on a virtual production stage with the same lighting conditions being maintained.
An iconic prop that makes an appearance in House of the Dragon is the Iron Throne.
“[For the flying dragon shot in Episode 110], The Third Floor did the previs that was animated with much simpler dragon assets to make sure that we were doing the right dragon motion. The Third Floor’s simulation files were given to Pixomondo, which tweaked and revised the animation that was then given back to The Third Floor, which rebuilt it for the motion base, volume and camera, and we worked out what camera moves that we had to do with the actors to match the previs.”
—Angus Bickerton, Visual Effects Supervisor
Special Effects Supervisor Michael Dawson and his team built a new motion base that could bank, pitch and rotate. “The motion base exceeded our expectations,” Bickerton remarks. “We got fast movement, good angle changes, could throw the actors around quite considerably and get shakes in their bodies. The Wirecam was more of a challenge to move around fast because you have to ramp up to speed, fly past an actor and ramp down again. [For the flying dragon shot in Episode 110], The Third Floor did the previs that was animated with much simpler dragon assets to make sure that we were doing the right dragon motion. The Third Floor’s simulation files were given to Pixomondo, which tweaked and revised the animation that was then given back to The Third Floor, which rebuilt it for the motion base, volume and camera, and we worked out what camera moves that we had to do with the actors to match the previs.”
Concept art by Kirill Barybin showing the scale of Prince Lucerys Velaryon and Arrax, which is a 14-year-old dragon.
The 2D concept art of Arrax was translated into a 3D blockout by Kirill Barybin.
“[The dragons] ultimately can’t bear their own weight. Vhagar, which is chasing Arrax, is meant to be 103 years old whereas Arrax is 14 years old. Whenever a new member of the Targaryen family is born a dragon is put in the crib with the child so that they develop a symbiosis. But there is only so much control that you have over these dragons. In the shot where you see the big silhouette of Vhagar above Arrax was a signature image that we wanted going into the sequence to show the size of him. In terms of how the motion base moved, Arrax is flappier and smaller, so it has more aggressive motions whereas Vhagar is a huge beast and the motions are a lot more general.”
—Angus Bickerton, Visual Effects Supervisor
A narrative principle is that dragons keep on growing. “They ultimately can’t bear their own weight,” Bickerton notes. “Vhagar, which is chasing Arrax, is meant to be 103 years old whereas Arrax is 14 years old. Whenever a new member of the Targaryen family is born a dragon is put in the crib with the child so that they develop a symbiosis. But there is only so much control that you have over these dragons. In the shot where you see the big silhouette of Vhagar above Arrax was a signature image that we wanted going into the sequence to show the size of him. In terms of how the motion base moved, Arrax is flappier and smaller, so it has more aggressive motions whereas Vhagar is a huge beast and the motions are a lot more general.”
A dramatic action sequence is when Prince Lucerys Velaryon and Arrax are chased by Aemond Targaryen and Vhagar.
There were no static 2D matte paintings as the camera always had to be fluid. “The trick was to always have atmosphere-like particles in the air,” Bickerton reveals. “I remember working on our first environment and asked, ‘Should we add some birds?’ And it worked. There were birds all over the place. They were small in frame but were a key element in bringing life to the shot. Miguel wanted it to be dirtier, dustier, grungier than Game of Thrones because we are taking place 130 years before, so there was a lot of smoke, and King’s Landing has a nastier look.” Bickerton was given an eight-terabyte drive of assets from Game of Thrones by HBO that included the Red Keep and King’s Landing. Explains Bickerton, “They had been built by different facilities for each season, so we had about five or six different variations of the Red Keep and King’s Landing. Our Visual Effects Art Director, Thomas Wingrove, brought in the different models, and we came up with our own fully-realized 3D environment because we wanted to be able to come back to it and know where everything was. In Game of Thrones, they tended to add in bits when needed for each episode.”
“[Showrunner/director] Miguel [Sapochnik] wanted it to be dirtier, dustier, grungier than Game of Thrones because we are taking place 130 years before, so there was a lot of smoke, and King’s Landing has a nastier look. They had been built by different facilities for each season, so we had about five or six different variations of the Red Keep and King’s Landing. Our Visual Effects Art Director, Thomas Wingrove, brought in the different models, and we came up with our own fully-realized 3D environment because we wanted to be able to come back to it and know where everything was. In Game of Thrones, they tended to add in bits when needed for each episode.”
—Angus Bickerton, Visual Effects Supervisor
A signature shot is of the shadow of Vhagar flying above Arrax.
2D and 3D techniques were combined to create the disfigured face of King Viserys I Targaryen.
Around 2,800 visual effects shots were produced for the 10 episodes. “If you’re going to have a character who is 1/10th the screen size of a dragon, then it’s a digital double,” Bickerton states. “We used digital doubles for some of the fast action; otherwise it’s an element of someone on a motion base, if it’s dragon-riding. We tried to shoot an element for everything. There was quite a lot of face replacement for action and storm sequences.” All of the actors were scanned to various degrees, depending on how much of their performance is needed. Comments Bickerton, “In the tournament at the beginning of Episode 101, there are numerous face replacements. We had to do CG for half the face of King Viserys I Targaryen in Episode 108, towards the end of his final days. We did a lot of 2D warping and distortion to make his neck thinner and get his face to be gaunt. The bit I love is the sheer diversity of the work. There are so many different environments and dragon characters. That’s what I like.”
By TREVOR HOGG
Images courtesy of Prime Video and ILM.
The Martian terrain traveled by Oppy was given a reddish tint while the setting inhabited by Spirit had a bluish tint.
Considering the *batteries not included vibe, it is not surprising to learn that Amblin Entertainment was involved in producing the Prime Video documentary Good Night Oppy, which chronicles NASA’s successful development and launch of Mars rovers Opportunity and Spirit in 2003, with the former defying expectations by going beyond the 90-day mission and surviving for 15 years.
To re-enact what happened to the two rovers on the Red Planet, filmmaker Ryan White turned to ILM Visual Effects Supervisors Abishek Nair and Ivan Busquets to, in essence, produce an animated film to go along with present-day interviews and archival footage. “Ryan White wanted to make a real-life version of WALL·E [a small waste-collecting robot created by Pixar] in some ways, and mentioned during the project that E.T. the Extra-Terrestrial was his favorite film growing up and wanted to bring that emotion into it,” Nair explains. “For us, it was trying to maintain that fine balance of not going too Pixar, doing justice to the engineers who worked on the rover missions and forming a connection so that the viewers feel the same thing that the engineers went through when they were working with Opportunity and Spirit.”
Amongst the 34 minutes of full CG animation was the landing of the rovers on Mars.
“For us, it was trying to maintain that fine balance of not going too Pixar, doing justice to the engineers who worked on the rover missions and forming a connection so that the viewers feel the same thing that the [NASA] engineers went through when they were working with Opportunity and Spirit.”
—Abishek Nair, Visual Effects Supervisor, ILM
Creating a sense of a face was important in having the rovers be able to emote. “Early on in the show, Ryan was interested in exploring a range of emotions for these rovers and was doing that in parallel in sound and visual effects,” Busquets states. “He was trying to come up with a library of plausible looks so that we were not making a caricature. Even when animating the rovers, we observed the limitations of the joints and what the range of movement is. We did cycles of, what does sad or older-looking-moving Oppy look like? It was all based on, ‘let’s use what’s in there.’ The most obvious example was using the pan cameras as eyeballs because from a physical position, they do resemble the eyeballs on a person.”
ILM created a view of Mars from outer space.
Data was provided by the NASA Jet Propulsion Laboratory. “The rovers themselves are the most accurate versions of Opportunity and Spirit,” Nair observes. “We would send turntables of the rovers to the JPL and they would point out certain things that felt a little off, like how the robotic arm would bend and including the decals/details on the rover itself. We built up the rovers with some of the stickers that were on the prototypes and those were taken off when the rovers went to Mars. We had to keep all of those things in mind. It was a good symbiotic process. The engineers at JPL were excited that we were breathing life into the rovers.” The models for Opportunity and Spirit were the same but treated differently. “We respected the story, like when they needed to compensate for how Spirit was to be driven after one of the wheels broke,” Busquets states. “All of those animation cues were respected, so we did animate Spirit differently than Oppy. Then there are differences as to the environments that they were in, and those were kept realistic and true.”
“Early on in the show, [director] Ryan [White] was interested in exploring a range of emotions for these rovers and was doing that in parallel in sound and visual effects. He was trying to come up with a library of plausible looks so that we were not making a caricature. Even when animating the rovers, we observed the limitations of the joints and what is the range of movement. We did cycles of, ‘what does sad or older-looking-moving Oppy look like?’ It was all based on, ‘let’s use what’s in there.’ The most obvious example was using the pan cameras as eyeballs because from a physical position, they do resemble the eyeballs on a person.”
—Ivan Busquets, Visual Effects Supervisor, ILM
The two environments did not share the exact same color palette. “The Spirit side of the planet had more of a bluish hue to it whereas the Oppy side was redder,” reveals Nair. “Also, whenever you see the split-screen, Oppy is on screen left and Spirit is on screen right, and that was maintained throughout the documentary. There was always this visual reference as to who was where, who is doing what and even the direction that they move. Oppy would always go left to right while Spirit was right to left. We built in these little cues to psychologically know that right now you’re looking at Spirit not Oppy. As the story progressed, Spirit had a broken wheel so that helped.”
Adding to the drama was having the rovers get stuck in sandpits and trying to get out.
“The Spirit side of the planet had more of a bluish hue to it whereas the Oppy side was redder. Also, whenever you see the split-screen, Oppy is on screen left and Spirit is on screen right, and that was maintained throughout the documentary. There was always this visual reference as to who was where, who is doing what and even the direction that they move. Oppy would always go left to right while Spirit was right to left. We built in these little cues to psychologically know that right now you’re looking at Spirit not Oppy. As the story progressed, Spirit had a broken wheel so that helped.”
—Abishek Nair, Visual Effects Supervisor, ILM
Four major dust variants were created for Spirit and Oppy. “As the shots progressed, we started running effects simulations and dust maps on it so we could turn them up or down depending on the shots themselves,” Nair notes. There was not a lot of room for creative license. “Normally we would go with what makes for a more cinematic shot, but with this being a documentary we kept it grounded in reality as much as possible,” Busquets states. “A place where we did make a concession was when it came to the speed. The maximum speed of the rovers was something like two inches per second. It became obvious when we started animating that we were not going anywhere. How are we going to tell a story with that?”
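The “turn them up or down” idea is easy to picture as a per-shot blend between a clean texture and a dusty one, masked by a dust map. The snippet below is a generic NumPy illustration, not ILM’s shading setup, and the values are placeholders.

```python
# Generic illustration: blend a clean albedo toward a dusty variant using a
# dust map and a per-shot dial. Values and shapes are placeholders.
import numpy as np

def apply_dust(clean_rgb, dust_rgb, dust_map, amount):
    """dust_map is 0-1 per pixel; amount is the per-shot dial."""
    weight = np.clip(dust_map * amount, 0.0, 1.0)[..., None]  # broadcast to RGB
    return clean_rgb * (1.0 - weight) + dust_rgb * weight

# Tiny 2x2 example: solar-panel blue vs. Martian dust orange.
clean = np.tile(np.array([0.15, 0.25, 0.60]), (2, 2, 1))
dust = np.tile(np.array([0.70, 0.45, 0.25]), (2, 2, 1))
dust_map = np.array([[0.1, 0.9], [0.5, 1.0]])  # where dust accumulates

print(apply_dust(clean, dust, dust_map, amount=0.75))
```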
A critical part of making the imagery believable was incorporating photographic aberrations such as lens flares.
Since visual effects was a new area for Ryan White, ILM produced storyboards and previs that also aided editorial. “The documentary style of filmmaking is different from feature film,” Nair observes. “We had to make sure that we get some fairly detailed storyboards going for key shots at least and rough storyboards for the rest that we would be doing, which would then inform us in terms of the beats, length of the shots and how it’s sitting in the edit. When it came to the particular shot of Oppy getting her wheel stuck in the sand, we had some fairly detailed storyboards, but then we went through quite a bit of postvis animation to get the idea across of the wheel spinning. We also had to work with some clever camera angles that would tell the story. We were working within a timeframe and budget and trying to make sure that visually it was telling the story that was supposed to be told there. There were pockets of sand simulation that we did early on to show the wheel spinning and kicking out of the sand. We showed that to Ryan who was excited about it, and then we brought in all of those little animation cues of Oppy struggling trying to go in reverse and get out of that sandpit.”
The pan cameras on the rovers were treated as if they were eyes, which helped to give them a personality.
Sandstorms had to be simulated. “We had photographic reference of sandstorms on Mars, so we knew exactly what it would look like,” Nair explains. “We’ve done sandstorms before on various movies, but we had to make sure that these would actually happen on Mars: the little electrical storms that happen within them that have bolts of lightning. That’s where we could bring a little bit of drama into the whole thing by having the bolts of lightning and closeups of Oppy staring up at the sandstorm and lightning flashes on her face. There were tons of auxiliary particles flying around the area around her and tons of sand bleeding off her face and solar panels. We did run that through layers of simulations and then threw the whole kitchen sink at it and started peeling back to see what we could use and omit to bring the storytelling back into the whole thing.”
“The number of unique locations, from their landing sites to the journeys, to the different craters that they visit, the amount of nuance and rocks and different type of terrain, everybody involved felt there was something special about building something not based on concept art but scientific data. However, you want to make it as photographic and exciting as possible. There was a lot of pride I saw in the team in doing that.”
—Ivan Busquets, Visual Effects Supervisor, ILM
The edit was a work in progress. “What was challenging and unique about this project was being involved from an early stage and they hadn’t finished all of their interviews,” Busquets remarks. “Ryan had some ideas for the chapters that he wanted to cover. We helped to inform the edit as much as the edit helped to inform our work. It made things a bit slower to progress, and we had to rely on rough animation and previs to feed editorial.”
Four major dust variants were created for Spirit and Oppy.
No practical plates were shot for the 34 minutes of full CG animation. “We asked to be sent to Mars to shoot some plates and were told that it would be too expensive!” laughs Busquets. “We did get a ton of data from NASA including satellite images from orbiters that have been sent to Mars. It was the equivalent of Google Earth but at a lower resolution. All of the environments that you see in the documentary are based on the real locations the rovers visited.” ILM had to fill in the gaps and could not use the actual imagery because it was not of high enough resolution for 4K. A cool moment to create was Oppy taking a selfie. “It was a fun sequence to do, and we followed the same arc of the cameras so Oppy could actually take the photographs,” Nair comments. “We did have reference of the separate images that were stitched together. We got our snapshots within that particular shot very close to what was actually taken. In the documentary we made it black and white and grainier compared to the other shots.”
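As a generic illustration of how overlapping frames can be stitched into a single panorama, the following uses OpenCV’s stock stitcher and placeholder file names; it is only an analogy for the selfie assembly, not ILM’s compositing pipeline.

```python
# Generic illustration: stitch overlapping frames into one panorama, then
# convert to black and white to echo the treatment in the documentary.
# Uses OpenCV's stock stitcher; file names are placeholders.
import cv2

paths = ["arm_pos_01.jpg", "arm_pos_02.jpg", "arm_pos_03.jpg"]
frames = [cv2.imread(p) for p in paths]

stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(frames)

if status == 0:  # 0 == Stitcher_OK
    gray = cv2.cvtColor(panorama, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("selfie_bw.jpg", gray)
else:
    print("Stitching failed with status", status)
```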
Electrical storms had to be incorporated inside of the sandstorms that occur on Mars.
One of the most complex shots was depicting the solar flares hitting the spacecraft as it travels to Mars. “As an idea, it was storyboarded in a simple manner, and when we started looking at it we figured out that it wasn’t going to show the scale and the distance that these flares would travel or the danger that the rovers were in,” Nair states. “Working the timing of the camera move to the sun with the burst of flare energy… The camera takes over from there, follows the flare energy hitting the spacecraft and swivels around. That whole thing took a bit to plan out. It was a leap of faith as well because Ryan didn’t want it to look too Transformers in a way. We had to keep things still believable but at the same time play around a little bit and have some fun with the whole thing. It’s one of our longest shots in the show as well. As for the other challenges, it was a documentary format where the edit was fluid, and we had to make sure it would conform with our timeline and the scope of work that was left to do.”
The environmental work was extensive. “The number of unique locations, from their landing sites to the journeys, to the different craters that they visit, the amount of nuance and rocks and different type of terrain, everybody involved felt there was something special about building something not based on concept art but scientific data,” Busquets remarks. “However, you want to make it as photographic and exciting as possible. There was a lot of pride I saw in the team in doing that.”
By TREVOR HOGG
Images courtesy of Marvel Studios and Digital Domain.
Production Special Effects Supervisor Daniel Sudick and his special effects teams built a 30- to 40-foot-section of the boat deck that was 15 to 20 feet up in the air.
Third acts are never easy, as this is what the audience has been waiting for, and when it comes to the Marvel Cinematic Universe, a plethora of epic battles has made it even more difficult to come up with something that has not been seen before. In Black Panther: Wakanda Forever, the Wakandans take a ship out into the ocean and successfully lure the underwater-dwelling Talokanil into a massive confrontation while the newly crowned Black Panther does single combat with Namor in a desert environment. States Hanzhi Tang, VFX Supervisor at Digital Domain, “I knew this movie was important, and having met [director] Ryan Coogler on set, you want him to succeed as he’s the nicest person. I’ve known [Marvel Studios VFX Supervisor] Geoffrey Baumann for a long time, so we already had a good working relationship; he trusted us with trying to help him navigate whatever surprises would come up.”
The rappelling of the Dora Milaje was influenced by a dance troupe.
A back-and-forth between Digital Domain and Wētā FX ensured that their shots were seamlessly integrated with each other.
“We started off in the Atlantic Ocean and shared some parts with Wētā FX, which had already figured out the underwater and deep ocean looks. Digital Domain kept to above the surface and a couple of shots where characters had to go in and out of the water. There was a back and forth between us to synchronize with each other on the camera and the location of the water plane. Then we would do everything from the water surface and upwards. Then one of us had to do the final composite and blend the two together. Luckily, when the camera hit that water plane it acts like a wipe.”
—Hanzhi Tang, VFX Supervisor, Digital Domain
“We started off in the Atlantic Ocean and shared some parts with Wētā FX, which had already figured out the underwater and deep ocean looks,” Tang explains. “Digital Domain kept to above the surface and a couple of shots where characters had to go in and out of the water.” For some of the underwater shots, Wētā FX provided the virtual camera as a first pass. “There was a back and forth between us to synchronize with each other on the camera and the location of the water plane,” Tang details. “Then we would do everything from the water surface and upwards. Then one of us had to do the final composite and blend the two together. Luckily, when the camera hit that water plane it acts like a wipe.” A giant set piece was constructed for the boat. “A 30- to 40-foot section of the boat deck was built that was 15 to 20 feet up in the air,” reveals Tang. “It was built as a rig that could tilt up to 45 degrees, because there is a point in the movie where the boat gets attacked and almost rolls over. People could slide down the deck. [Production Special Effects Supervisor] Dan Sudick and his special effects team had built one big in-ground tank to film people in the water, and separately this deck. As far as the water interaction on the deck, it was all CG.”
The Talokanil were supposed to have bare feet, which were inserted digitally for safety reasons.
A major task was adding digitally the rebreather masks worn by the Talokanil.
Plates were shot for the foreground elements with various bluescreens placed in the background. “All the way back was a set extension that was blended into the foreground,” Tang remarks. “Everyone in the background is a digital double.” The rappelling of the Dora Milaje was influenced by a dance troupe. Describes Tang, “There is a vertical wall where everyone does dance moves on cables that was the inspiration for the Dora Milaje being suspended. The whole thing was shot horizontally with them dangling off of cables. It was incredible.” The skies were art directed. “There was a lot of picking and choosing of the type of day and clouds,” Tang comments. “It ended up being a combination of CG and matte-painted clouds. The style of the on-set lighting by Autumn Durald Arkapaw, the cinematographer, was soft, and she would wrap the lighting around characters and give them a lovely sheen on their skin.”
“A 30- to 40-foot section of the boat deck was built that was 15 to 20 feet up in the air. It was built as a rig that could tilt up to 45 degrees, because there is a point in the movie where the boat gets attacked and almost rolls over. People could slide down the deck. [Production Special Effects Supervisor] Dan Sudick and his team built one big in-ground tank to film people in the water, and separately this deck. As far as the water interaction on the deck, it was all CG.”
—Hanzhi Tang, VFX Supervisor, Digital Domain
Shuri transports a captured Namor to a desert environment where they engage in single combat.
Blue-skinned characters, such as the Talokanil, against bluescreen are always a fun challenge, Tang reports. “Greenscreen would have been worse with the amount of spill, given that it was meant to be a blue-sky reflection,” he states. “We ended up doing roto on everything. The set is 20 feet in the air, people are being sprayed down with water, and there are all of these cables that need to be painted out. When the Talokanil board, you have 20 stunt people climbing the boat, and there’s no perimeter fence around this thing. For safety reasons, everyone had to wear decent footwear, and these characters were meant to be barefoot. They did not do the rubber feet that Steve Rogers wears in Captain America: The First Avenger, so we ended up tracking and blending CG for feet replacements. We also had to track and replace rebreather masks because the Talokanil wear them when they’re out of the water. It fits over the mouth and the gills on the neck. Those were impractical to wear while running around and trying to perform the stunts.”
“All the way back [for the rappelling of the Dora Milaje sequence] was a set extension that was blended into the foreground. Everyone in the background is a digital double. There is a vertical wall where everyone does dance moves on cables that was the inspiration for the Dora Milaje being suspended. The whole thing was shot horizontally with them dangling off of cables. It was incredible.”
—Hanzhi Tang, VFX Supervisor, Digital Domain
Bluescreen made more sense than greenscreen as it provided the correct blue spill that would have been caused by the sky.
Namor (Tenoch Huerta) is captured and Shuri flies him off into the desert because he gains his power from the ocean. “They have a one-on-one fight, and there was a lot of set extensions and cleanup of the background,” Tang remarks. “We put the sky and the sun in the right place.” A flamethrower was utilized on set for the desert explosion. “But it wasn’t anywhere near the size of the actual explosion in the movie. It was used for exposure, color and scale reference of how that size flame appears through the camera,” Tang says. The flying shots of Namor were sometimes the most difficult to achieve, he adds. “In some of the shots we would have captured Tenoch Huerta on bluescreen, and he’ll do some closeup acting,” Tang observes. “We tried some wirework that looked too much like wirework and ended up doing a half-body replacement from the chest down. They captured a lot of shots with him in a tuning fork and being pulled around the set, so it was a lot of paint-out for the tuning fork and all of the gear on it. It’s suitable for waist-up shots. Tenoch just had the pair of shorts, which means there’s not much to hide, and when doing extensive paint-out on skin, you can end up with large patches that can be easily seen.”
By IAN FAILES
Several sequences featuring the Giganotosaurus in Jurassic World Dominion made use of an animatronic head section on set. (Image courtesy of Universal Pictures and ILM)
For final shots, ILM would often retain the entire animatronic head of the dinosaur and add in the rest of the dinosaur body. (Image courtesy of Universal Pictures and ILM)
How do you put yourself into the shoes, or feet, of a Giganotosaurus? What about an advanced chimpanzee or a bipedal hippo god? And how do you tackle a curious baby tree-like humanoid? These are all computer-generated characters with very different personalities featured in films and shows released in 2022, and ones that needed to be brought to life in part by teams of animators. Here, animation heads leading the charge at ILM, Wētā FX, Framestore and Luma Pictures share how their particular creature was crafted and what they had to do to find the essence of that character.
When your antagonist is a Giganotosaurus
When Jurassic World Dominion Animation Supervisor Jance Rubinchik was discussing with director Colin Trevorrow how the film’s dinosaurs would be brought to the screen, he reflected on how the animals in the first Jurassic Park “weren’t villains, they were just animals. For example, the T-Rex is just curious about the jeep, and he’s flipping it over, stepping on it and biting pieces off of it. He’s not trying to kill the kids. I said to Colin, ‘Can we go back to our main dinosaur – the Giganotosaurus – just being an animal?’ Let’s explore the Giga being an animal and not just being a monster for monster’s sake. It was more naturalistic.”
With Trevorrow’s approval for this approach, Rubinchik began work on the film embedded with the previs and postvis teams at Proof Inc., while character designs also continued. This work fed both to the animatronic builds by John Nolan Studio and Industrial Light & Magic’s CG Giganotosaurus. “Early on, we did lots of walk cycles, run cycles and behavior tests. I personally did tests where the Giga wandered out from in between some trees and was shaking its head and snorting and looking around.”
Another aspect of the Giganotosaurus was that ILM would often be adding to a practical/animatronic head section with the remainder of the dinosaur in CG. For Rubinchik, it meant that the overall Giga performance was also heavily influenced by what could be achieved on set. Comments Rubinchik, “What I didn’t want to have were these practical dinosaurs that tend to be a little slower moving and restricted, simply from the fact they are massive hydraulic machines, that then intercut with fast and agile CG dinosaurs. It really screams, ‘This is what is CG and this is what’s practical.’
A full-motion CG Giganotosaurus crafted by ILM gives chase. (Image courtesy of Universal Pictures and ILM)
Animation Supervisor Jance Rubinchik had a hand in ensuring that the movement of animatronic dinosaurs made by John Nolan Studio, as shown here, was matched by their CG counterparts. (Image courtesy of Universal Pictures and ILM)
“Indeed,” Rubinchik adds, “sometimes as animators, you have all these controls and you want to use every single control that you have. You want to get as much overlap and jiggle and bounce and follow through as you can because we’re animators and that’s the fun of animating. But having something that introduced restraint for us, which was the practical on-set dinosaurs, meant we were more careful and subtler in our CG animation. There’s also a lot of fun and unexpected things that happen with the actual animatronics. It might get some shakes or twitches, and that stuff was great. We really added that stuff into Giga wherever we could.”
For Pogo shots in season 3 of The Umbrella Academy, on-set plates featured actor Ken Hall. (Image courtesy of Netflix and Wētā FX)
Voice performance for Pogo was provided by Adam Godley (right), while Wētā FX animators also contributed additional performance capture. (Image courtesy of Netflix and Wētā FX)
The final Pogo shot. (Image courtesy of Netflix and Wētā FX)
“What I didn’t want to have were these practical dinosaurs that tend to be a little slower moving and restricted, simply from the fact they are massive hydraulic machines, that then intercut with fast and agile CG dinosaurs. It really screams, ‘This is what is CG and this is what’s practical.’ … But having something that introduced some restraint for us, which was the practical on-set dinosaurs, meant we were more careful and subtler in our CG animation. There’s also a lot of fun and unexpected things that happen with the actual animatronics. It might get some shakes or twitches, and that stuff was great. We really added that stuff into Giga wherever we could.”
—Jance Rubinchik, Animation Supervisor, Jurassic World Dominion
This extended even to the point of replicating the animatronic joint placements from the John Nolan Studio creatures into ILM’s CG versions. “All of the pivots for the neck, the head, the torso and the jaw were in the exact same place as they were in the CG puppet,” Rubinchik outlines. “It meant they would pivot from the same place. I was so happy with how that sequence turned out with all the unexpected little ticks and movements that informed what we did.”
Pogo reimagined
The advanced chimpanzee Pogo is a CG character viewers greeted in Seasons 1 and 2 of Netflix’s The Umbrella Academy as an assistant to Sir Reginald Hargreeves, and as a baby chimp. The most recent Season 3 of the show sees Pogo appear in an alternative timeline as a ‘cooler’ version of the character who even becomes a biker and tattoo artist. Wētā FX created each incarnation of Pogo, which drew upon the voice of Adam Godley, the on-set performance of Ken Hall and other stunt performers and stand-ins to make the final creature.
Having ‘lived’ with Pogo in his older, more frail form in the past seasons, Wētā FX Animation Supervisor Aidan Martin and his team now had the chance to work on a character who was capable of a lot more physically, including Kung Fu. “All of a sudden, Pogo’s been in the gym. He’s juiced up. He’s doubled his shoulder muscle mass and his arms are a lot bigger, so the way that he carries himself is completely different. His attitude has changed, too. He’s more gnarled and he’s a lot more jaded about the world,” Martin says.
From an animation point of view, Wētā FX animators took that new physicality into the performance and reflected it in postures and movements. “It was even things like the way he looks at somebody now,” Martin explains. “Early on in Season 1, when he looks at people, he’s very sincere. He was like a loving grandfather. Now, he’s a bit fed up with it all and he’s not looking at you with good intentions. He thinks you’re an idiot and he doesn’t have time for it. That’s where he’s coming from behind the mask.”
Pogo is a grittier character in this latest season, even working as a tattoo artist. (Image courtesy of Netflix and Wētā FX)
“All of a sudden, Pogo’s been in the gym. He’s juiced up. He’s doubled his shoulder muscle mass and his arms are a lot bigger, so the way that he carries himself is completely different. His attitude has changed, too. He’s more gnarled and he’s a lot more jaded about the world.”
—Aidan Martin, Animation Supervisor, Wētā FX
One of the VFX studio’s toughest tasks on this new Pogo remained the character’s eyes. “Eyeline is everything, especially with chimps,” says Martin, who also had experience on the Planet of the Apes films at Wētā FX. “When you’re trying to do a more anthropomorphized performance, chimps with their eyelines and brows do not work very well compared to humans because their eyes are just so far back and their brows sit out so far. For example, as soon as you have the head tilt down and then try to make them look up, you can lose their eyes completely. Balancing the eyeline and the head angle is really difficult, especially on chimps.”
“Even once you’ve got that working, getting the mouth shapes to read properly is also tricky,” Martin continues. “There are some really tricky shapes, like a ‘V’ and an ‘F,’ that are incredibly hard on a chimp versus a human. Their mouths are almost twice as wide as our mouths. Humans look really good when they’re talking softly, but getting a chimp to do that, it looks like they’re either just mumbling or they get the coconut mouth, like two halves clacking together, and everything’s just too big. We used traditional animation techniques here, basically a sheet of phoneme expressions for Pogo’s mouth.”
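Martin’s “sheet of phoneme expressions” amounts to a lookup from dialogue sounds to pre-built mouth poses. A toy version, with invented pose names and weights rather than Wētā’s facial rig, might look like:

```python
# Toy phoneme-to-mouth-shape lookup for a wide chimp muzzle. Pose names and
# weights are invented for illustration; a production rig is far richer.
PHONEME_SHEET = {
    "F": {"lowerLipUnderTeeth": 0.8, "jawOpen": 0.1},
    "V": {"lowerLipUnderTeeth": 0.7, "jawOpen": 0.15},
    "AA": {"jawOpen": 0.6, "lipsWide": 0.2},
    "M": {"lipsPressed": 0.9},
}

def mouth_pose_for(phoneme, scale=0.8):
    """Return scaled blendshape weights; scaling back helps avoid the
    'coconut mouth' look on a muzzle twice as wide as a human's."""
    base = PHONEME_SHEET.get(phoneme, {})
    return {shape: weight * scale for shape, weight in base.items()}

# e.g. {'lowerLipUnderTeeth': 0.64, 'jawOpen': 0.08} up to float rounding
print(mouth_pose_for("F"))
```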
Going hyper (or hippo) realistic
Finding the performance for a CG-animated character often happens very early on in a production, even before any live action is shot. In the case of the Marvel Studios series Moon Knight’s slightly awkward hippo god Taweret, it began when Framestore was tasked with translating the casting audition of voice and on-set performer Antonia Salib into a piece of test animation.
Actor Antonia Salib performs the role of hippo god Taweret on a bluescreen set. (Image courtesy of Marvel and Framestore)
Final shot of Taweret by Framestore. (Image courtesy of Marvel and Framestore)
“The Production Visual Effects Supervisor, Sean Andrew Faden, asked us to put something together as if it was Taweret auditioning for the role,” relates Framestore Animation Supervisor Chris Hurtt. “We made this classic blooper-like demo where we cut it up and had the beeps and even a set with a boom mic. We would match to Antonia’s performance with keyframe animation just to find the right tone. We would later have to go from her height to an eight- or nine-foot-tall hippo, which changed things, but it was a great start.”
Salib wore an extender stick during the shoot (here with Oscar Isaac) to reach the appropriate height of Taweret. (Image courtesy of Marvel and Framestore)
Framestore had to solve both a hippo look in bipedal form and the realistic motion of hair and costume for the final character. (Image courtesy of Marvel and Framestore)
“We looked at a lot of reference of real hippos and asked ourselves, ‘What can we take from the face so that this doesn’t just feel like it’s only moving like a human face that’s in a hippo shape?’ We found there were these large fat sacks in the corners that we could move, and it made everything feel a little more hippo-y and not so human-y. Probably the biggest challenge on her was getting Taweret to go from a human to a hippo.”
—Chris Hurtt, Animation Supervisor, Framestore
During filming of the actual episode scenes, Salib would perform Taweret in costume with the other actors and with an extender stick and ball markers to represent the real height of the character. As Hurtt describes, Framestore took that as reference and looked to find the right kind of ‘hippoisms’ on Salib. “We looked at a lot of reference of real hippos and asked ourselves, ‘What can we take from the face so that this doesn’t just feel like it’s only moving like a human face that’s in a hippo shape?’ We found there were these large fat sacks in the corners that we could move, and it made everything feel a little more hippo-y and not so human-y.”
“Probably the biggest challenge on her was getting Taweret to go from a human to a hippo,” adds Hurtt, who also praises the Framestore modeling, rigging and texturing teams in building the character. “The main thing for animation was that we had to observe what the muscles and the FACS shapes were doing on Antonia, and then map those to the character. Still, you’re trying to hit key expressions without it looking too cartoony.”
To help realize the motion of Taweret’s face shapes in the most believable manner possible, Framestore’s animators relied on an in-house machine learning tool. “The tool does a dynamic simulation like you would with, say, hair, but instead it would drive those face shapes,” Hurtt notes. “It’s not actually super-noticeable, but it’s one of those things that, if you didn’t have it there, particularly with such a huge character, she would’ve felt very much like papier-mâché when she turned her head.”
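Framestore’s tool is proprietary and machine learning-based, but the underlying feel of shapes lagging and settling after a head turn can be pictured with a simple damped-spring filter on a blendshape weight. The following is only an analogy, not the studio’s implementation:

```python
# Analogy only: a damped spring makes a blendshape weight lag and settle
# behind its keyed target, adding follow-through so a big head turn does
# not read as rigid papier-mache. Constants are arbitrary.
def damped_spring_track(targets, stiffness=40.0, damping=9.0, dt=1.0 / 24.0):
    """Filter a per-frame target curve with a damped spring (semi-implicit Euler)."""
    value, velocity = targets[0], 0.0
    out = []
    for target in targets:
        accel = stiffness * (target - value) - damping * velocity
        velocity += accel * dt
        value += velocity * dt
        out.append(round(value, 3))
    return out

# A shape keyed from 0 to 1 on frame 3 lags behind and eases in afterwards.
keyed = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(damped_spring_track(keyed))
```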
The enduring, endearing allure of Groot
The Marvel Studios Guardians of the Galaxy films have borne several CG-animated characters, one of the most beloved being Baby Groot. He now stars in his own series of animated shorts called I Am Groot, directed by Kirsten Lepore, with visual effects and animation by Luma Pictures. The fully CG shorts started with a script and boarding process driven by Lepore, according to Luma Pictures Animation Director Raphael Pimentel.
Luma Pictures Animation Director Raphael Pimentel donned an Xsens suit (and Baby Groot mask) for motion capture reference at Luma Pictures during the making of I Am Groot. (Image courtesy of Luma Pictures)
The behavior settled on for Baby Groot, which had been featured in previous Marvel projects, was always ‘endearing.’ (Image courtesy of Marvel and Luma Pictures)
“There were scripts early on showing what the stories were going to be about. These quickly transitioned into boards. Then, Kirsten would provide the boards to us with sound. She would put them to music as well, which was important to get the vibe. These would then be turned over to us as an animatic of those boards with the timing and sound that Kirsten envisioned, which was pretty spot-on to the final result.”
Baby Groot’s mud bath in one of the shorts required extensive cooperation between the animation and FX teams at Luma Pictures. (Image courtesy of Marvel and Luma Pictures)
Baby Groot still delivers only one line: “I am Groot.” (Image courtesy of Marvel and Luma Pictures)
In terms of finding the ideal style of character animation for Groot in the shorts, Luma Pictures shot motion capture as reference for its animators, which was used in conjunction with video reference that Lepore also provided, and vid-ref shot by the animators themselves. The motion capture mainly took the form of Pimentel performing in an Xsens suit. “We went to Luma and identified the key shots that we wanted to do for every episode,” Pimentel recalls. “We would do one episode each day. As we were going through those key shots, we ended up shooting mocap for everything. Kirsten was there telling me the emotions that she wanted Groot to be feeling at that specific point in time. And we said, ‘Let’s keep going, let’s keep going.’ Next thing you know, we actually shot mocap for everything to provide reference for the animators.”
In one of the shorts, “Groot Takes a Bath,” a mud bath results in the growth of many leaves on the character, which he soon finds ways to groom in different styles. This necessitated a close collaboration between animation and effects at Luma. “That was a technical challenge for us,” Pimentel discusses. “In order for Kirsten to see how the leaves were behaving, she would usually have to wait until the effects pass. We built a very robust animation rig that would get the look as close to final as possible through animation.”
The final behavior settled on for Baby Groot in the shorts was “endearing,” Pimentel notes. Despite Groot’s temper tantrums and ups and downs, he was still kept sweet at all times. From an animation standpoint, that meant ensuring the character’s body and facial performances stayed within limits. “It’s easy to start dialing in the brows to be angry, but we had to keep the brows soft at all times. And then his eyes are always wide to the world. Regardless of what’s happening to him, his eyes are always wide to the world, much like a kid is.”