

By CHRIS McGOWAN
Images courtesy of Amazon MGM Studios and Sony Pictures Television Studios.
Emma can go tiny or giant. In this scene, big Emma fills an entire swimming pool by herself.
The satirical superhero show The Boys is an unabashedly raunchy and gory streaming hit that has a fourth season on the way. It has spun off another series, Gen V, which doubles down on the outrageous elements of The Boys and is set in the same fictional reality – a present-day world in which some individuals have acquired superpowers from being given “compound V” as children and infants by the Vought Corporation. And for super-powered students who want to learn how to be superheroes, there is Vought’s Godolkin University, where Gen V focuses on a group of hard-partying, sex-seeking students who – while obsessing over their Godolkin rankings – find themselves embroiled in a deadly conspiracy.
Foul-mouthed Soldier Boy is struck by lightning in a cameo appearance in Gen V. The spin-off series of The Boys features around 1,350 VFX shots.
The series – from Amazon MGM Studios and Sony Pictures Television Studios – was developed by Craig Rosenberg, Evan Goldberg and Eric Kripke. The cast includes Jaz Sinclair (as Marie Moreau), Chance Perdomo (Andre Anderson), Lizze Broadway (Emma Meyer), Maddie Phillips (Cate Dunlap), Asa Germann (Sam Riordan), Clancy Brown (Richard “Brink” Brinkerhoff) and Patrick Schwarzenegger (Golden Boy). To flesh out the varying superpowers of Gen V’s young superheroes, Visual Effects Supervisor Karen Heston called upon super-talented visual effects artists from across the globe.
“The stunt and fight team coordinated great, choreographically elaborate fight scenes that involved wire work to give the extra oomph. All of that work is practical, and the stunt team did all the flying around in tandem with our talented and adventurous actors and actresses!”
—Karen Heston, Visual Effects Supervisor
Ghost VFX created movements of the eyeglasses and hat to make the invisible Maverick come alive.
“We didn’t have one primary vendor for the show but rather paired a vendor per superpower,” Heston says. “So, Golden Boy [who can set his body on fire] was led by DNEG, Marie’s blood powers were with Rocket Science and Tiny Emma was split between Zoic and Pixomondo.”
“[Sam’s hallucination of battling Muppet-like puppets] was practical and in tandem with knowing that VFX would remove the wires, so they would be free to express themselves to make the puppets act out the scene. VFX had their back to ensure they were taken care of, so they had the freedom to do what they needed. We supported that scene only by adding more glitter [the puppets’ “blood”] to what was already there, because in the world of The Boys [including Gen V] you can never have enough blood, even if it is glitter blood!”
—Karen Heston, Visual Effects Supervisor
Heston continues, “Luma took over some hard-surface vehicle shots that play into Andre’s powers, such as the ambulance sequence and the helicopter sequence. RISE was a part of Cate’s Dream Sequence – another creatively fun challenge to tackle with the disintegrating house and trees – emulating Cate’s deteriorating mental state. RISE did a great job!” In addition, she praises Ghost VFX in Copenhagen for giving a personality to Maverick, who has the power of invisibility.
Size-adjusting Emma, here with roommate Marie, was featured in different shots from normal size to tiny to gigantic. Zoic Studios and Pixomondo shared the VFX.
Gen V has some 1,350 VFX shots. Crafty Apes, Ingenuity Studios and Playfight VFX were among the other contributing vendors. A volume stage was not used, and greenscreen work was employed selectively as a last resort. “We leaned into as much practical as possible. The set design was fantastic!” Heston remarks. For a sex scene with a miniaturized Emma interacting with a normal-sized lover and a scene of her killing a guard by entering his ear and head, outlandish props and prosthetics were required. “Production built a giant ear, for example, for Emma, and even a giant [five-foot] penis!” Heston adds.
Throughout the series, there were many bodies and/or objects hurled through the air that required stunts and special effects. “Oh yeah,” Heston enthuses. “The stunt and fight team coordinated great, choreographically elaborate fight scenes that involved wire work to give the extra oomph. All of that work is practical, and the stunt team did all the flying around in tandem with our talented and adventurous actors and actresses!”
Sam’s hallucination of battling Muppet-like puppets “was practical and in tandem with knowing that VFX would remove the wires, so they would be free to express themselves to make the puppets act out the scene,” Heston explains. “VFX had their back to ensure they were taken care of, so they had the freedom to do what they needed. We supported that scene only by adding more glitter [the puppets’ “blood”] to what was already there, because in the world of The Boys [including Gen V] you can never have enough blood, even if it is glitter blood!”
Finding the look and feel of Marie’s blood powers, which included wielding her own blood as a deadly weapon, was driven by her personal journey and character arc established from The Boys. Rocket Science VFX developed Marie’s blood powers.
Marie can weaponize her own blood, such as by wielding it as a deadly tendril. Rocket Science VFX’s work on Marie’s blood powers “went through many different iterations and creative arcs,” Heston notes. “After much exploration, we went with what is always most successful – leaning into the story. For VFX artists in general, and The Boys universe specifically, story is the main driver for the need for enhanced visual effects. So, with Marie, finding the look and feel of her blood powers was driven by her story. Her personal journey and character arc were emulated in how her blood powers throughout the show come about.”
In Episode 101, as Marie is still learning her powers and getting her footing in the show, “we see her show up in the messy blood splatter that flies off the tendrils,” says Heston, who adds, “Later, as she finds more confidence and more footing in her journey, the [tendrils] become cleaner and more powerful, leading to our final sequence where she can wield the blood shards. So, we wanted them to have their arc as she finds herself. FX and CG artists led by Adam Jewett at Rocket Science were directed to pull this work off for Marie, and it turned into an eerily beautiful effect once we were done with it.”
“We see [Marie] show up in the messy blood splatter that flies off the tendrils. Later, as she finds more confidence and more footing in her journey, the [tendrils] become cleaner and more powerful, leading to our final sequence where she can wield the blood shards. So, we wanted them to have their arc as she finds herself. FX and CG artists led by Adam Jewett at Rocket Science were directed to pull this work off for Marie, and it turned into an eerily beautiful effect once we were done with it.”
—Karen Heston, Visual Effects Supervisor
“Like Marie, Golden Boy had his own creative journey to land his powers. For the initial intro to his powers, he is fighting confidently with Incredible Steve [played by Warren Scherer], so for that fire we wanted it to be a little more upbeat, and we laid into the bright solar flares and played that up,” Heston explains. “Then, as he becomes unstable and is coming off the killing of Brink and about to kill himself, his flames become more chaotic, and we introduce the contrast of black smoke more in these following sequences.”
Heston continues, “The FX sims in the hallway and leading up to his explosion were all by design supporting his character arc and his story. This was in collaboration with the creative direction of Andrew Simmonds [VFX Supervisor] at DNEG. He was my partner in crime at DNEG to get this high-level work looking not only consistent, but rad! It was Stephan Fleet’s and [my] goal to deliver a fresh new take to the ‘fire guy’ we have all seen before in the likes of Extremis and Flash, and I think we did that!” Fleet was VFX Supervisor for The Boys and Co-Producer of Gen V.
To introduce Golden Boy, who can set his body on fire, bright solar flares were used to evoke an upbeat entrance. As he becomes unstable, his flames become more chaotic. DNEG created the fire effects.
The invisibility-powered Maverick (Nicholas Hamilton) was an entertaining character to work on, according to Heston. Ghost VFX “took that on as their own and had their animators put in the extra sauce” that made Maverick into an individual, Heston says. “Being that Maverick is just a hat and glasses, I decided to add more personality to the animation passes. We decided to play up the facial gestures such as scrunching your nose or subtle glasses adjustments, wiggling of the hat, etc.” These typically subtle gestures were given a lot more movement than usual so that one can ‘see’ that it’s Maverick.
Emma, with her shrinking/growing powers, was featured in many different types of shots, from normal in size to tiny to gigantic. “The Emma penis scene [where she is diminutive] was one of those moments where you’re both amazed and chuckling at the same time. Pixomondo absolutely nailed it!” Heston says.
Production built a giant ear for Emma to enter a guard’s head and exit out the other side.
“The main ingredient that made tiny Emma shine and feel ‘big’ early on were some lessons learned from The Boys,” Heston reveals. “This was a particular instance where it was beneficial to have Stephan Fleet come on to shed light on his experience on keeping the tone of The Boys universe in our show. Any The Boys fan can guess I am talking about lessons from the beloved Termite [Brett Geddes]. So right away, we knew some key VFX ingredients would be the need for some dust particulates as well as some defocus used selectively.” Other tricks of the trade that made Termite shine in The Boys were applied to both Tiny and Big Emma.
“Like Marie, Golden Boy had his own creative journey to land his powers. For the initial intro to his powers, he is fighting confidently with Incredible Steve [played by Warren Scherer], so for that fire we wanted it to be a little more upbeat, and we laid into the bright solar flares and played that up. Then, as he becomes unstable and is coming off the killing of Brink and about to kill himself, his flames become more chaotic, and we introduce the contrast of black smoke more in these following sequences.”
—Karen Heston, Visual Effects Supervisor
It was a big job to lead a team of artists across many vendors around the world, according to Heston. “That challenge was met with grace amongst our VFX producing team, including Sean Tompkins and Rebecca Burnett, who helped to drive the series home. It is difficult to unify such a large team across many time zones, but I couldn’t have asked for better partners to keep the positive vibes high during crunch time. I couldn’t have done it without them and all of the VFX artists and talent in our rather big Emma-sized Gen V VFX team.”
On her experience with the series, which has been renewed for a second season, Heston says, “When I got the call for Gen V, I was already a huge fan of The Boys. Let’s just say it’s not a show you watch with your squeamish friends, right? To sum it all up, working on Gen V was a whirlwind of creativity, challenges and much fun.”
By Naomi Goldman
The Visual Effects Society is proud to announce the release of the highly anticipated The VES Handbook of Virtual Production – the most comprehensive guide to virtual production techniques and best practices available.
Edited by VFX Producer Susan Zwerman, VES, and renowned Visual Effects Supervisor Jeffrey A. Okun, VES, The VES Handbook of Virtual Production features real-world expertise gleaned from 82 experts in the world of Virtual Production in areas including VR, AR, MR, and XR technology, as well as detailed sections on interactive games, full animation and Unreal and Unity to provide real-time in-camera VFX. Additionally, the authors share their best methods, tips, tricks, and shortcuts developed as hands-on practitioners.
In announcing the release of the book, VES Chair Lisa Cooke said, “We are excited to bring forth the VES Handbook of Virtual Production, which compiled the latest, industry-standard technologies and workflows for the ever-evolving, fast-paced world of virtual production. We embrace the responsibility and opportunity to provide ongoing education for VFX practitioners, producers and filmmakers, and are proud to offer this invaluable resource on our art and craft.”
“This is a must-read resource for all production professionals – no matter their craft – who are looking to gain essential knowledge in virtual production,” said Zwerman. “The writers have combined wisdom and practicality to produce an extraordinary book that covers all of the essential VP techniques and solutions, from pre-production through filming in LED volumes to post-production.”
“The VES Handbook of Virtual Production is incredibly timely as there has been a seismic shift in how visual effects are being created and there is no informational handbook available on what artists, teachers, students and other VFX professionals need to learn,” said Okun. “This handbook on Virtual Production covers essential techniques and solutions for all practitioners, making it THE guide that demystifies virtual production so that more producers, art directors and filmmakers can navigate this new technology.”
The comprehensive VES Handbook of Virtual Production covers topics including: Visualization; VAD (Virtual Art Department); Volumetric Capture; How to Capture Environments; LED Stage Setup; LED Display; Software/Hardware for VP; Cameras and Camera Tracking; Color Management; External Lighting for the Volume; Challenges and Limitations of Shooting in a Volume; and a Virtual Production Glossary.
The VES Handbook of Virtual Production is available for purchase at Routledge.com at https://bit.ly/VES_VPHandbook or on Amazon.com at https://www.amazon.co.uk/VES-Handbook-Virtual-Production/dp/1032432667/
By TREVOR HOGG
Images courtesy of Framestore and AppleTV+.
Two suns and fog were key elements to get the proper lighting and make the shots readable for the space battle that takes place over Terminus.
For the second season of the AppleTV+ series Foundation, Framestore was responsible for 320 shots shared between facilities in Montreal and Mumbai, with contributions from Framestore Pre-Production Services (FPS) based in London. The work ranged from creating the planet of Oona and creatures such as Stone Eaters, Moonshrikes and a Bishop’s Claw to a space battle over Terminus.
Bishop’s Claws like Beki were inspired by cheetahs that are always poised to strike or chase.
“Having Framestore Pre-Production Services helped a lot to start the project because they already had discussions with the client, had all of the art department references and had already done a build of scenes and assets,” states Laureline Silan, VFX Supervisor at Framestore. “There was something to start with but not anything we could reuse right out of the box as they had to be improved a lot. Camera movements were reused, especially for the space battle and the Stone Eaters chasing Beggar’s Lament. Alterations had to be made because the assets and environment were changed. However, most of it is quite close to what was developed before.”
The Bishop’s Claw from Season 1 had to be enhanced into a hero asset for Season 2.
Numerous references were provided by Chris MacLean, Production VFX Supervisor. “First, there were some concepts that Chris had already done with the art department in London, and he showed us some references for how he wanted to go,” Silan remarks. “Then we asked our visualization department in Montreal to help with that so Chris did not have to wait too long to see the asset. It was a quick and easy back and forth. For the environment of the Monuments of Industry, we received the plates, which needed to be extended and have a huge monument that is about two kilometers high. We had some 2D painting done on top of the plate so the environment team could take that as a base.” Conveying the proper size and scale was hard. “When you only have deserts and dunes and no vegetation, it’s complicated to feel the size in the distance. For the dunes, we watched a lot of desert references. The desert in California had a unique landscape where you could put the camera. We came to realize it was because of the change of color, shadows and the way that the light alters slightly that you could tell the size in the distance,” Silan observes.
Depicting different colors in the sand was critical to being able to create a sense of distance in shots.
“If you look closely at the animation, the Stone Eater is slowly moving its legs and then bam! it’s hitting the ground in a more accelerated way, but they never cover a long distance. That makes them scary. It has a rolling mechanism with teeth in the center of the body. … Chris MacLean [Production VFX Supervisor] and Mike Enriquez [Production VFX Supervisor] were keen to have a mechanism that made sense. You actually have pistons, and when the Stone Eaters move the mechanism moves as well and everything has a purpose. The four red dots are lasers to destroy the stones.”
—Laureline Silan, VFX Supervisor, Framestore
The design of the Stone Eaters was influenced by crabs.
There were close-up shots of the monuments. “It was all about the number of small details that we were piling up on the textures,” Silan explains. “There was also a little bit of DMP to have some randomness.” Some parts of the monuments have eroded. “We played a lot with displacement for the rocks, which is not the same everywhere,” she adds. “It is based on the type of rock formation that you can find in the Mojave Desert. Sometimes those rocks are more reddish while other times paler and decolorated. When you have them in the sun, there is another type of rock, erosion and color than the ones that are always in the shadows. We tried to mimic that within our huge environment. For the monument we added some extra passes of texture to some specific areas like the hands, eyes or the bottom of it.” An effect pass of sand accumulation was helpful in making the environment believable. “Sometimes it was done by environments or effects because we placed some floating sand in there. Because of the wind, the sand would be blowing around,” Silan adds. The shots of the monuments with Stone Eaters were full CG. “Whenever the characters were in it, we used the drone plates that were provided. Usually, the junction was done in DMP. Sometimes you have elements of sand floating in between the plates and the CG extension. Every time you are close up it’s mostly a full CG shot.”
Every mechanism in the Stone Eaters has a purpose.
“For the Moonshrike stampede, we had animation do a lot of different cycles, like them avoiding each other or the head going up above the herd. …We found this compromise where the center of the stampede would follow their one and only purpose, which is to go to the moon, but the ones on the sides would go out of the stampede to add discrepancy in the shot. Chris and Mike didn’t want the wings to be too visible until it flies because they wanted that surprise effect. The wings had to be folded as the Moonshrikes were running. We did two different models and built a rig that allowed us to transition from one to the other when they’re flying away.”
—Laureline Silan, VFX Supervisor, Framestore
An effect pass of sand accumulation was helpful in making the Monuments of Industry believable.
Motion studies were done for the Stone Eaters, which are crab-like machines. “If you look closely at the animation, the Stone Eater is slowly moving its legs and then bam! it’s hitting the ground in a more accelerated way, but they never cover a long distance,” Silan explains. “That makes them scary. It has a rolling mechanism with teeth in the center of the body. You can actually understand that when they’re pulling the sand away. There’s a net that is never deployed, which was supposed to catch all of the pebbles that are being thrown up. Chris MacLean and Mike Enriquez [Production VFX Supervisor] were keen to have a mechanism that made sense. You actually have pistons, and when the Stone Eaters move the mechanism moves as well, and everything has a purpose. The four red dots are lasers to destroy the stones.” The Stone Eaters are made from a hard iridescent metal. “There were a couple of extra texture passes for scratches, damage and decolorization,” Silan notes. Moonshrikes, which inhabit the planet of Helicon, are a cross between a rhinoceros and a bird. “For the Moonshrike stampede, we had animation do a lot of different cycles, like them avoiding each other or the head going up above the herd,” Silan describes, adding that a buffalo stampede was referenced. “That was boring because the buffalo were all running in the same direction. We found this compromise where the center of the stampede would follow their one and only purpose, which is to go to the moon, but the ones on the sides would go out of the stampede to add discrepancy in the shot. Chris and Mike didn’t want the wings to be too visible until it flies because they wanted that surprise effect. The wings had to be folded as the Moonshrikes were running. We did two different models and built a rig that allowed us to transition from one to the other when they’re flying away.”
Erosion was not treated uniformly for the Monuments of Industry.
“Beki was one of my favorite characters that we had to do on Season 2. … The request from the client was to enhance the asset so it could become a hero asset. That’s how the whole Beki personality started. We needed the audience to bond with her, which is complicated by the fact she has 10 eyes and you don’t want to meet her in a dark alley! We had to think about all of the aspects we needed to have and change to make sure that we can feel empathy for her. Luckily, she never looks at the camera. Beki is a well-behaved actor!”
—Laureline Silan, VFX Supervisor, Framestore
A buffalo stampede was referenced for the charging pack of Moonshrikes.
Getting the audience to empathize with the Bishop’s Claw known as Beki was a fascinating task. “Beki was one of my favorite characters that we had to do on Season 2,” Silan reveals. “We received the assets from Season 1. You see a wide shot of a Bishop’s Claw in the dark. The request from the client was to enhance the asset so it could become a hero asset. That’s how the whole Beki personality started. We needed the audience to bond with her, which is complicated by the fact she has 10 eyes and you don’t want to meet her in a dark alley! We had to think about all of the aspects we needed to have and change to make sure that we can feel empathy for her. Luckily, she never looks at the camera. Beki is a well-behaved actor! Our animation supervisor suggested we do some changes in the eyes because the ones we received were all of the same size, so it was complicated to create some kind of eye behavior. We also created an eyelid for the top and bottom. Once you have that and some motion in there and flexibility in the shape of eyes, it actually worked. The character has a heart, brain and exists.” Beki is based on a cheetah. “She is a big cat and often has this pose where you feel that she is going to jump at or chase someone.” On set was a proxy head which was moved around. When Brother Constant is riding Beki and you can see the whole body, they had a rig. “We had to do a body track. The feet had to touch the ground, and it had to look like a cheetah. That was a big challenge for animation but they managed.”
The wings of Moonshrikes were hidden to surprise the audience that they could actually fly.
“The distances in space are huge, but the good thing is that we added some fog, which is not possible [in space], but it’s science fiction, so why not! We had the camera from FPS that we used, and the client asked us to add the dogfight in the back; that was something that animation had to choreograph. … The Empire fighters fired orange bursts of energy while the Whisper-ships shoot blue lasers. That’s how in the motion blur you could differentiate the two sides of the battle.”
—Laureline Silan, VFX Supervisor, Framestore
The Empire ships fire bursts of orange energy while the Whisper-ships shoot blue lasers.
Another massive environment was outer space. “The distances in space are huge, but the good thing is that we added some fog, which is not possible [in space], but it’s science fiction, so why not!” Silan laughs. “We had the camera from FPS that we used, and the client asked us to add the dogfight in the back; that was something that animation had to choreograph.” The opposing forces were distinguished by giving them distinctly colored lasers. “The Empire fighters fired orange bursts of energy while the Whisper-ships shoot blue lasers. That’s how in the motion blur you could differentiate the two sides of the battle,” Silan notes. Lighting was provided by two suns. “The problem is that you have Terminus in the background that still needed to be readable as well as the Aegis ship, a huge blockade and the spaceship battle in front. We always had a sun covering most of Terminus, but still keeping some parts in the shadows. In the space battle, nothing is really contrasted. Everything had light. There was a second sun that was never completely far away in terms of rotation. We were not doing a complete 180. It’s a slightly different angle and position, but it was taking care of lighting the space battle. Then there is a fill light because we wanted to keep some parts of the planet and ships in the back and still have some details there. The cinematography was not going for 2001: A Space Odyssey, which was bright versus black. It’s more non-contrast. That’s why we have fog, as it helps us to read all of the elements.”
By TREVOR HOGG
Images courtesy of AppleTV+.
CG crowds had to be produced to create the impression that there are 10,000 inhabitants.
With the Earth rendered uninhabitable, 10,000 people seek refuge in a 144-story silo buried in the ground, where questions arise that threaten the stability of the dystopian society. This is the premise for the AppleTV+ production Silo, which is based on the sci-fi Wool series by novelist Hugh Howey and has been adapted for the streaming service by Graham Yost (Justified). To achieve the necessary scope of the massive structure, big practical sets were constructed by Production Designer Gavin Bocquet (Jack the Giant Slayer) and expanded upon with bluescreen.
“There was a section that was bluescreen to allow ourselves extensions and the ability to differentiate the levels. One of the challenges was, ‘How do you avoid repetition and continuity issues?’ The entire set would be converted to bluescreen with the balconies and then we would be able to take over the build. None of those sets [for the other floors of the silo] were built because of the sheer scope and the time it takes to change the set. We also had a smaller replica of the main set to access into other floors.”
—Daniel Rauchwerger, Visual Effects Supervisor
Given the scope of the environments whether it be the interior of the silo or the outside world, bluescreen was unavoidable as only so much could be constructed practically.
MPC and Outpost VFX handled the bulk of the 2,300 visual effects shots along with Rodeo FX, DNEG, Zoic Studios and FuseFX. “There was not much that we could have built completely to not need visual effects,” notes Visual Effects Supervisor Daniel Rauchwerger (F9: The Fast Saga). “It was quite an interesting thing with Mark Patten [Raised by Wolves] because it was a balance of what you do with your DP to get that ability to transition nicely between our bluescreen and CG worlds. Mark had a great part in designing the show and making it what it is.” A lot of the concept art was done in SketchUp by the art department. “We did not use the models because the tools that we use are quite different. Each vendor had its own modeling methodologies and tools. Gavin and I worked well together. It was an intriguing relationship because the production design and continuity of our world meant that a language had to be created that we both speak and understand. There was no holding back in the design of the silo.”
LED screens were utilized to project the images of the outside world for the cafeteria set.
The main set was dressed as generic apartments. It was not a complete circle. “There was a section that was bluescreen to allow ourselves extensions and the ability to differentiate the levels,” Rauchwerger explains. “One of the challenges was, ‘How do you avoid repetition and continuity issues?’ The entire set would be converted to bluescreen with the balconies and then we would be able to take over the build. None of those sets [for the other floors of the silo] were built because of the sheer scope and the time it takes to change the set. We also had a smaller replica of the main set to access into other floors. We had the access bridge and actual access into the cafeteria set so we could do those shots and use the massive screen at the end of it.”
Silo was shot using a new set of lenses developed by Caldwell based on Panavision anamorphics called Chameleons.
“I dreaded it, but I was also excited about the blackout because I like this moment where everything drops into complete darkness. It was a bit of a weird one. There was no reason for the Hollywood moonlight or ambience, but we shot it brighter than what we were planning on using to allow us some range in the frame to do the amount of visual effects work that was needed.”
—Daniel Rauchwerger, Visual Effects Supervisor
Grayscale models and previs showing the level devoted to growing crops for the inhabitants living in the silo.
Virtual production was utilized for the screens in the cafeteria showcasing the devastated outside landscape. “Virtual production introduces numerous challenges that we usually keep for the final post-production,” Rauchwerger observes. “Because of the nature of the space and we were in such a dark contained environment with no windows or exterior lighting, we wanted the light from that screen to affect how the people were being lit. You don’t want to have bluescreen or greenscreen spill across everything as it would be quite unpleasing to see.” The screen ratio was built from five cameras. “When we get close enough to the screen and the actor had to be cleaning the lens, the hand completely covered the lens, so that’s where we had to cheat. The actor is on one camera that is comped on top of the five-camera array used for wider shots,” Rauchwerger notes.
Initially, there were going to be thousands of floating Chinese lanterns, but fewer turned out to be more dramatic.
A vertical section was physically constructed for the trash chute that spans the entire height of the silo. “Part of it had to be open to be able to play with the camera and movement,” Rauchwerger remarks. “We had to drop a CG air conditioner. It became a full CG shot because we couldn’t quite get that feeling of constant distance. There was a lot of mapping on this show.” Floating Chinese red lanterns emphasized the vastness of the human refuge. Rauchwerger comments, “That is a fun one because it started very differently from the final version; because when you draw a storyboard, design and work on it, you need to find the right balance in the number of lanterns and in the interactive lighting. It was a big challenge on the technical level because of the crowd in terms of movement. We had loads of interactive lighting on set and cranes that we built with moving LEDs to create the feeling of traveling. We were always aiming to be subtle with it to leave room for an extension. It played well in the final sequence.”
Human characters provide a scale reference to the massive structure of the silo.
Darkness prevails in the silo during two blackouts. Rauchwerger reveals, “I dreaded it, but I was also excited about the blackout because I like this moment where everything drops into complete darkness. It was a bit of a weird one. There was no reason for the Hollywood moonlight or ambience, but we shot it brighter than what we were planning on using to allow us some range in the frame to do the amount of visual effects work that was needed. It was always about what would feel natural for the electrical system or structure like that to go and what do you leave working? The first blackout was when they were dimming the lights down for the festival and then the full one, which had to be different as well. How do you do 90% blackout versus 100%? You can see that some floors have a delay so we can keep reading the texture on the sides of the silo without going to a point where you see nothing. Then we had to build CG people with flashlights to counter what we had done.”
The silo is buried 144 stories into the ground.
Aberrations, like the glitch that occurs when the hand of an exiled Juliette Nichols (Rebecca Ferguson) goes through a holographic image, had to appear accidental despite being intentional. “It was quite a process,” Rauchwerger admits. “Rodeo FX picked it up quite late in the show to get that look. The first thing that clicked was the design of it, which came quite fast, but then refinement of it is the slow process. We always talked about what would make the glitch still retain that illusion longer without feeling like it’s two dimensional. We decided to go with something that would almost treat that as an AR environment which has volume. How do you break that volume? Is it the distance from the home base that she is moving to versus the distance of the hand? What is the illusion? That’s where the boulder instead of the dead body came into play because we wanted to have everything happen as she moves between the AR coverup to the real-world underneath. What does it do and how does the system try to fix it? The hand moves through something, and the system constantly tries to recalculate how to make another part of a dead body into a rock.”
Critical to keeping the silo operational is the power generator.
“[W]e had to look quite far into the future and into the books as to what was the story behind what we are revealing. We had to be careful about what we were revealing. It was mapped out, built, and there are a lot of nice details that we have carefully placed in there. I hope that one day people will be able to look at it and go, ‘It all registers.’ I’m proud of that shot [the final reveal at the end of the show] because it leaves you with a big, ‘Ahhhhh.’”
—Daniel Rauchwerger, Visual Effects Supervisor
The outside world is depicted as one of apocalyptic devastation.
After all of the time spent in the confines of the silo, an emotional release occurs with the wide aerial shot of the world outside. “There was one plan for the shot, but we decided to offer something quite different,” Rauchwerger explains. “We knew what we wanted to show, and it’s all about the big final reveal at the end of the show. We tried to figure out how to reveal the world in an interesting way that feels natural and connected to what you can do with a helicopter. We worked with Rodeo FX on this one as well. First of all, we had to look quite far into the future and into the books as to what was the story behind what we are revealing. We had to be careful about what we were revealing. It was mapped out, built, and there are a lot of nice details that we have carefully placed in there. I hope that one day people will be able to look at it and go, ‘It all registers.’ I’m proud of that shot because it leaves you with a big, ‘Ahhhhh.’”
By TREVOR HOGG
Images courtesy of Sikelia Productions and Apple Studios.
For Martin Scorsese, the face, eyes and body language of Lily Gladstone were intrinsically correct for the role of Mollie Burkhart. (Photo: Melinda Sue Gordon)
Ill fortune may be the best way to describe the fate of the Osage Nation when oil was discovered on their Oklahoma reservation and, subsequently, they began to die under mysterious circumstances. The newly-formed Bureau of Investigation (predecessor to the FBI) discovered a murderous conspiracy masterminded by cattleman William Hale to gain control of the Osage headrights and seize the wealth generated by the resources extracted from the land. The nefarious true story was the subject of the book Killers of the Flower Moon: The Osage Murders and the Birth of the FBI by David Grann and has been adapted into an epic historical crime drama by renowned filmmaker Martin Scorsese (Goodfellas) on behalf of Appian Way, Sikelia Productions, Imperative Entertainment and Apple Studios. “It’s a picture that explores what love is, what it could be and what all of us are capable of,” Scorsese states. “One can become complicit without even realizing, and when do you realize, do you change?”
Characters and scenes drove the editorial process. “Whole scenes are interwoven, so ultimately it creates a whirlpool midway through or maybe an hour into the film,” Scorsese explains. “If the film is speaking to you, then you’re stuck in it, like the characters are stuck and can’t get out.” Even with the rising body count, there is restraint in depicting the violence. “Marty was conscious of the continuing pain that the Osage feel about this horrible time,” notes Thelma Schoonmaker, who established a life-long creative partnership with the filmmaker when she restored his student film while attending a six-week film course at NYU and has subsequently won Oscars for editing Raging Bull, The Aviator and The Departed. “Almost all of the killings are in a wide shot, and they worked incredibly well.” The unique narrative structure of Killers of the Flower Moon was partially caused by the script being originally much larger. “You’re being jerked from one scene or time frame to another, and that was the result of the cutting down, but also deliberate after awhile,” Schoonmaker reveals. “We realized that we could do that, and it pulls you along in the film.”
The oil geyser was achieved practically and inspired by an iconic cinematic moment in Giant, starring James Dean, Rock Hudson and Elizabeth Taylor.
“What I was trying for mainly was giving a real impression of being out there on those prairies. And yet I found that we still needed visual effects to give a sense of scale and place that one can see in a shot where the car is going down the road at the beginning of the film. The camera booms up and you see the prairie on left and right, but then slowly oil rigs start to appear, and visual effects helped us to create a real sense of the oil encroaching on nature.”
—Martin Scorsese
Visual effects have become a more significant cinematic tool for Scorsese ever since depicting Potala Palace in Kundun and was central in being able to de-age characters in The Irishman. “We rely heavily on the visual effects editor because I will go into the room and say, ‘Can you take this person out of the shot?’ Or, ‘Can you remove or change the color of this?’” Schoonmaker remarks. “It’s wonderful to be able to do that and have it done for us quickly. Marty required a certain double image in one frame of two children’s faces in the first scene of the film and [VFX Editor] Red Charyszyn was able to create that beautifully, and that’s actually in the film now.” Even though an effort was made to shoot in the actual locations such as the doctor’s office of the Shoun brothers and the Masonic lodge where the meetings take place, visual effects were still needed. “The art department did a great job of restoring the actual towns that we were shooting in, but they could only go so far,” Charyszyn states. “We were going to be more involved because we were talking about putting oil derricks everywhere, making the town properly period, and so many cows!”
It was important to eliminate any indication of modernization, especially for the wide shots.
It was essential for Martin Scorsese to shoot in the locations where the actual historical events took place. (Photo: Melinda Sue Gordon)
The mandate to capture as much as possible in-camera led to a real train being brought in for principal photography.
Principal photography mainly took place in Pawhuska instead of Fairfax, Oklahoma, where only three to four blocks were art-directed and the rest were completely CG.
A practical oil derrick was built, scanned and replicated digitally to create a massive oil field that is encroaching on nature.
“What I was trying for mainly was giving a real impression of being out there on those prairies,” Scorsese remarks. “And yet I found that we still needed visual effects to give a sense of scale and place that one can see in a shot where the car is going down the road at the beginning of the film. The camera booms up and you see the prairie on left and right, but then slowly oil rigs start to appear, and visual effects helped us to create a real sense of the oil encroaching on nature.” ILM was the sole vendor and responsible for approximately 700 shots, which were supervised by Pablo Helman, who previously worked on Silence, Rolling Thunder Revue and The Irishman. “Generally, we did the oil rigs, enhanced the train, and there were shots where we had hundreds and hundreds of cows. We tried to shoot cows, but the temperature was like 96 degrees, so at 8:30 a.m. you have 350 cows and by 11 a.m. you have 11. They all go to the shade! You can’t move them. That’s it. We shot mainly in Pawhuska for Fairfax. There were only three or four blocks that were art-directed, and the rest was completely CG.”
A practical oil derrick was built and scanned. “We went in doing the previs, shot the shot, then I took it in and did postvis on it,” Helman states. “I said, ‘What if at the beginning we see something but don’t know what it is, but it’s an oil rig.’ Then we start seeing all of the oil rigs. Marty said, ‘That’s great. Let’s do that.’ We went back and forth with placing a certain number of rigs because he wanted to reveal it little by little until the end.”
The inclination for cattle to wander off in search of better pastures before the camera began rolling contributed to them being expanded upon digitally.
A hat and blood were added digitally for the scene when the car crashes into the tree. “He didn’t have a hat and blood when we shot it. They edited the movie, and Marty said, ‘I don’t have any way to know who this character is, but he does wear a hat all of the time.’” The car was actually constructed from two different takes. “The first time, the hood broke. We did it again and it did the right thing, but Marty liked the performance of the first one, so we had to split it.”
Black-and-white newsreel footage transitions into color when Ernest Burkhart (Leonardo DiCaprio) arrives on the train at Osage Nation for the first time. “The last shot of an Osage pilot standing in front of a plane is actual newsreel footage from the family of the current Chief of the Osage, Chief Geoffrey Standing Bear,” Schoonmaker reveals. “We had to decide how much grain to then use as we go into the train from that actual footage.” Adjustments were made incrementally with the final version sent to ILM to copy exactly. “Marty was particular about how the saturation had to come up,” Charyszyn recalls. “But then also I thought it was going to be mathematically diametrical to the grain lessening, but he wanted the impact of the color coming in to reach you and be so subtle.”
Plate photography of the house explosion by Cinematographer Rodrigo Prieto and the final image produced by ILM. (Photo: Melinda Sue Gordon)
ILM was the sole vendor and responsible for approximately 700 shots.
The Bureau of Investigation gathers in an oil field to discuss the progress of their Osage murder investigation.
Some of the scenes take place in Washington, D.C., with backgrounds added via bluescreen.
“This is using visual effects as a tool to tell the story in a completely invisible way. The whole movie is a piece of art. There are no compromises.”
—Pablo Helman, VFX Supervisor
While driving home from the set in Oklahoma, Scorsese witnessed local farmers burning off their fields, which led to some surreal, hellish imagery appearing when it is revealed that Ernest Burkhart is poisoning his Osage wife, Mollie (Lily Gladstone), under the orders of his uncle, William Hale (Robert De Niro). “We began to encounter areas that were burning all around us, and at a certain point it became like we were in the middle of a volcano,” Scorsese notes. “It’s almost like when you hear the term ‘fever dream’. What does that feel like when you’re having a fever of that kind? How do you see things? Mollie is in the fever, and so is Ernest.” Dancers were hired to create weird background figures. “The footage [by Cinematographer Rodrigo Prieto] was stunning already,” Schoonmaker states. “One guy almost looks like someone on the cross.” The fire was practically achieved by a newcomer to Scorsese’s inner circle. “We created a foreground fire, which was the lower one with the guys walking around it, and there are a bunch of little spot fires,” reveals Brandon K. McLaughlin, Special Effects Coordinator. “We produced that by burying 60 to 70 bars in the ground. It was 250 feet long. Then the background ones were bars laying on the ground because we were never going to go back that far. I was blown away by how awesome it came out.”
It was a learning curve for McLaughlin. “The opening sequence of the movie with the pool of bubbling oil – that happened in reality,” explains McLaughlin, who used a product from Blair Adhesives that is a combination of Methocel, water and food coloring as a nontoxic substitute for oil. “But Marty kept saying that he wanted this geyser to come up from the ground. Oil doesn’t do that. Everyone kept bumping on that because Marty had said he wanted to keep it as true to the story as possible from the get-go. I finally asked him, ‘Is it an artistic piece that you’re putting together or is this something you’re going to want to shoot and have physically come out of the ground, like we dig a hole, put a nozzle in, bury our line, and you see it come from the ground?’ When Marty told me that’s what he wanted, then it made perfect sense.” Giant, starring James Dean, Rock Hudson and Elizabeth Taylor, was a cinematic reference. “There is this wonderful scene where oil erupts from the ground and covers him [James Dean] with oil,” Scorsese states. “I start with what’s there, and if it isn’t there, what would I like to be there which will give it a natural appearance; then, if the scene calls for it, to introduce elements that can be somewhat unrealistic but appear to the mind as an image that you would observe.”
Executing the bank vault explosion was a blast for McLaughlin. “I got giddy because the script simply said, ‘Asa Kirby comes in, puts too much pyro in this vault door and it blows up.’ We had the liberty to run with it. The door was built out of steel and I used a rapid accelerator, which is a 4:1 pulley system with a pneumatic ram. You pull the sheaves apart and multiply how much pressure you need to pull ‘x’ amount at what weight. I wanted it to dance across the floor. We made some changes because there were a couple of things on the door that would break, so we made those materials stiffer and heavier. Then, behind the door we put all of the pyro, the fire, sparks and money. We also had a shotgun mortar with pyro and sand in it. You use it as a fist for the most part. You can’t see it because it happens so fast. For the money, we had drop baskets overhead out of camera. We hit the pyro events on the latches, the doors would open and the money would fall down.”
Originally, the owl, which is a symbol of impending death, was going to be digital. “But we found an owl that did what it was supposed to do,” Helman remarks. “Another thing that we did that was interesting and invisible was people getting sick. How do you do that? You have a three-and-a-half-hour movie, and you’re telling the story of somebody getting sicker and sicker and dying. It’s difficult to do with makeup because you’re always shooting out of continuity. You have to be careful. We’re doing takes and takes of stuff. It’s difficult to keep track of it. It is a lot easier to go with a base makeup and then go in post.” A combination of things was done digitally. “It’s getting under the eyes darker, being gaunt there, doing some warping or some 3D work where you take some swelling, or sometimes you make them swell more or sweat. We did some research as to what cyanide and poison does.” The book on Osage culture had to be altered digitally in order to be narratively relevant. “We didn’t have the right book, so a lot of the pages were replaced. There was a piece of art that we produced that was researched, period correct and had to bend appropriately. De Niro says to DiCaprio, ‘Can you spot the wolves?’ The drawing in the book didn’t have the wolves visible and Marty needed to see them, so we had to replace it digitally.”
With much being made of Scorsese’s two muses, Robert De Niro and Leonardo DiCaprio, sharing the screen together, the real star is Lily Gladstone, who plays Mollie. “She represents all of the Native American Nations in her phenomenal performance and dignity,” Schoonmaker declares. “I would say that the death of the mother and the ancestors coming to take her away is one of the most beautiful things that Marty has ever done. It’s so simple.” Helman points out that the general complaint about visual effects does not apply to Killers of the Flower Moon. “This is using visual effects as a tool to tell the story in a completely invisible way. The whole movie is a piece of art. There are no compromises.” McLaughlin was pleased with the end result. “I thought Marty did a great job. I hope that people walk away from it going, ‘Oh my god! I can’t believe that this truly happened.’” Whenever Scorsese was in doubt, he went back to the relationship between Mollie and Ernest and stayed with them as long as possible. McLaughlin notes, “I love the scene at the table at the beginning when Leo and Lily have their first dinner together, which ends with the rainstorm. There is something in their faces, an electricity and warmth at the same time that is so sweet and moving. And then her teaching him how to sit still. ‘Just sit still and let the power of Wah-Kon-Tah’s storm float over us.’ That’s like saying, ‘Let’s live here in life.’”
The visual effects were instrumental in transporting viewers to Oklahoma of the early 1920s.
A favorite moment of Martin Scorsese’s occurs at the dinner table when Mollie teaches Ernest to be quiet and still during a thunderstorm.
Killers of the Flower Moon brings together the two major muses for Martin Scorsese, Robert De Niro as William Hale and Leonardo DiCaprio as Ernest Burkhart. (Photo: Melinda Sue Gordon)
Jesse Plemons portrays Tom White, who leads the Bureau of Investigation in its effort to solve the Osage murders, which were orchestrated by cattleman William Hale played by Robert De Niro.
By TREVOR HOGG
Images courtesy of Hulu and BlueBolt.
Questions answered during the look development of the icy lake were about how atmospheric the environment should be and what type of clouds should be in the sky.
Naughty behavior has never been as much fun to watch as in the Hulu historical satire The Great, which plays fast and loose with facts surrounding the marriage of Catherine the Great (Elle Fanning) and her attempts to kill her husband Emperor Peter III of Russia (Nicholas Hoult). The series created by Tony McNamara has completed its third season of 10 episodes, with BlueBolt handling visual effects tasks ranging from CG butterflies to a frozen lake. There was an opportunity to be creative beyond being historically accurate given the irreverent attitude of the subject matter. “We had a freedom to push things more in terms of the amount of blood, which Tony McNamara reacts to quite well,” states David Scott, VFX Supervisor at BlueBolt. “However, it was important to Francesca Di Mottola, the Production Designer, that the architecture be of the period.”
“We took that photo scan without intending that Peter [Emperor Peter III of Russia] would be a CG character. The original idea for the shot was altered as the edit changed. How quickly he goes under was changing, so that forced us to go down a route of doing a CG horse. When Peter is underwater, we added a CG horse behind him. That went through a few rounds because it never felt right, and we didn’t want the horse to compete with Peter.”
—David Scott, VFX Supervisor, BlueBolt
The major new environment that BlueBolt was responsible for on Season 3 of The Great was the icy lake.
BlueBolt has been involved with The Great since the beginning. “There was definitely a carry-over effect from the previous seasons, like all of the backgrounds of Caserta Palace are a CG asset from Season 1, and the CG butterflies as well,” Scott notes. “They’re not allowed to fire the guns on set anymore, so there are no muzzle flashes. We have to add that in after the fact. The muzzle flashes are from Season 2. Then there is the new stuff like the icy lake environment.” Fewer CG butterflies had to be created compared to previous seasons. “I was trying to work out how to make the animation of the butterflies feel realistic because of the fluttering. This season the butterflies were all contained within the cages so they weren’t flying around.”
A hard effect to do was the comet. “All of the reference of comets are at night, so you have this nice contrast of a dark sky and a bright comet,” Scott remarks. “This one was challenging because it was at daytime with a blue sky. It would be impossible to see a comet. That took a lot of look development in terms of how subtle or over the top the comet should be. Because of the look of The Great, I knew that we could go bigger than what would be photographically real, so we pushed it a bit more. We went down a DMP route for it and had an initial drawing of what we wanted the comet to be. You saw fragments breaking off of the back of it. Then we took it into Nuke and compositing where it was a case of dragging it out and warping it. That gave us time to turn around versions quickly and the flexibility to be able to change the timing, positions and size of the comet.”
The underwater scene was shot at the water tank situated at Pinewood Studios.
The bulk of the 369 visual effects shots by BlueBolt centered around creating an icy lake. “Because the time period was so compressed, we had to get versions out quickly to Tony to try to gauge his feeling on the overall mood of the icy lake,” Scott explains. “How atmospheric it was and what type of clouds? That slowly developed as we went through the process and into the DI where they were grading it darker and darker. We don’t have time for a long process of offering up little updates, so we have to show distinct and extreme versions to get to where we want to quickly and then bring it back.”
“We thought that the ice would be two or three feet thick and built cracks within it. When we started looking at reference, there were always these fissure lines going through the ice. We want to build that into the ice as well, so when we’re tracking over the ice you get this nice parallax that enables you to feel the depth. We modeled that and put an ice shader on it, which gave us nice traction that goes through the ice.”
—David Scott, VFX Supervisor, BlueBolt
CG trees were created for the lake environment.
A photo scan was taken of the horse ridden by Peter as protection reference. Scott notes, “We took that photo scan without intending that Peter would be a CG character. The original idea for the shot was altered as the edit changed. How quickly he goes under was changing, so that forced us to go down a route of doing a CG horse. When Peter is underwater, we added a CG horse behind him. That went through a few rounds because it never felt right, and we didn’t want the horse to compete with Peter.”
A photo scan was taken of the horse that was utilized to produce a digital double for when it falls underneath the water.
“The [water] tank [at Pinewood Studios] wasn’t deep enough for Peter to go straight down. What we did was to go a few meters down and then drag him across the tank which was wider than it was deep. We then flipped him vertically with the effects. Nicholas Hoult (Peter) always kept his eyes open, which fit the moment much better.”
—David Scott, VFX Supervisor, BlueBolt
A patina was given to the ice to increase the believability that a horse could cross an icy lake without slipping.
Part of the look development was deciding upon the appearance of the ice. “We went for more of a patina ice that has bits of frost on it, and is more opaque and milkier, so it feels like there is more grip to it for the horse to ride across without questioning whether it should be slipping,” Scott states. “We thought that the ice would be two or three feet thick and built cracks within it. When we started looking at reference, there were always these fissure lines going through the ice. We wanted to build that into the ice as well, so when we’re tracking over the ice you get this nice parallax that enables you to feel the depth. We modeled that and put an ice shader on it, which gave us nice traction that goes through the ice.”
When looking at reference material, BlueBolt discovered that the fissure lines running through the ice were a distinguishing trait.
“Just before he falls through is all real. It cuts to a close-up of Catherine [the Great]. Then it cuts back to Peter who turns around and falls through the ice. That whole moment was fully CG, so we didn’t have to do a split between live-action and CG. Because of the hard work of the artists involved, we managed to build high-quality digital doubles for Peter and the horse.”
—David Scott, VFX Supervisor, BlueBolt
For the underwater portion, footage was shot at the water tank situated at Pinewood Studios. “The tank wasn’t deep enough for Peter to go straight down,” Scott reveals. “What we did was to go a few meters down and then drag him across the tank which was wider than it was deep. We then flipped him vertically with the effects. Nicholas Hoult always kept his eyes open, which fit the moment much better.” The digital double of Peter was utilized for the scene where the ice breaks and he falls into the water. “Just before he falls through is all real. It cuts to a close-up of Catherine. Then it cuts back to Peter who turns around and falls through the ice. That whole moment was fully CG, so we didn’t have to do a split between live-action and CG. Because of the hard work of the artists involved, we managed to build high-quality digital doubles for Peter and the horse.”
By CHRIS McKITTRICK
Images courtesy of ESPN Creative Studio and Disney/Pixar.
“Toy” football players stood in for the real-life Atlanta Falcons and Jacksonville Jaguars players.
“Calvin Ridley said, ‘Trevor Lawrence, you’re my favorite deputy!’” For viewers of the Sunday, October 1, 2023 Atlanta Falcons and Jacksonville Jaguars National Football League game on the Disney+ and ESPN+ streaming services, that unlikely play-by-play call by Drew Carter made perfect sense in the context of the unique presentation of “Toy Story Funday Football,” a first-of-its-kind NFL game presentation that offered a fully animated live simulcast of the football game set in Pixar’s Toy Story Universe.
The broadcast is the latest innovative presentation in ESPN’s long-running “MegaCast” alternate presentations of live sports, an initiative the network began nearly 30 years ago when ESPN2 broadcast an in-car feed of the 1994 IndyCar Bosch Spark Plug Grand Prix as an alternative live presentation of the event for viewers. The presentations are bolstered by the work of ESPN Creative Studio, a division of ESPN’s production team that focuses on developing motion graphics and other assets to enhance live sports presentations on the ESPN family of networks, including the ESPN+ streaming service. In recent years, ESPN Creative Studio has explored dynamic and unique ways to present live sports to help inform and entertain audiences.
“When somebody suggested Toy Story, it was like boom, home run. You could just tell from the collective energy in the room.”
—Michael “Spike” Syzkowny, Senior Director of Animation, Graphics Innovation & Production Design, ESPN Creative Studio
“Toy Story Funday Football” builds on the previous success that ESPN Creative Studio had with another live sports simulcast presentation with Disney characters. In March 2023, ESPN presented the “NHL Big City Greens Classic,” a simulcast of a National Hockey League Washington Capitals and New York Rangers hockey game on Disney Channel, Disney XD, Disney+ and ESPN+ that animated the players in the style of the hit Disney Channel animated series Big City Greens. ESPN Creative Studio moved the action of the game from New York’s Madison Square Garden to an animated rink in the middle of the show’s city setting.
ESPN Creative Studio added acceleration effects for running plays.
The “NHL Big City Greens Classic” broadcast was made possible in part by Beyond Sports, a Dutch data visualization technology company that was acquired by Sony in July 2022. Beyond Sports’s technology has utilized player tracking data to create real-time alternate-reality presentations of sports games, including select NFL games that have been simulcast on the youth-oriented television network Nickelodeon beginning with an NFC Wild Card playoff game between the Chicago Bears and the New Orleans Saints in January 2021, which at points featured players, announcers and the crowd animated in block graphics (i.e., “blocky style”) similar to the style of popular video games like Minecraft.
The tracking data for the “Toy Story Funday Football” broadcast is provided by NFL’s Next Gen Stats, which captures the real-time location data, speed and acceleration for every player during an NFL game by utilizing sensors throughout the stadium that track radio-frequency identification (RFID) tags installed into each player’s shoulder pads as well as on the game’s officials, pylons, sticks, chains and even in the game ball. Since first installing RFID tags in players’ shoulder pads in 2014, Next Gen Stats has provided NFL teams with invaluable metrics on player performance with every single play of a game. From 2018 on, the Next Gen Stats data for each team has been shared league-wide to help support several NFL initiatives, including health and safety protocols.
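To make the numbers concrete – and this is purely an illustrative sketch, not the actual Next Gen Stats schema or API; the tuple layout and 10 Hz sample rate here are assumptions – per-player speed can be derived from the timestamped positions the tags report:

```python
# A minimal sketch (not the real Next Gen Stats API): deriving a player's
# speed from timestamped (x, y) field positions, the kind of samples an RFID
# tag in a shoulder pad yields several times per second.
from math import hypot

def speed_trace(samples):
    """samples: time-ordered list of (t_seconds, x_yards, y_yards) tuples.
    Returns finite-difference speeds in yards per second."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        speeds.append(hypot(x1 - x0, y1 - y0) / dt if dt > 0 else 0.0)
    return speeds

# Example: a receiver accelerating over half a second of 10 Hz samples.
trace = [(0.0, 20.0, 10.0), (0.1, 20.3, 10.0), (0.2, 20.8, 10.1),
         (0.3, 21.5, 10.2), (0.4, 22.4, 10.3), (0.5, 23.4, 10.5)]
print(speed_trace(trace))  # roughly 3 -> 10 yards per second
```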
“The trick for us was maintaining our rules for the Toy Story world, i.e., the toys can’t be alive when humans are around; they have to play dead. So, the football players had to be toys as well. We had to start with being in Andy’s Room and Andy is playing a mock football game with all his toys. We had to have that conceit before moving forward. Once we had that figured out, they wanted the maximum Toy Story involvement they can have, but we also wanted to make sure that it’s a football game you’re watching and not the Toy Story characters playing football.”
—Jay Ward, Creative Director of Franchise, Pixar
The practical purpose of collecting game tracking data also allows for unique presentations like “Toy Story Funday Football.” The primary motivation behind these alternate simulcasts is to bring new audiences to the broadcast. In that respect, the experiments have been an overwhelming success. After the airing of the “NHL Big City Greens Classic,” ESPN reported that the game drew a much younger audience than a typical NHL broadcast, with viewer median ages of 12 (on Disney XD) and 14 (on Disney Channel) – decades younger than the median age for general NHL game viewership on television (approximately 52 years old). It also drew a female-majority audience (58.7%) in contrast to the estimated 37% female viewership of typical NHL games.
“Audience expansion is a big thing for us with viewers changing habits,” remarks Michael “Spike” Syzkowny, Senior Director of Animation, Graphics Innovation & Production Design at ESPN Creative Studio, who has spent more than two decades with ESPN and has won 10 Sports Emmy Awards for his production work on their broadcasts. “The MegaCast is what kicked it off for us – the granddaddy of them all. When we did ‘Big City Greens,’ that was really successful. I go back and I watch that, and I think we nailed it there. It felt really good and got a lot of positive publicity. The other leagues looked around and said, ‘Hey, is there something we can do like that with ESPN?’”
After the success of the ‘Big City Greens’ broadcast, ESPN Creative Studio and the NFL discussed how they could do a similar presentation for an NFL game. “We thought, how can we build upon what we did?” Syzkowny notes. “Because this technology is never easy. Everything that goes into this is very innovative. You’re breaking new ground, and when you’re doing that you’re solving for a lot of challenges.”
The “field” in Andy’s Room was captured from several camera angles.
In determining the right IP to use for the network’s first foray into a real-time alternate presentation of an NFL game, ESPN had the advantage of being a subsidiary of the Walt Disney Company. Because of that arrangement, ESPN Creative Studio had potential access to decades of the corporation’s beloved characters. However, the team knew it had hit on the right concept with Pixar’s Toy Story. “When somebody suggested Toy Story, it was like boom, home run,” Syzkowny recalls. “You could just tell from the collective energy in the room.” In developing the idea, ESPN Creative Studio landed on the conceit that the game would take place on the floor of Andy’s Room, the primary setting of the first three Toy Story movies and the world in which fan-favorite characters like Woody, Buzz Lightyear, Rex, Bo Peep and Hamm have had their most memorable adventures.
Though ESPN and Pixar share the same corporate parent in Disney, ESPN Creative Studio still needed to pitch the idea to the groundbreaking animation studio to demonstrate that the broadcast would not only be viable but would also adhere to Pixar’s high standards of animation and storytelling quality that have helped make Toy Story a beloved franchise that has entertained audiences worldwide for nearly 30 years – across feature films, shorts, theme park attractions and merchandising.
To demonstrate proof of concept, ESPN Creative Studio developed a mock-up of what a toy football game would look like on the floor of Andy’s Room. The mock-up was presented to the team at Pixar, including Jay Ward, Creative Director of Franchise at Pixar. “The level of quality without our input already looked really good,” Ward shares. “They pulled off quite a bit with just this test that they did, so it was a good starting point.”
Ward’s approval of the project was essential for it to move forward. In his role at Pixar, Ward is responsible for overseeing how the company’s intellectual property is utilized in outside media. He began his career at Pixar in production working in the art department on films like Monsters, Inc. and Cars. Following the explosive popularity of Cars, Ward was assigned to creatively oversee Cars as a franchise as the property expanded into shorts, theme park attractions and other media. His role later expanded to oversee all franchises at Pixar.
“Setting up the studio and making sure that everything works on the spot when you bring up the system – you’re combining that in real-time. You can’t say, ‘Well, the first quarter didn’t go well, let’s go back and re-do it.’ It has to happen. We do rehearsals because the director cutting the game does not see real cameras.”
—Michael “Spike” Syzkowny, Senior Director of Animation, Graphics Innovation & Production Design, ESPN Creative Studio
For Ward and Pixar, it was important that the project not only adhered to Pixar’s expectations and standards for animation but also its parameters for the “world” of the Toy Story characters. “Authenticity is a big word to us, making sure things are authentic to those worlds,” Ward says. “Keeping the storytelling intact – what would Woody or Buzz do or not do during a football game in Andy’s imagination – what rules do we set for that?”
Similarly, both ESPN Creative Studio and the NFL had high expectations for the presentation of the football game as well, as the concept would inevitably fall apart if the animated presentation did not allow for fans to follow the game’s action. “Of course, everyone is very protective of their IP, as they should be,” Syzkowny points out. “The NFL wants us to present an authentic game that is still fun and speaks to what we’re trying to do, but it still needs to have a certain standard. And it’s hard to find anyone who doesn’t love Toy Story, so we have to be respectful of that. With our talent, we have to make sure they are comfortable with calling the game as animated characters, which doesn’t happen every day. Combining those things is the challenge and being respectful of everybody’s positions amongst those things.”
“The trick for us was maintaining our rules for the Toy Story world, i.e., the toys can’t be alive when humans are around; they have to play dead,” Ward adds. “So, the football players had to be toys as well. We had to start with being in Andy’s Room and Andy is playing a mock football game with all his toys. We had to have that conceit before moving forward. Once we had that figured out, they wanted the maximum Toy Story involvement they can have, but we also wanted to make sure that it’s a football game you’re watching and not the Toy Story characters playing football.”
With the Atlanta Falcons and Jacksonville Jaguars players in formation, fan-favorite Toy Story characters can be seen watching the action.
The NFL and ESPN selected the October 1 game between the Atlanta Falcons and Jacksonville Jaguars – one of three 2023 NFL International Series games scheduled for Wembley Stadium in London – as an ideal fit for the “Toy Story Funday Football” broadcast. Because the game was played in the U.K., it fell at the family-friendly time of Sunday morning across the U.S. Once approved, ESPN Creative Studio worked on the production of the broadcast for approximately three months, right up through game time.
Much of that work focused on preparing the technology to ensure a smooth broadcast. Brooklyn, New York-based Silver Spoon Animation assisted ESPN with the mocap of the announcers. “Setting up the studio and making sure that everything works on the spot when you bring up the system – you’re combining that in real-time,” Syzkowny says. “You can’t say, ‘Well, the first quarter didn’t go well, let’s go back and re-do it.’ It has to happen. We do rehearsals because the director cutting the game does not see real cameras.”
Pre-production work also ensures that the player animations will look fluid no matter how the athletes move on the real-life field. To assist with the animation, Pixar provided ESPN Creative Studio with Toy Story assets, models and “toolkit animations,” which Ward describes as “pre-done animation that is usually used for promotional purposes like TV bumpers and trailers.”
“A couple of pieces on the data visualization side were exciting challenges we undertook. The first, combining active tracking data and optical tracking data was unique for this experience. This was the first time anyone has combined the two sources of data to output in a virtual recreation in near real-time (less than 0.5s). The second was seamlessly blending between single-point and limb tracking-based animations.”
—Sander Schouten, CEO and Co-Founder, Beyond Sports
“Player tracking is always a challenge to make the players’ movements as realistic as possible,” Syzkowny explains. “Working with Pixar’s IP, they want their characters to move a certain way and act a certain way. While they gave us source material and a toolkit to work with, sometimes that’s harder because you have to match what their stuff does.”
Ensuring the proper movement for the characters posed new tests for Beyond Sports. “A couple of pieces on the data visualization side were exciting challenges we undertook,” notes Sander Schouten, CEO and Co-Founder of Beyond Sports. “The first, combining active tracking data and optical tracking data was unique for this experience. This was the first time anyone has combined the two sources of data to output in a virtual recreation in near real-time (less than 0.5s). The second was seamlessly blending between single-point and limb tracking-based animations.”
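To picture what that blending might involve – this is a hedged, simplified sketch and not Beyond Sports’ actual code; the joint names, confidence value and ramp rate are invented for illustration – the core idea is to ease a character between a canned cycle driven by a single tracked point and a pose solved from optical limb tracking, so the handoff never pops on screen:

```python
# Illustrative only: blend a character pose between a canned run-cycle pose
# (driven by the player's single tracked point) and a pose reconstructed from
# optical limb tracking, easing the blend weight so the switch never pops.
def lerp(a, b, w):
    return a + (b - a) * w

def blend_pose(cycle_pose, limb_pose, limb_confidence, prev_weight, max_step=0.1):
    """cycle_pose / limb_pose: dicts of joint name -> rotation (degrees, toy 1-DOF).
    limb_confidence: 0..1 quality of the optical solve for this frame.
    prev_weight: blend weight used on the previous frame.
    Returns (blended_pose, new_weight)."""
    target = limb_confidence            # follow limb tracking when it is good
    step = max(-max_step, min(max_step, target - prev_weight))
    weight = prev_weight + step         # ramp toward the target, never jump
    blended = {j: lerp(cycle_pose[j], limb_pose.get(j, cycle_pose[j]), weight)
               for j in cycle_pose}
    return blended, weight

# Example frame: elbow angle from the run cycle vs. the optical solve.
pose, w = blend_pose({"l_elbow": 35.0}, {"l_elbow": 80.0},
                     limb_confidence=0.9, prev_weight=0.2)
print(pose, w)  # elbow nudged toward the tracked value, weight now 0.3
```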
Syzkowny also notes that Beyond Sports’ player tracking technology has existed for several years now, but the challenge in utilizing it is determining how best to apply it to storytelling. “I think the key to this is how do you do it in a way that still connects with a broad audience of people,” Syzkowny says. “You have to make sure that it comes down to the storytelling aspect and how you make it feel like something new. A lot of what we did with ‘Big City Greens’ as well as ‘Toy Story’ is cutting-edge technology, but it’s all been around there for a while. How are we combining it to make it into something that feels new and unique?”
The drive to make “Toy Story Funday Football” feel new and unique led to advancements on the part of Beyond Sports to bring new visual elements to the presentation of NFL games. “Beyond Sports’s data visualization technology has been continuously evolving since the development of our blocky characters, and can now produce ‘blocky,’ humanoid and stylized IP-based animated characters, as done for the ‘Big City Greens’ and ‘Toy Story’ alternative casts,” according to Schouten. “Regarding the production itself, the technology’s ability to provide unique perspectives or viewpoints of the game (i.e., first-person view, or in the ‘Toy Story’ case, a ‘helmet cam’), and to integrate more directly with sport-specific elements, such as sideline triggered events, officiating triggered events, etc., are additional advancements Beyond Sports have worked on.”
ESPN Creative Studio rendered the “stadium” for “Toy Story Funday Football,” placing the action on the floor of Andy’s Room, the setting of the first three Toy Story movies.
A highlight of the production for ESPN Creative Studio was developing a special halftime show featuring a motorcycle jump by daredevil Duke Caboom, a fan-favorite character introduced in 2019’s Toy Story 4. “The Duke Caboom Daredevil Spectacular came out of us sitting around thinking, ‘Okay, we have halftime – think about the old Evel Knievel videos or Travis Pastrana for the next generation,” Syzkowny explains. “We have this character that loves to jump stuff. Why don’t we put on his own halftime jump in Andy’s room?’”
Like the rest of the broadcast, the halftime sequence was entirely animated by ESPN Creative Studio with a Pixar animator advising ESPN Creative Studio. “When they did the initial Duke Caboom jump, we took it and showed it to a Pixar animator and asked him for notes,” Ward says. “He gave that to them, and they were really good about collaborating and addressing those notes, and it got to a really good place.”
The production of the halftime show vignette, which almost serves as a new Toy Story short in itself, is just one example of the close collaboration between ESPN Creative Studio and Pixar for the broadcast. “We set the parameters and guardrails, and we said, ‘Come back and show it to us,’” Ward notes. “Pixar doesn’t expect things to be perfect. We’d rather see it early so we can give you notes, affect those things and have you make the best thing possible than for them to wait until the last minute because it wasn’t good enough yet. That’s not really collaboration. We really do prefer a collaborative culture because that’s how we work and that’s how they work, which has been really nice.”
ESPN Creative Studio is constantly working on the production of upcoming broadcasts throughout the year, and the elements for those various broadcasts often require a versatile approach. “We use all the tools in our toolbox – we use Unreal, Octane, Cinema 4D,” Syzkowny says. “Whatever works for whatever piece we need to do, that’s what we’re going to apply to it just to make it as best as it can be.”
The experience working with ESPN Creative Studio on the project was also enlightening for the team at Pixar. “ESPN was incredibly fast at this, I think because they do television,” Ward notes. “Every day they are throwing stuff up in real-time, while we’re very slow. I think the thing for me is, could we work quicker? Is there something we could learn? We have a very different business model because we make films one movie at a time and it takes four to six years to make one film, and it takes them four to six days to make one sequence. Maybe there’s something to learn there. I think the bigger excitement for me is that we are maybe reaching people who otherwise are not football fans that will tune in to watch it and learn about the game, and hopefully for the NFL they get some football fans of theirs that didn’t know much about animation who now will want to see some Pixar films. Who knows? It’s been a really great collaboration. It’s been a lot of fun.”
Watch highlights of the October 1 broadcast of “Toy Story Funday Football,” an NFL game animated in real-time and set in Andy’s Room of Toy Story. Click here: https://www.youtube.com/watch?v=-02LiBdS1BQ
By CHRIS McGOWAN
Wētā adapted new deep learning methodologies and utilized neural networks for Avatar: The Way of Water. (Images courtesy of 20th Century Studios)
Crafty Apes’s AI division came into existence during the pandemic. Company Co-Founder Chris LeDoux recalls, “It all started during COVID with me watching YouTube videos from creators like Bycloud and Two Minute Papers. Then our VFX Supervisor and resident mad scientist Aldo Ruggiero began to show me a number of incredible things he was using AI for on the film he was supervising.” It became clear to LeDoux “that AI was going to shake up our industry in a massive way.” He explains, “Developments in AI/ML seemed like they would create a fundamental shift in how we approached and solved problems as it relates to shot creation and augmentation. I knew we had to make it a top priority.” Since then, Crafty Apes has applied AI to a wide range of VFX projects, reflecting an accelerating implementation of AI technology by the visual effects industry.
LeDoux comments, “I can tell you that we have leveraged machine learning [ML] for tasks like deepfake creations, de-aging effects, facial manipulation, rotoscoping, image and video processing and style transfer, and the list continues to grow.” He notes that once AI tools are integrated into the pipeline, they “speed up the workflow drastically, lower the costs of VFX significantly, and allow the artists to put more time into creativity.”
Machine learning helped Digital Domain meet its deadlines for She-Hulk: Attorney at Law. (Images courtesy of Marvel Studios)
Regarding the teaming up of AI with VFX, “the first challenge is really managing expectations, in both directions,” says Hanno Basse, Chief Technology Officer of Digital Domain. He adds, “We shouldn’t overestimate what AI will be able to do, and there is a lot of hype out there now. At the same time, it will have a significant and immediate impact on all aspects of content creation, and we need to recognize the consequences of that.”
Digital Domain
The industry is looking at “many concepts and implementations for AI and ML that are very promising, and [is] using some of them already today,” according to Basse. Digital Domain has utilized machine learning on high-profile movies such as Avengers: Infinity War and Avengers: Endgame, and the She-Hulk: Attorney at Law limited series for Disney+. It also created a 3D “visual simulation” of famed NFL coach Vince Lombardi – with the help of Charlatan, Digital Domain’s machine learning neural rendering software – for the February 2021 Super Bowl.
Machine learning helped Digital Domain meet its deadlines for She-Hulk: Attorney at Law. (Images courtesy of Marvel Studios)
Rising Sun Pictures’ machine learning incorporated data from a “learned library of reference material” to help create Baby Thor for Thor: Love and Thunder. An early adopter of AI, RSP used machine learning to give the baby an uncannily lifelike quality while exhibiting behaviors required by the script. (Images courtesy of Marvel Studios)
“AI, and especially its close cousin, machine learning, have been in our toolbox for five years or so. We use it on things like facial animation, face-swapping, cloth simulation and other applications,” Basse says. “Our work on She-Hulk last year made extensive use of this technology. In fact, we don’t believe we could have delivered that many shots without it given the time and resources we had to work on this project. [We also did] some fantastic work with cloth simulation on Blue Beetle. We’re basically using this technology now on virtually any show we get to work on.”
Prior to that, the digital creation of the character Thanos’ face in Avengers: Infinity War was Digital Domain’s first major application of machine learning and utilized the Masquerade facial-capture system. Avengers: Endgame followed close on its heels. “Since then, DD has done a lot more work with this technology,” Basse remarks. “For example, we created an older version of David Beckham for his ‘Malaria Must Die – So Millions Can Live’ campaign and used our ML-based face-swapping technology Charlatan to bring deceased Taiwanese singer Teresa Teng back to life, virtually.”
Basse adds, “In general, machine learning has proven very useful to help create more photorealistic and accurate results. But it’s really the interplay of AI and the craft of our artists – which they acquired over decades, in many cases – that enables us to create believable results.”
Wētā FX
“We have been working with various ML tools and basic AI models for a long time,” says Wētā FX Senior Visual Effects Supervisor Joe Letteri. In fact, Massive software, employed all the way back on The Lord of the Rings, uses primitive fuzzy logic AI to drive its agents. Letteri notes, “Machine learning has also been prevalent in rendering for de-noising for years across the industry. For Gemini Man we used a deep learning solver to help us achieve greater consistency with the muscle activations in our facial system. It helped us streamline the combinations that were involved in complex movements across the face to build a more predictable result.”
Wētā changed its facial animation system for Avatar 2 and adapted new deep learning methodologies. Letteri says, “Our FACS-based facial animation system yielded great results, but we felt we could do better. As our animators and facial modelers got better, we needed increasingly more flexible and complex systems to accommodate their work. So, we took a neural network approach that allowed us to leverage more of what the actor was doing and hide away some of the complexity from the artists while giving them more control. We were also able to get more complex secondary muscle activations right from the start, so the face was working as a complete system, within a given manifold space, much like the human face.”
Letteri and his crew created another neural network to do real-time depth compositing during live-action filming. He explains, “During that setup process, we utilized rendered images to train the deep learning model in addition to photographed elements. This allowed us to gather more reference of different variations and positions than we could feasibly get on set. We could train the system to understand a given set environment and the placement of characters in nearly every position on the set in a wide range of poses – something that would be impractical to do with actors on a working film set.”
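As a rough illustration of the compositing step such a depth estimate feeds – a minimal numpy sketch, not Wētā’s system, and ignoring the soft edges and holdout mattes a production comp would need – each pixel simply shows whichever element sits closer to camera:

```python
# Toy depth-compositing sketch: given a per-pixel depth estimate for the
# live-action plate and the depth of a CG element, keep whichever surface is
# nearer the camera, so CG can pass both in front of and behind actors.
import numpy as np

def depth_composite(plate_rgb, plate_depth, cg_rgb, cg_depth):
    """plate_rgb/cg_rgb: HxWx3 arrays; plate_depth/cg_depth: HxW distances.
    Returns the per-pixel nearest-surface composite."""
    cg_wins = (cg_depth < plate_depth)[..., None]   # HxWx1 boolean mask
    return np.where(cg_wins, cg_rgb, plate_rgb)

# Tiny 1x2 example: the CG element is in front on the left pixel and
# behind the actor on the right pixel.
plate   = np.array([[[0.8, 0.7, 0.6], [0.8, 0.7, 0.6]]])
plate_d = np.array([[2.0, 1.0]])
cg      = np.array([[[0.1, 0.3, 0.9], [0.1, 0.3, 0.9]]])
cg_d    = np.array([[1.5, 1.5]])
print(depth_composite(plate, plate_d, cg, cg_d))
# left pixel takes the CG color, right pixel keeps the plate
```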
Comments Letteri, “VFX pipelines are always evolving, sometimes driven by hardware or software advancements, sometimes through new and innovative techniques. There is no reason to think that we won’t find new ways to deploy AI-enhanced workflows within VFX. Giving artists ways to rapidly iterate and explore many simultaneous outcomes at the same time can be enormously powerful. It also has great potential as a QC or consistency tool, the way many artists are using it now.”
Autodesk
“AI has the potential to be revolutionary for VFX as artists design and make the future,” says Ben Fischler, Director of Product Management, Content Creation at Autodesk. “The Internet took shape over many years, and it took time for it to become a part of our daily lives, and it will be similar with AI. For the visual effects industry, it’s all about integrating it into workflows to make them better. It won’t be an immediate flip of the switch, and while certain areas will be rapid, others will take longer.”
It has been more than two years since Autodesk embraced AI tools in Flame. “Flame puts a sprinkle of AI into an artist’s workflow and supercharges it dramatically. Things like rotoscoping, wire removal and complex face matte creation are processes that go back to the origins of visual effects when we did things optically, not digitally, and they’re still labor intensive. These are the processes where a little AI in the right places goes a long way,” Fischler explains. “In the case of Flame, we can take a process that had an artist grinding away for hours and turn it into a 20-minute process.”
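For a sense of what “a little AI in the right places” can mean in practice – the following is a generic sketch using an off-the-shelf torchvision segmentation model, not Flame’s actual implementation – a soft “person” matte can be pulled from a frame in a few lines and then handed to an artist to refine:

```python
# Generic ML-assisted roto sketch (not Autodesk Flame's code): run an
# off-the-shelf segmentation network on a frame and keep the "person"
# probability channel as a soft matte.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights
from torchvision.transforms.functional import to_tensor, normalize

PERSON = 15  # "person" class index in the PASCAL VOC label set

def person_matte(frame_rgb):
    """frame_rgb: HxWx3 uint8 numpy array. Returns an HxW float matte in 0..1.
    (Loading weights per call keeps the sketch short; a pipeline would cache.)"""
    model = deeplabv3_resnet50(weights=DeepLabV3_ResNet50_Weights.DEFAULT).eval()
    x = normalize(to_tensor(frame_rgb),
                  mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    with torch.no_grad():
        logits = model(x.unsqueeze(0))["out"][0]        # (classes, H, W)
    return torch.softmax(logits, dim=0)[PERSON].numpy() # soft person matte
```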
Autodesk recently launched a private beta of Maya Assist in collaboration with Microsoft. “It was developed for users new to Maya and 3D animation and uses voice prompts via ChatGPT to interface with Maya,” Fischler says.
Data on a real baby was collected from the grandson of a former Disney executive in order to create the digital Baby Thor in Thor: Love and Thunder. (Images courtesy of Marvel Studios)
Soccer icon David Beckham participated in the ‘Malaria Must Die – So Millions Can Live’ campaign. His face was aged into his 70s by Digital Domain’s Charlatan technology. The video was produced by the Ridley Scott Creative Group Amsterdam. (Images courtesy of Digital Domain)
Wētā utilized a “deep learning solver” to achieve greater consistency with the muscle activations in the facial system for Gemini Man. (Images courtesy of Wētā FX and Paramount Pictures)
Rising Sun Pictures
Some five years ago, RSP began collaborating with the Australian Institute for Machine Learning (AIML), which is associated with the University of Adelaide, on ways to incorporate emerging technologies into its visual effects pipeline. AIML post-doctoral researchers John Bastian and Ben Ward saw the potential for AI in filmmaking and joined RSP; they now lead its AI development team with Troy Tobin.
One of the multiple projects that benefited from their work was Marvel’s Thor: Love and Thunder, in which RSP applied data collected from a human baby (the grandson of former Disney CEO Bob Chapek) to a CG infant. Working in tandem with the film’s production team, they were able to “direct” their digital baby to perform specific gestures and exhibit emotions required by the script. According to the Senior VFX Producer on the film, Ian Cope, quoted on the RSP website, “The advantage of this technique over standard ‘deep fake’ methods is that the performance derives from animation enhanced by a learned library of reference material.” The look was honed over many iterations to achieve a digital baby that would seem real to audiences.
“The work we’re doing is not just machine learning,” adds Ward, now Senior Machine Learning Developer at RSP. “Our developers are also responsible for integrating our tools into the pipeline used by the artists. That means production tracking and asset management and providing artists with the control they need from a creative point of view.”
Working together across several projects, the AI and compositing teams have grown in their mutual understanding. Having explored this space early, “we’ve learned a lot about how the two worlds collide and how we can utilize their [AI] tools in our production environment,” observes RSP Lead Compositor Robert Beveridge. “The collaboration has improved with each project, and that’s helped us to one-up what we’ve done before. The quality of the work keeps getting better.”
Jellyfish Pictures
“AI and ML offer exciting opportunities to our workflows, and we are exploring how to best implement them,” says Paul J. Baaske, Jellyfish Pictures Head of Technical Direction. “For example, how we can leverage AI to create cloth and muscle simulation with higher fidelity. This is a really intriguing avenue for us. Other areas are in imaging – from better denoising, roto-masks, to creating textures quicker. But some of the greatest gains we see [are] in areas like data or library management.”
Baaske adds, “The key moving forward will be for studios to look at their output and data through the lens of ‘how can we learn and develop our internal models further?’ Having historic data available for training and cleverly deploying it to gain competitive advantage can help make a difference and empower artists to focus more on the creative than waiting for long calculations.”
Vicon
“One of the most significant ways I think AI is going to impact motion capture is the extent to which it’s going to broaden both its application and the core user base,” comments David “Ed” Edwards, VFX Product Manager for Vicon. “Every approach to motion capture has its respective strengths and shortcomings. What we’ve seen from VFX to date – and certainly during the proliferation of virtual production – is that technical accessibility and suitability to collaboration are driving forces in adoption. AI solutions are showing a great deal of promise in this respect.”
Edwards adds, “The demands and expectations of modern audiences mean content needs to be produced faster than ever and to a consistently high standard. As AI is fast becoming ubiquitous across numerous applications, workflows and pipelines, it’s already making a strong case for itself as a unifier, as much as an effective tool in its own right.”
Studio Lab/Dimension 5
“I think with the use of AI we will see many processes be streamlined, which will allow us to see multiple variations of a unique look,” says Ian Messina, Director of Virtual Production at Studio Lab and owner of real-time production company Dimension 5. Wesley Messina, Dimension 5 Director of Generative AI, says, “Some trailblazers, like Wonder.ai, are pushing the boundaries of technology by developing tools that can turn any actor into a digital character using just video footage. This gets rid of the need for heavy motion-tracking suits and paints a promising picture of what’s to come in animation.”
Wesley Messina adds, “As the technology becomes more widely available, we can expect to see AI tools being used by more and more creators. This will change the way we make movies and other visual content, bringing stories to life in ways we’ve never seen before.”
Digital Domain created the digital face of Thanos in Avengers: Infinity War with the Masquerade system and machine learning, and then worked their magic again in Avengers: Endgame. (Images courtesy of Marvel Studios)
Digital Domain used their Charlatan technology and machine learning to create a CGI likeness of the late Taiwanese singer Teresa Teng for a virtual concert that mesmerized fans. (Images courtesy of Digital Domain, Prism Entertainment and the Teresa Teng Foundation)
“The Internet took shape over many years, and it took time for it to become a part of our daily lives, and it will be similar with AI. For the visual effects industry, it’s all about integrating it into workflows to make them better. It won’t be an immediate flip of the switch, and while certain areas will be rapid, others will take longer.”
—Ben Fischler, Director of Product Management, Content Creation, Autodesk
Autodesk’s Maya Assist has a ChatGPT assistant. (Image courtesy of Autodesk)
Autodesk Flame software offers the ability to extract mattes of the human body, head and face with AI-powered tools for color adjustment, relighting and beauty work, as well as to quickly isolate skies and salient objects for grading and VFX work. (Images courtesy of Autodesk)
Perforce and VP
Rod Cope, CTO of Perforce Software, sees AI as having a big impact on virtual production. He explains, “For one, AI is going to let creative teams generate a lot more art assets, especially as text-to-3D AI tools become more sophisticated. That will be key for virtual production and previs. Producers and art directors are going to be able to experiment with a wider array of options, and I think this will spur their creativity in a lot of new ways.”
The Synthetic World
Synthesis AI founder and CEO Yashar Behzadi opines that synthetic data will have a transformative impact on TV and film production in a number of areas, such as virtual sets and environments, pre-visualization and storyboards, virtual characters and creatures, and VFX and post-production.
Behzadi continues, “The vision for Synthesis AI has always been to synthesize the world. Our team consists of people with experience in animation, game design and VFX. Their expertise in this field has enabled Synthesis AI to create and release a library of over 100,000 digital humans, which serves as the training data for our text-to-3D project, Synthesis Labs.”
More on GenAI
“Now, with the emergence of more sophisticated generative AI models and solutions, we’re starting to look at many more ways to use it,” explains Digital Domain’s Basse. “Emerging tools in generative AI, such as ChatGPT, MidJourney, Stable Diffusion and RunwayML, show a lot of promise.”
Basse continues, “GenAI is really good for starting the creative process, generating ideas and choices. GenAI does not actually generate art; it creates variants and choices which are based on prior art, but this process can provide great starting points for concept art. The ultimate product will still come from human artists, as only they really know what they want. Having said that, I have high expectations for the use of GenAI technology in storyboarding and previsualization. I believe we will see a lot of traction with GenAI in those areas very soon.”
Autodesk’s Fischler notes, “Having the ability to generate high quality assets would be very impactful to content creators in production, but the challenge is making these assets production-ready for film, television or Triple A games. We are seeing potentially useful tools on the lower end, but it’s much harder to have AI generate useful assets when you have a director, creative director and animation supervisor with a creative vision and complex shot sequence to build.”
Wes Messina adds that text-to-3D-model technology “could be a game-changer, moving us away from the hard work of starting from scratch in developing 3D assets.”
LeDoux argues, “However, it’s important to remember that AI-generated concept art isn’t here to replace human creativity. Instead, it’s a rad tool that can add to and improve the artistic process. By using these AI technologies, artists can focus on the creative side of their work and bring the director’s vision to life more effectively, leading to super engaging and visually stunning productions.”
VFX Taskmasters
Overall, AI will help with many tasks. LeDoux comments, “If we divide it up into prep, production, and post-production, and then think of all of the aspects of VFX for each one, you can help wrap your mind around all of the applications. In prep, having generative tools such as Stable Diffusion to help create concept art is obvious, but other tools to help plan, such as language models to help parse the script for VFX-based bidding purposes, as well as planning via storyboarding and previz is massive. In production, having tools to help with digital asset management, stitching and asset building for virtual production is a massive time saver. In post-production, the list is endless from rotoscope assistance to color matching to animation assistance.”
“We think that AI will impact our entire workflow,” Basse says. “There are so many scenarios we can think about: creating 3D models with text prompts, creating complex rigs and animation cycles, but we also see potential applications in layout, lighting, texture and lookdev. There is also an expectation that machine learning will revolutionize rotoscoping, which is a very labor-intensive and tedious part of our workflow today.”
Perforce’s Cope adds, “AI is going to have an impact on quality assurance and workflow as well. I think we will see AI automate some of the more rote tasks in 3D animation, like stitching and UV mapping, and identifying rendering defects – things that take time but don’t require as much creativity. AI is going to accelerate those tasks. And, since AI allows teams to go faster, directors will demand even more with quicker turnarounds. Teams that don’t adopt AI in their workflows will be left behind sooner than later.”
VFX tasks that will benefit from AI also include object removal, matchmoving, color grading, and image upscaling and restoration, according to Synthesis AI’s Behzadi.
AI, VR and Video Games
It’s easy to imagine that AI could give a big boost to video games and VR by vastly increasing interactivity and realism. “Thinking on another level, I think that games as we know them will change,” Cope says. For example, “AI is going to open the doors for more natural and unique interactions with characters in an RPG. And could even lead to completely unique in-game experiences for each player and journey.”
Synthesis AI’s Behzadi comments, “Virtual reality experiences can be greatly enhanced by AI in several ways, including digital human development, enhanced simulations and training, as well as computer vision applications, to name a few.”
Behzadi continues, “AI can generate realistic digital humans or avatars that can interact with users in real-time. These avatars can understand and respond to users’ gestures, facial expressions and voice commands, creating more natural and engaging interactions within virtual environments. When coupled with computer vision techniques, AI has a powerful impact on enhancing the visual quality of VR experiences, including improved graphics rendering, realistic physics simulations, object recognition, and tracking users’ movements within the virtual environment. These advancements ultimately lead to more visually stunning and immersive VR worlds.”
The Road Ahead
Looking ahead, LeDoux opines, “While it’s true that AI, in its essence, is an automation tool with the potential to displace jobs, historical precedents suggest that automation can also stimulate job creation in emerging sectors.” A look back at the last quarter-century provides a good understanding of this trend, he notes. “The VFX industry has seen exponential growth, fueled largely by clients demanding increasingly complex visual effects as technology progresses.” LeDoux adds that AI will bring significant improvements in the quality and accessibility of visual effects, “thereby enhancing our capacity for storytelling and creative expression.”
Letteri comments, “In VFX we are always looking for new ways to help the director tell their story. Sometimes this is developing new tools that enable greater image fidelity or more sophisticated simulations of natural phenomena – and sometimes they are about finding ways to do all of that more efficiently.” Basse concludes, “Not a day goes by where we don’t see an announcement from our tool vendors or new startups touting some new accomplishment relating to content creation using AI and ML. It’s a very exciting time for our industry. For many applications, there is still a lot of work to be done, but this technology is evolving so rapidly that I think we need to measure these major advancements in months, not years.”
By NAOMI GOLDMAN
Janet Muswell Hamilton, VES, Senior Vice President, Visual Effects, HBO
How do we attract and retain a diverse pipeline of VFX and entertainment tech professionals to carry our global industry forward? How can women leaders help achieve parity and diversity in the workplace? How do we create a productive, inclusive culture that supports workforce development, equity and advancement? The issues of diversity, equity and inclusion are paramount and complex, and the clarion call for efforts to compel systemic and sustainable progress is loud and unwavering.
For the past four years, the VES and Autodesk have fueled a partnership dedicated to lifting up voices from often underrepresented communities through our “Ask Me Anything: VFX Pros Tell All” initiative. Working with Autodesk, we have interviewed more than two dozen professionals from diverse backgrounds. These industry leaders have shared lessons learned from their personal career journeys, brought forth bold actions their companies are undertaking to move the needle, and issued calls to action to their peers and colleagues to join the DEI movement.
This year, we sat down and hosted panel conversations with five extraordinary women leading the charge in visual effects and media & entertainment tech and gleaned their insights and ideas. Lending their voices to this ongoing conversation are: Leona Frank, Director of Media & Entertainment Marketing, Autodesk; Nina Stille, Director of Global Diversity & Inclusion Partners, Intel; Barbara Marshall, Global M&E Industry Strategy Lead, Z by HP; Janet Lewin, Senior Vice President, Lucasfilm VFX & General Manager, ILM; and Janet Muswell Hamilton, VES, Senior Vice President of Visual Effects, HBO. As individual leaders and as a collective, they exemplify what can be done to address shared challenges and achieve a shared vision for the global entertainment industry.
VFXV: In service of creating an inclusive work environment for your employees, what are some of the programs and initiatives offered by your company?
Leona Frank, Autodesk: We have nine “Employee Resource Groups,” including groups for Black, Latinx, LGBTQ+ and neurodiverse employees, because we want everyone to feel represented and a sense of inclusion. Anyone can join the groups as allies – you don’t have to be a member of that community. And because running these employee-led groups takes considerable volunteer time, we provide stipends for that important labor and emotional labor, in recognition of our employee dedication, and ensure that each group has the support of an executive sponsor.
Janet Lewin, ILM/Lucasfilm: When I was coming up in the industry, there wasn’t a language for what it took to develop the confidence and competencies to self-advocate and help navigate obstacles in the male-dominated industry. So, we are really investing in tools to help underrepresented groups and women specifically. “Promote Her” was piloted at ILM and focuses on teaching ‘soft skills’ – how to advocate for yourself, raise your hand for consideration, network, and combat that sense of imposter syndrome.
Nina Stille, Intel: We offer a wonderful service to all employees, a confidential “Warm Line,” staffed by advisors trained in resilience, who are available to provide counsel and discuss challenges and solutions. We also have our “Talent Keepers” program, aimed at engaging mid-level Black employees in the U.S. and Costa Rica and their managers. It helps employees with career empowerment and it fosters best practices for managers. Since the tracks ultimately merge, the co-creation of career development plans by employees and their supervisors has resulted in higher employee promotion rates and less race and gender bias in management practices.
Barbara Marshall, HP: We are quite bold in the sustainability and diversity realms; our goal is to be the most sustainable and just global technology company. Our strategy is built around three pillars – climate action, human rights and digital equity – interlinked and with specific programs. The company is very transparent in sharing our progress to meet our goals of achieving gender parity in 2023 and doubling the number of Black executives by 2025. I’m proud that our Board is already 46% women and that we have had two female CEOs, which is unique for a blue-chip, publicly-traded company.
VFXV: You are all committed to building the pipeline to bring up new talent. What are you doing and what needs to be done in the areas of recruitment and outreach to help bring diverse voices to the forefront?
Janet Muswell Hamilton, HBO: We are making strides, but to create that kind of rich workforce, we need to invest more in early education for middle school and high school students. We all need to do more to showcase that these kinds of jobs in VFX and entertainment exist, that they are exciting and viable careers, demystify our industry and lower the barriers to entry. We need to go beyond the schools where we tend to cultivate almost exclusively white men and expand our horizons in every aspect. I also appreciate the renewed focus I’m seeing in programs that look at training and retraining people who may have left the workforce, are looking to transfer from another industry or have valuable lived experience – like the work we are doing in the VES Education Committee to develop new career pathways for veterans.
Nina Stille, Intel: I’m proud of our “Relaunch Your Career” program, which we initially piloted in 2019 to help people who took a career break (parents, caregivers) to re-enter the workforce. This is an untapped and highly experienced workforce, often overlooked because of a break in their résumé, but rich in unique and valuable soft skills. In 2022, we hired 80 contractors for a 16-20 week ‘return-ship’; 87% of those people were converted to full-time roles and 88% of those converted were women and people from underrepresented communities. We are proud of the results and want to keep investing in efforts like these. We also have deep ties to Historically Black Colleges and Universities (HBCU) and partner with these academic institutions and organizations like AfroTech and Lesbians Who Tech to help identify and develop talent that encompasses a diverse spectrum of voices and experience.
Janet Lewin, Senior Vice President, Lucasfilm VFX & General Manager, ILM
Leona Frank, Director of Media & Entertainment Marketing, Autodesk
Nina Stille, Director of Global Diversity & Inclusion Partners, Intel
“[A] lot of what we see is teaching women or underrepresented minorities how to be successful in existing environments versus rebuilding environments to make space for different ways of being and doing that diverge from the dominant culture.”
—Nina Stille, Director of Global Diversity & Inclusion Partners, Intel
Barbara Marshall, HP: We also work with HBCUs and are a founding member of the HBCU Business Deans Roundtable, and introduced the business case competition, now in its fourth year, which gives students opportunities to develop solutions to real HP business problems and get hands-on experience. As part of our DEI strategy, we also overhauled our internship and graduate recruitment strategy – if you keep recruiting from the same places that are predominately white in the prospective talent pool, you won’t improve your diversity. This needs to be a holistic approach looking at every level from early education to pipeline cultivation to meaningful change in how we all approach hiring, training and employee support.
VFXV: Creating work environments that are more conducive to caregivers and working parents, especially amidst a childcare crisis, exacerbated by the pandemic, is a big topic of discussion in many industries. How does your company approach these issues?
Leona Frank, Autodesk: This topic is close to me, as I had two children during COVID. We offer a number of programs including “Flex Forward,” offering hybrid models for people to work from home, which has been a real game-changer and resonates really well with parents and caregivers. We also have working rooms at conferences where moms can pump, and we also pay for breast milk to be shipped home by our working moms while traveling. These kinds of programs put people at the center and help us look at how we can enable them to be their best self and get their best work done.
Nina Stille, Intel: I also had a COVID baby and so this resonates for me. We offer a hybrid and flexible work model that helps accommodate working parents and caregivers. Beyond providing additional paid medical leave, when people are aiming to reintegrate into the workforce we offer pathways to work part-time with full-time pay for a period of time to get back into the work rhythm. Through external vendors we also offer forums to join coaching sessions with other birthing parents to tap into community support, and financial assistance and priority no-fee enrollment for local childcare facilities and emergency no-fee childcare. These are all really attractive benefits for prospective and existing employees.
VFXV: There is often an onus on women and people from underrepresented communities to solve the inequities they did not create and lead the way forward. What can allies do to engage in this work around diversity, equity and inclusion to help create meaningful progress and change?
Janet Lewin, ILM/Lucasfilm: Be a mentor. Take someone under your wing and formalize that relationship and bring them into your process and mindset. For many people, the process is daunting, and being on set and working with filmmakers requires a lot of tutelage. Raise your hand, use your platforms and step up to help develop someone’s career and enable them to be successful. Align with vendors who share your company’s priorities and commitment to change. And think about hiring for potential and taking a leap of faith on someone.
Janet Muswell Hamilton, HBO: I agree, mentorship is essential, and helping people to grow and learn on-set etiquette – which can be a minefield – is an important element in and of itself, worthy of more training and educational tools. One idea to get someone ready for success is to ‘over hire’ on a project – hire a second supervisor with enormous potential, mentor them, give them a safe landing to ask questions and feel supported. Job shadowing and job sharing are great models to integrate into talent pipeline cultivation. That kind of on-the-job training and exposure can mean the difference in getting someone ready to play more of a senior role when the next opportunity arises and be successful.
Barbara Marshall, Global M&E Industry Strategy Lead, Z by HP
“As part of our DEI strategy, we also overhauled our internship and graduate recruitment strategy – if you keep recruiting from the same places that are predominately white in the prospective talent pool, you won’t improve your diversity. This needs to be a holistic approach looking at every level from early education to pipeline cultivation to meaningful change in how we all approach hiring, training and employee support.”
—Barbara Marshall, Global M&E Industry Strategy Lead, Z by HP
Barbara Marshall, HP: There are subliminal messages in everyday language, so one idea is to go through your documentation and remove language that is rooted in racism and sexism, like blacklist. Neutralize the language to not undermine any group. And on training – I find it fascinating that we tend to focus on coaching women how to be less emotional or lower our voices, and I think we should also be coaching men on how to understand and tune in to different voices and modes of working dynamics to be the best partners and team members.
Nina Stille, Intel: I agree that a lot of what we see is teaching women or underrepresented minorities how to be successful in existing environments versus rebuilding environments to make space for different ways of being and doing that diverge from the dominant culture. It’s also appealing to jump to action, but it must be grounded in your own self-awareness and understanding of how your positional power and privilege influences your world view and approach to work and to life. Operating from a place of intention and with a more intersectional lens can make an enormous difference in engaging in the work to be done.
Leona Frank, Autodesk: True allyship means making someone’s problem your own and pushing for change. As a Black woman, the emotional labor that comes with having to explain issues affecting me and my community is a huge onus and unfair expectation. Allies can take on the labor to self-educate, which keeps my back free. And when you are in a room that I am not in, where decisions get made, use your voice and push for equity issues in all aspects of hiring practices, performance reviews, opportunities for recognition and advancement. Raise critical questions and bold ideas that can really advance our DEI goals and help shape a more just and equitable future where new players have the chance to lend their unique talents – and truly be set up to thrive. We are all so much richer to be surrounded and influenced by a harmonious chorus of diverse voices.
Tune in to the full conversations with these dynamic Women Who Lead at https://www.vesglobal.org/ama
By CHRIS McGOWAN
Images courtesy of DNEG and Universal Pictures.
Robert Oppenheimer (Cillian Murphy) examines the atomic bomb prior to the weapon’s first test.
Theoretical physicist J. Robert Oppenheimer’s tenure as director of the Los Alamos Laboratory during World War II was arguably the most consequential job ever undertaken. Oppenheimer led a team of scientific luminaries as part of the Manhattan Project, which developed the first nuclear bombs, the most powerful weapons in history – changing the world forever. Oppenheimer felt great moral turmoil about his efforts, and when the Trinity test succeeded on July 16, 1945, he thought, “Now, I am become death, the destroyer of worlds,” words from the Hindu scripture the Bhagavad Gita.
In the movie, Cillian Murphy (as Oppenheimer) says, “They won’t fear it until they understand it. And they won’t understand it until they’ve used it. Theory will only take you so far. I don’t know if we can be trusted with such a weapon. But we have no choice.” After the end of World War II, Oppenheimer positioned himself against nuclear proliferation and the development of a hydrogen bomb, which put him at odds with many in the government and military. Christopher Nolan had long wanted to tell Oppenheimer’s story, and he has done so with a movie that is part biographical drama and part thriller, shot primarily in IMAX and best viewed on large-format, high-resolution movie screens.
Nolan dramatizes the frantic race of the American military to build the first A-bomb before the Nazis did (they were thought to be in the lead) and immerses the audience inside the brilliant Oppenheimer’s conflicted mind, from his deepest fears to imaginings of the quantum realm, the cosmos and the fate of the planet. He alternated color with black-and-white footage – the first time B&W IMAX 65mm film stock had been used for a feature film. And, typically, he tried to avoid CGI – not an easy task when dealing with the first atomic bomb explosion.
In the fall of 2021, Nolan announced his intention to write and direct the film, with Universal Pictures as the distributor. Nolan’s script was based on the Pulitzer Prize-winning American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer by Kai Bird and Martin Sherwin. Filming took place from February to May 2022, mostly in New Mexico. Emma Thomas, Charles Roven and Nolan produced the movie, in which Murphy portrays Oppenheimer. The star-studded ensemble cast includes Emily Blunt (Kitty Oppenheimer), Matt Damon (Leslie Groves), Florence Pugh (Jean Tatlock), Robert Downey Jr. (Lewis Strauss), Kenneth Branagh (Niels Bohr), Jack Quaid (Richard Feynman), Matthew Modine (Vannevar Bush), Tom Conti (Albert Einstein) and Gary Oldman (Harry Truman). The scientists portrayed make up a veritable 20th-century hall of fame for theoretical physics.
Oppenheimer was DNEG’s eighth consecutive film with Nolan; the studio has been the director’s exclusive outside VFX studio since Inception (2010). Andrew Jackson, Production VFX Supervisor, comments, “As well as having the creative experience from years of working with Chris, DNEG also has a huge benefit when it comes to solving the technical challenges of working with IMAX resolution in a largely optical production pipeline.”
Scott R. Fisher, Special Effects Supervisor, Giacomo Mineo, DNEG VFX Supervisor, Mike Chambers, VES, VFX Producer, and Mike Duffy, DNEG VFX Producer, also helped bring the movie to life. Long-time Nolan colleague Hoyte van Hoytema handled the cinematography.
Some of the techniques that the VFX team used to produce the spectacle of nuclear fission were also used to help create the scenes that portray Oppenheimer’s inner world.
“We wanted all of the images on screen to be generated from real photography, shot on film, and preferably IMAX. The process involved shooting an extensive library of elements. The final shots ranged from using the raw elements as shot, through to complex composites of multiple filmed elements. This process of constraining the creative process forces you to dig deeper to find solutions that are often more interesting than if there were no limits.”
—Andrew Jackson, Production VFX Supervisor
Oppenheimer was Andrew Jackson’s third film with Nolan, following Dunkirk and Tenet. “During that time, I have developed a strong understanding of his filmmaking philosophy,” Jackson says. “His approach to effects is very similar to mine in that we don’t see a clear divide between VFX and SFX, and believe that if something can be filmed it will always bring more richness and depth to the work.”
Christopher Nolan adjusts IMAX camera over Cillian Murphy (portraying J. Robert Oppenheimer). Oppenheimer was shot primarily in IMAX. (Photo: Melinda Sue Gordon)
Nolan challenged the VFX artists to pull off the visual effects in Oppenheimer without any computer graphics, if possible. Jackson comments, “I spent the first three months of the project in Scott Fisher’s workshop, working with him and his SFX crew to develop the various simulations and effects. We continued working closely with the SFX crew for the duration of the shoot – in fact, it was really one FX team.”
Mineo says, “The entire project was a constant creative brainstorming process, as we had to be imaginative and experimental with the footage to visualize what was in Oppenheimer’s mind. From creating the birth of stars by the talented comp artist Ashley Mohabir, to visualizing sub-atomic chain reactions and even the ‘end of the world’ scenario after the nuclear proliferation, designed by Marco Baratto, we pushed the boundaries of creativity.”
One difficulty was capturing Oppenheimer’s ideas and imagination given the scientific and visual references available in the 1940s. However much the scientist was a visionary, he was also a man of his time. Mineo explains, “Concepts like the Earth seen from space or modern physics were relatively new at the time. To truly portray Oppenheimer’s mindset, we had to let go of our modern understanding and delve into his world. It was a fascinating journey.”
There were about 150 total visual effects shots in Oppenheimer, 30% of which were in-camera, according to Chambers. He notes, “Being a period piece, some contemporary anachronisms were necessarily cleaned up, but only when glaringly obvious. As a general rule, pure fixes were used very sparingly.”
Mineo says, “We embraced old-style techniques such as miniatures, massive and micro explosions, thermite fire, long exposure shoots and aerial footage. The majority of the VFX work was based on these elements, without any reliance on CGI and mainly involving compositing treatments.”
Oppenheimer and his scientific team worked at the Los Alamos National Laboratory, located some 30 miles from Santa Fe, New Mexico. Mineo comments, “Surprisingly, we needed minimal interventions for Los Alamos as the town had been re-built almost entirely. Our focus primarily involved minor clean-up and retouching work for specific elements such as the tower and the bomb.”
Chambers notes, “Some of the magic was intercutting the magnificent set that [Production Designer] Ruth De Jong designed and built with some of the actual locations in Los Alamos itself, including the Lodge and Oppenheimer’s actual house.”
Placing a premium on practical effects, the VFX team generated a library of idiosyncratic, frightening and beautiful images to represent the thought process of Oppenheimer, who was at the forefront of the paradigm shift from Newtonian physics to quantum mechanics, looking into dull matter and seeing the extraordinary vibration of energy that exists within all things.
Atomic Energy Commission Chairman Lewis Strauss (portrayed by Robert Downey Jr.) carried a grudge against Oppenheimer. The film alternated color with black-and-white footage, and it marked the first time B&W IMAX 65mm film stock was used for a feature film. (Photo: Melinda Sue Gordon)
Christopher Nolan lays out the scene for Cillian Murphy, who portrays Oppenheimer. (Photo: Melinda Sue Gordon)
The VFX team was in sync with DP Hoyte van Hoytema. Jackson says, “Throughout the production process we worked closely with Hoyte experimenting, testing and developing ideas to illustrate the concepts in the script. On this project we worked with him and the camera department to develop an IMAX probe lens specifically for some of the FX elements. Hoyte also built some extremely powerful LED lights that we used in the tank shoot.”
The Trinity nuclear explosion was done without any CGI. “We knew that this had to be the showstopper,” Nolan told the Associated Press. “We’re able to do things with picture now that before we were really only able to do with sound in terms of an oversize impact for the audience – an almost physical sense of response to the film.”
A young Oppenheimer. The filmmakers had to be imaginative and experimental with the footage to visualize what was in the physicist’s visionary and conflicted mind.
Explains Chambers, the team strove to “determine how best to illustrate various aspects of a nuclear explosion without relying on CG VFX, stock footage, or actually setting one off.” The Trinity test was recreated using real elements. Mineo remarks, “We only had the original footage of the Trinity explosion as a precise reference. Chris Nolan was determined to keep the VFX grounded in reality and maintain the raw feeling of the actual footage. We had to find the right balance, striving to be minimal with our treatments while ultimately recreating such a massive event. Our talented team of compositing artists, including Peter Howlett, Jay Murray, Manuel Rivoir and Bensam Gnanasigamani, did an exceptional job working for months to create this iconic moment in history.”
Chambers adds, “Though there were a few shots using 2D compositing in order to utilize multiple elements, all elements were acquired photographically. For some effects, large pyrotechnical explosions were set off in the desert; however, for some views we also used a variety of cinematic tricks, including cloud and water tanks, forced-perspective miniatures and high-speed photography for scale.”
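For readers wondering why high-speed photography helps a miniature read at full scale, the standard rule of thumb in miniature work – a general principle, not necessarily the exact recipe used on this production – is to overcrank the camera by the square root of the scale factor, because gravity-driven motion (falling debris, splashing water, rolling smoke) scales with the square root of size. A minimal sketch of that arithmetic:

```python
import math

def overcrank_rate(base_fps: float, scale_factor: float) -> float:
    """Frame rate needed so a 1/scale_factor miniature moves like the full-size object.

    Rule of thumb only: overcrank by the square root of the scale factor.
    """
    return base_fps * math.sqrt(scale_factor)

# Example: a hypothetical 1/16-scale miniature shot against a 24 fps base rate
print(overcrank_rate(24, 16))  # -> 96.0 fps
```

Played back at normal speed, the overcranked footage slows the miniature’s motion so that it reads as full-sized on screen.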
For Chambers, “One of the big challenges for me specifically was the logistics of experimenting and shooting our elements and gags while traveling to all the locations and working alongside the main unit throughout principal photography. Though we didn’t have the luxury of a fixed stage for our work, the advantage of having the director at hand to review and expand on our ideas in real-time was priceless.”
Oppenheimer (Cillian Murphy) in an emotional moment with his wife (Emily Blunt) at Los Alamos. (Photo: Melinda Sue Gordon)
General Leslie Groves (Matt Damon), who was in charge of the Manhattan Project, confers with Oppenheimer (Cillian Murphy) at Los Alamos. (Photo: Melinda Sue Gordon)
Oppenheimer, in the searing light of his creation, at the Trinity test in New Mexico. The explosion was recreated using real elements and no CGI. Nolan wanted to take the audience “in the room” when the button is pushed.
Theoretical physicist Edward Teller (played by Benny Safdie) at the Trinity test. (Photo: Melinda Sue Gordon)
Director Chris Nolan with his long-time colleague, DP Hoyte van Hoytema, who is wielding an IMAX camera. The visual effects team worked with van Hoytema and the camera department to develop an IMAX fisheye probe lens specifically for some of the VFX elements. (Photo: Melinda Sue Gordon)
Some examples of Oppenheimer gags, according to Chambers, included “a special rig designed to spin beads on multiple axes, a special rig designed to recreate the effect of an implosion, and a special tabletop rig to recreate the effect of a ground-based shockwave. Some of these were relatively simple setups and others were more complex. Some were used as shot in-camera and others were combined with traditional elements to complete the shots. As the nature of our approach was primarily physical and tangible, Scott [Fisher] and the SFX crew helped to devise and create the various rigs and gags that we wanted to shoot.”
Director Nolan also sought a deeper immersion by employing IMAX, his preferred format. “The headline, for me, is by shooting on IMAX 70mm film, you’re really letting the screen disappear. You’re getting a feeling of 3D without the glasses,” Nolan told the Associated Press.
Chambers notes, “We used both IMAX 65mm and [Panavision Panaflex] 5perf 65mm film formats. 5perf was sometimes used for more intimate dialogue scenes as the IMAX cameras are not as quiet on the set. On the VFX unit, we also used some 35mm for extreme high-speed photography.”
Mineo adds, “IMAX is undeniably the most cinematic format, but it poses unique challenges for VFX due to its incredibly high resolution. At DNEG, we have a well-designed pipeline that caters to the IMAX workflow, thanks to our experience collaborating with Chris Nolan since our work on Batman Begins. As for the black-and-white footage, our color science team, led by Mario Rokicki, seamlessly adapted our pipeline to accommodate it.”
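To give a sense of what “incredibly high resolution” means for a VFX pipeline, here is a rough back-of-the-envelope estimate – the scan resolution and bit depth are illustrative assumptions, not DNEG’s actual specifications – for a 15-perf 65mm IMAX frame scanned at roughly 11K:

```python
# Illustrative figures only (assumed scan size and bit depth, not DNEG's specs).
width, height = 11_000, 8_000       # assumed ~11K scan of a 15-perf 65mm IMAX frame
channels, bytes_per_channel = 3, 2  # RGB at 16 bits per channel
fps = 24

frame_bytes = width * height * channels * bytes_per_channel
print(f"~{frame_bytes / 1e6:.0f} MB per frame")                    # ~528 MB
print(f"~{frame_bytes * fps / 1e9:.1f} GB per second of footage")  # ~12.7 GB/s
```

At those assumed figures, a single frame runs to roughly half a gigabyte before any compositing layers are added, which is why a pipeline purpose-built for IMAX resolutions matters.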
Jackson remarks, “The two biggest challenges were also the most rewarding aspects of working on this movie. Firstly, the script describes thoughts and ideas rather than specific visual images. This was both exciting and challenging as we searched for solutions we could build and shoot that were both inspired by the ideas in the story and were visually engaging.”
Continues Jackson, “The second challenge was the set of creative rules that we imposed on the project. We wanted all of the images on screen to be generated from real photography, shot on film, and preferably IMAX. The process involved shooting an extensive library of elements. The final shots ranged from using the raw elements as shot, through to complex composites of multiple filmed elements. This process of constraining the creative process forces you to dig deeper to find solutions that are often more interesting than if there were no limits.”
Mineo concludes, “One of the highlights was the opportunity to learn a different way of creating VFX. Understanding how Chris Nolan thinks and approaches his movies allowed us to bring his unique philosophy into our way of working. Initially, it presented some challenges and limitations, but the result is unquestionably real, 100% believable, and ultimately more satisfying.”