By TREVOR HOGG
Images courtesy of Netflix.
One of the hardest environments to execute was the dining room of the Kerberos.
Trading the sky-high adventures of The Aeronauts for the oceanic travels of 1899 is Visual Effects Supervisor Christian Kaestner, who oversaw the digital effects and virtual production for the science fiction mystery. The story unfolds at the turn of the 20th century aboard a passenger steamship traveling across the Atlantic Ocean from Southampton, U.K. to New York City, which discovers a derelict sister vessel that has long since been lost at sea. Unlike most episodic productions, which have multiple directors and a showrunner, Co-Creator/Executive Producer/Co-Showrunner Baran bo Odar helmed all eight episodes.
“Everything in the art department is built in Rhino 3D these days, so Udo would give us those assets which were then put into Unreal Engine without textures. Bo could put on the VR glasses, and we decided on the lenses, knew that the camera was going to be an ARRI ALEXA Mini LF, put in the film backs and laid out some cameras. We were able with a snapshot tool to do a camera pass of the scene that gave you a grey-scale storyboard. Then Bo went away and storyboarded other sequences because VR scouting is quite time-consuming. I marked up the storyboards determining what was going to be visual effects and virtual production.”
—Christian Kaestner, Visual Effects Supervisor
Scenes were broken down into a massive flowchart categorized by time of day and mood, because every episode covers a single day defined by a time (morning, afternoon, evening or night) and a condition (overcast, drizzle, heavy rain or fog).
An exciting aspect of the project was the opportunity to delve into the virtual production methodology. “When I started, we had rough outlines that were four pages long for each episode and worked closely with [Production Designer] Udo Kramer to figure out what needs to go where, what was going to be virtual production and set,” states Kaestner. “I was impressed that Bo and Jantje Friese [Co-Creator/Executive Producer/Co-Showrunner] knew in their heads exactly what they wanted. Bo [Odar] had a book of things that he liked. Because of COVID-19 we couldn’t be all in the room at the same time, so I even used some online whiteboards and asked, ‘Hey, what about these things?’ Bo put up his stuff, and we would mark up stuff that he liked. We were able to work on the mood, palette and desired lighting early on.”
HDRI plate photography was captured for the skies.
Martin Macrae, who is Head of Art Department at Framestore, illustrated with Odar some key moments and visuals that informed the production design. “Everything in the art department is built in Rhino 3D these days, so Udo would give us those assets which were then put into Unreal Engine without textures,” Kaestner remarks. “Bo could put on the VR glasses, and we decided on the lenses, knew that the camera was going to be an ARRI ALEXA Mini LF, put in the film backs and laid out some cameras. We were able with a snapshot tool to do a camera pass of the scene that gave you a grey-scale storyboard. Then Bo went away and storyboarded other sequences because VR scouting is quite time-consuming. I marked up the storyboards determining what was going to be visual effects and virtual production.”
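For readers curious about the mechanics, a scout camera with the production’s filmback can be laid out through Unreal’s editor Python API. The snippet below is a minimal sketch under stated assumptions – the sensor dimensions are the ALEXA Mini LF’s approximate open-gate values, and the focal length and placement are invented, not the show’s actual setup code:

import unreal

# Spawn a cine camera at an approximate deck position (Unreal units are cm).
location = unreal.Vector(0.0, 0.0, 170.0)  # roughly eye height
rotation = unreal.Rotator(0.0, 0.0, 0.0)
camera = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.CineCameraActor, location, rotation)

# Match the filmback to the production camera; the ARRI ALEXA Mini LF's
# open-gate sensor is approximately 36.70mm x 25.54mm.
filmback = unreal.CameraFilmbackSettings()
filmback.sensor_width = 36.70
filmback.sensor_height = 25.54

cam_comp = camera.get_cine_camera_component()
cam_comp.set_editor_property("filmback", filmback)
cam_comp.set_editor_property("current_focal_length", 35.0)  # sample prime

With the filmback fixed, every snapshot pulled from the engine shares the production camera’s aspect and field of view, which is what makes the grey-scale storyboards a reliable preview of final framing.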
“It was quite a big technical challenge to get the ocean into Unreal Engine looking the way we wanted it to appear. For everyone, it was about getting the mood across, so it was important that the scene and lighting represent what it was going to be later on. That’s why the pre-lighting process with [Cinematographer] Nikolaus Summerer was so crucial as well. Unreal Engine was not ready to get the final surface detail in real-time, but what we got right were the surface dynamics and the speed of travel. We were traveling between 18 and 30 knots. There were presets for that, so when the ocean was close up and out of focus we could keep it in Unreal Engine.”
—Christian Kaestner, Visual Effects Supervisor
With the action unfolding in the Atlantic Ocean, water was a dominant visual element. “It was quite a big technical challenge to get the ocean into Unreal Engine looking the way we wanted it to appear,” Kaestner states. “For everyone, it was about getting the mood across, so it was important that the scene and lighting represent what it was going to be later on. That’s why the pre-lighting process with [Cinematographer] Nikolaus Summerer was so crucial as well.” Perception of the height and scale of water is determined by the surface detail. Explains Kaestner, “Unreal Engine was not ready to get the final surface detail in real-time, but what we got right were the surface dynamics and the speed of travel. We were traveling between 18 and 30 knots. There were presets for that, so when the ocean was close up and out of focus we could keep it in Unreal Engine.”
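As a back-of-the-envelope illustration of those travel-speed presets: Unreal works in centimeters, so converting the ship’s 18-to-30-knot range is one line of arithmetic. The helper below is hypothetical, not production code:

# 1 knot = 1852 m per hour = ~51.44 cm per second.
CM_PER_SEC_PER_KNOT = 1852.0 * 100.0 / 3600.0

def knots_to_unreal_speed(knots):
    """Convert a ship speed in knots to Unreal units (cm) per second."""
    return knots * CM_PER_SEC_PER_KNOT

for preset in (18, 24, 30):
    print(f"{preset} knots -> {knots_to_unreal_speed(preset):.1f} cm/s")
# 18 knots -> 926.0 cm/s ... 30 knots -> 1543.3 cm/s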
A physical front and rear deck were constructed by Production Designer Udo Kramer.
Reducing the need to redress sets for different setups was a turntable that could rotate 360 degrees within two minutes.
Theatrical stage techniques came in handy. “When we were doing the scouting, we knew the path of the camera, and Bo and Nick wanted to shoot parallel with the camera so you would have the wide and closeup next to each other,” Kaestner remarks. “That meant to do a scene they would have to turn around quite a lot. The solution was to have a turntable because otherwise you would have to redress everything all of the time. That was so fun. To a certain degree we were playing with this new toy and asking, ‘What can we do with it?’ But at the same time there was the desire to be efficient, on budget and on time. That turntable was such a great solution because we had so many angles on the decks that had to be covered. To rotate 360 was like a two-minute turnaround.”
Footage was shot with a specific LUT and brighter than what was streamed.
“It was about how far we can push the game engine to give us something that is not plate photography, but a CG asset that holds up as believable. That was always going to be a challenge because we’re so used to physically-based rendering these days where everything looks so real. In that sense, the slightly darker palette and indirect muted lighting helped us because we could bake a lot of that in.”
—Christian Kaestner, Visual Effects Supervisor
HDRI plate photography was captured for the skies, Kaestner explains. “We broke down all of the scenes in the episodes into a massive flowchart, categorized by time of day and mood because every episode is one day that was either overcast, drizzle, heavy rain, morning, afternoon, evening or night.” To protect the LED screens from atmospherics, a rain rig was created by the special effects team led by Gerd Nefzer. “There were five to seven rain jets on top that we could turn on and off based on where they were shooting,” Kaestner adds. “We only had rain where it was needed for the shot. The engine and boiler rooms did not have any practical fire. We did have lots of smoke, especially fog, in Episode 103.”
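The value of such a flowchart is that scenes sharing the same conditions can be batched under one sky and one lighting setup. As a purely illustrative sketch (scene names invented), the grouping logic amounts to:

from collections import defaultdict

# Hypothetical breakdown: scene -> (time of day, weather).
SCENES = {
    "ep101_deck_a": ("morning", "overcast"),
    "ep101_deck_b": ("morning", "overcast"),
    "ep102_deck_c": ("evening", "drizzle"),
    "ep103_boiler": ("night", "heavy_rain"),
}

groups = defaultdict(list)
for scene, conditions in SCENES.items():
    groups[conditions].append(scene)

# Scenes sharing a (time, weather) pair can share one sky HDRI and one
# LED-volume lighting setup, keeping lighting consistent across weeks.
for conditions, scenes in groups.items():
    print(conditions, "->", scenes)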
Footage was shot with a specific LUT and brighter than what was streamed. “If you don’t have enough light in a LED volume, it ends up noisy, so we needed to be quite a bit brighter on set, but when you looked through the camera with all of the settings that we did then that was the mood Bo wanted,” Kaestner observes. “Steffen Paul [Lead Digital Colorist] did test grades with Da Vinci Resolve on set and sent it over to Netflix, which said, ‘This is really dark!’ There are some dark scenes especially at night, but the show has a nice cinematic look to it.” Over 1,000 visual effects shots for the eight episodes were divided between Framestore, DNEG and Pixomondo. “It was about how far we can push the game engine to give us something that is not plate photography, but a CG asset that holds up as believable. That was always going to be a challenge because we’re so used to physically-based rendering these days where everything looks so real. In that sense, the slightly darker palette and indirect muted lighting helped us because we could bake a lot of that in,” Kaestner notes.
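Shooting brighter than the delivery grade is, in linear light, a simple exposure offset. A toy sketch (not the show’s actual pipeline) of compensating a linear plate down by one stop before grading:

import numpy as np

def apply_stops(linear_image, stops):
    """Scale a linear-light image by 2**stops; negative stops darken."""
    return linear_image * (2.0 ** stops)

plate = np.full((4, 4, 3), 0.18, dtype=np.float32)  # mid-gray test patch
darker = apply_stops(plate, -1.0)                   # one stop down: 0.09

The LED volume gets the clean, bright signal it needs, while the LUT and grade restore the dark mood Odar wanted.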
The Unreal Engine had to be pushed far enough to produce something that is not plate photography, but a CG asset that holds up as believable.
A variant of the Kerberos steamship is the Prometheus. “Everything on the Kerberos was shot first and then we damaged and dressed it up as the Prometheus,” Kaestner remarks. “For the steamships we had some guidance like RMS Lusitania, which was from the period. We turned it black, changed the colors, and talked with Udo about deciding where the engine room was going to be. It was crucial that Udo came up with a physical front and rear deck because we knew from the view of the deck how much of the ship you would see. We built it in a way that we had enough detail to capture in-camera, but [it] would run in real-time with the water and skies.” Prometheus as a spaceship was always going to be a visual effects build for the exterior shots. “Bo wanted something that was in the direction of Alien-esque rather than ISS or NASA ship,” Kaestner says. “It had to look like a submarine with dark colors, weathered and used but still plausible. That’s why we ended up with the rotating mechanism and artificial gravity.”
“We got a lot out of it [virtual production], especially for episodic, because there were so many sets and scenes that we went back to that could be replicated and, with small modifications, cover the dining room of the Kerberos and have a variant of the dining room in the Prometheus. We had so many scenes on the deck with consistent lighting over three weeks. … I don’t think virtual production solves all of the problems, but when used correctly it can be powerful.”
—Christian Kaestner, Visual Effects Supervisor
One particular environment proved difficult to execute. “The dining room did not run in real-time the day before the shoot,” Kaestner reveals. “You push your assets to the limit right up to the last minute. Once we had done the pre-light with Nick, we knew exactly what he wanted in terms of look, and that’s when you can do optimizations such as, ‘This area is not going to be shot often, so maybe we can simplify the geometry here.’”
The engine and boiler rooms did not have any practical fire.
Essential atmospherics were fog and rain, with a special rig created for the latter by the special effects team led by Gerd Nefzer to protect the LED screens.
Virtual production, if used wisely, can be extremely efficient. “We got a lot out of it,” Kaestner says, “especially for episodic, because there were so many sets and scenes that we went back to that could be replicated and, with small modifications, cover the dining room of the Kerberos and have a variant of the dining room in the Prometheus. We had so many scenes on the deck with consistent lighting over three weeks.” The lack of greenscreen also helps with editorial turnovers, making it easier for the director and editor to put something together. “I don’t think virtual production solves all of the problems, but when used correctly it can be powerful,” he concludes.
By TREVOR HOGG
Images courtesy of Crunchyroll.
A signature visual element for Makoto Shinkai, which began with his self-produced animated shorts, is the incorporation of lens flares.
With the huge box-office success of Your Name, Weathering With You and Suzume, there is no denying the international appeal of Japanese filmmaker Makoto Shinkai. His potent cinematic mix combines teenage romance with time-shifting narratives that are framed by impending natural disasters – and Suzume ups the thematic ante. Doors throughout Japan must be closed to prevent a red molten-lava worm residing in the realm of the Ever-After from entering the land and falling to earth, an impact that would be of catastrophic proportions. On her way to school, teenage Suzume Iwato encounters the mysterious Souta Munakata, who has the task of locking the vulnerable access points.
“The central theme of Suzume is the 2011 Great Eastern Japan Earthquake. Suzume being from the Tōhoku area, her entire life was uprooted in an instant. Because of this earthquake and tsunami, Suzume had to move from the east to the west, like many others. That meant her story was going to start from Kyushu, which is the most western point of Japan, and gradually she would travel east back to her hometown.”
—Makoto Shinkai, Director
Speaking through a translator, Shinkai reveals that Japan’s devastating history of earthquakes, rainstorms and tsunamis influenced where the doors were situated. “The central theme of Suzume is the 2011 Great Eastern Japan Earthquake. Suzume being from the Tōhoku area, her entire life was uprooted in an instant. Because of this earthquake and tsunami, Suzume had to move from the east to the west, like many others. That meant her story was going to start from Kyushu, which is the most western point of Japan, and gradually she would travel east back to her hometown.”
Causing environmental havoc and becoming a social-media darling is the western keystone that transforms into the Daijin.
Initially, 2D rather than 3D animation was considered, but the latter better conveyed the idea that the soul of Souta has been trapped inside a rigid chair.
The first stop on the countrywide road trip is in Ehime where Suzume meets fellow high school student Chika transporting tangerines that spill onto the road. “In 2018, Ehime had massive rainstorms that happened because of climate change, which resulted in many landslides. That was a very shocking story for us at the time,” Shinkai explains. “Then Suzume moves on to Kobe where in 1995 there was the Great Hanshin-Awaji Earthquake that hit the western side of the country. From there she travels to Tokyo where in 1923, exactly 100 years ago, there was a massive earthquake that hit the city. Ultimately, Suzume ends up in the Fukushima area where a massive earthquake and tsunami caused a nuclear power plant to have a meltdown. Even to this day, that area around the power plant is completely quarantined and sectioned off. There are these remnants of human life where you can see people lived at one point but that now can’t be inhabited. Thinking about where Suzume was going to travel across from where she lives now on the west side, all the way to her hometown of Tōhoku in the east, the disaster-stricken areas almost presented themselves and informed where she was going to travel.”
Every shot was color-corrected differently to create the lighting nuances normally associated with live-action.
The color palette ranges from vibrant to monochromatic to convey the desired mood.
Doors to the Ever-After are found in various forms and ruined places throughout Japan, such as a spa, school and amusement park.
“Ultimately, Suzume ends up in the Fukushima area where a massive earthquake and tsunami caused a nuclear power plant to have a meltdown. Even to this day, that area around the power plant is completely quarantined and sectioned off. There are these remnants of human life where you can see people lived at one point but that now can’t be inhabited. Thinking about where Suzume was going to travel across from where she lives now on the west side all the way to her hometown of Tōhoku in the east, the disaster-stricken areas almost presented themselves and informed where she was going to travel.”
—Makoto Shinkai, Director
Suzume accidentally releases the western keystone that helps to contain the worm, and it subsequently takes on the form of a mischievous daijin (a cat that can speak) that casts the soul of Souta into a three-legged wooden chair, a childhood birthday present made by Suzume’s mother, who was killed by a tsunami. “With respect to the chair, we initially explored a 2D hand-drawn visual expression,” Shinkai reveals. “We ran some tests and had animators animate a few different scenes, but I wasn’t happy with the result because it was a movement that I had seen before onscreen reminiscent of Disney’s Beauty and the Beast where the china plates and cups are moving around. Life was almost breathed into these inanimate objects, which in the case of Souta was something I didn’t want to do. I wanted it to feel as though his soul was trapped inside of something rigid, so we started experimenting with 3D and ultimately shifted the entire process for that character into a 3D pipeline because I wanted that solid object to appear and feel solid and rigid, but at the same time express fast and swift movements.”
When a tsunami resulting from the 2011 Great Eastern Japan Earthquake killed her mother, Suzume and her aunt settled on the western island of Kyushu.
Aspiring teacher Souta carries on a family tradition by traveling throughout Japan closing vulnerable doors connected to the Ever-After.
With his bigger productions, Makoto Shinkai has been able to focus on character animation.
Souta is threatened by the tendrils of the worm, which was always intended to be a combination of 3D and 2D animation in order to take advantage of both techniques.
Early on, the decision was made that the worm would be animated in 3D. “This is because in Princess Mononoke, by Hayao Miyazaki and Studio Ghibli, there is this cursed god that appears in the film, which is a bunch of different worms that consume giant monsters and squirm around,” Shinkai states. “That was an original inspiration for the imagery of the worm that I wanted to create. But what we already knew at the studio was that it was going to be impossible to exceed what Miyazaki had achieved in that visual expression of the worm, even in 1997 when Princess Mononoke came out. The level of experience that Studio Ghibli had, combined with the budget they were able to access, created a potent combination we knew we couldn’t take on with good results. Instead, we animated in 3D from the outset and relied a lot on what the software can do right now. For example, some of the physics simulations. We did a lot of liquid and particle simulation, adding to and complementing that with hand-drawn animation, which is ultimately how we ended up with the visual expression of the worm that you see on the screen.”
“[I]n Princess Mononoke, by Hayao Miyazaki and Studio Ghibli, there is this cursed god that appears in the film, which is a bunch of different worms that consume giant monsters and squirm around. That was an original inspiration for the imagery of the worm that I wanted to create. But what we already knew at the studio was that it was going to be impossible to exceed what Miyazaki had achieved in that visual expression of the worm… Instead, we animated in 3D from the outset and relied a lot on what the software can do right now. For example, some of the physics simulations. We did a lot of liquid and particle simulation, adding to and complementing that with hand-drawn animation, which is ultimately how we ended up with the visual expression of the worm that you see on the screen.”
—Makoto Shinkai, Director
Interestingly, a lot of anime makes use of antiquated flip phones, but Makoto Shinkai effectively incorporates smartphones and the impact of social media into the storyline of Suzume.
Depending on whether the shot is wide, mid or a closeup of Suzume, the color and lighting of her eyes vary accordingly.
Nuances, such as sweat rolling off the face of Tamaki Iwato as she transports her niece Suzume on a bicycle, added believability to the animated performances.
The first stop on the road trip across Japan is in Ehime where Suzume meets fellow high school student Chika transporting tangerines.
“Depending on what was in the background, where the sun was hitting and how tight the shot was, I adjusted the color palette to make sure we were evoking the mood that I wanted. You could take a torso-up shot of Suzume and the color of her eyes are different from a super-tight shot where you just see her eye. This isn’t normally something you would do in animation of this scale. But that produces a similar effect to live-action shots where cut by cut you get nuances of what the lighting is doing, and even the skin tone and mood will change from shot to shot. Instead of blanket-saying ‘This is Suzume’s color,’ I wanted to control shot by shot how that would look, and that’s how we arrived at what you see on the screen.”
—Makoto Shinkai, Director
Extensive attention was paid to the color palette and lighting schemes. “In regards to the color palette, or more specifically the lighting if you will, I was careful and intentional as to how I wanted to depict that,” Shinkai remarks. “Throughout the film there are over 2,000 shots, and I personally oversaw the coloring and lighting for each one of them. Normally for animation of this caliber, Suzume would have her own color palette and perhaps two other variants for how she would look in the morning, noon, where there is strong sunlight, and night when there isn’t as much light. I’m sure that is a result of trying to optimize the process of animating such a massive-scale movie. But I wanted to step outside of that box and go shot by shot. Depending on what was in the background, where the sun was hitting and how tight the shot was, I adjusted the color palette to make sure we were evoking the mood that I wanted. You could take a torso-up shot of Suzume and the color of her eyes are different from a super-tight shot where you just see her eye. This isn’t normally something you would do in animation of this scale. But that produces a similar effect to live-action shots where cut by cut you get nuances of what the lighting is doing, and even the skin tone and mood will change from shot to shot. Instead of blanket-saying ‘This is Suzume’s color,’ I wanted to control shot by shot how that would look, and that’s how we arrived at what you see on the screen.”
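In pipeline terms, this approach replaces one global character palette with per-shot overrides. The sketch below is hypothetical rather than the film’s actual system, but it illustrates the data flow – a default color set, with shot-level entries that win when present:

# Hypothetical per-shot color overrides checked before the default palette.
DEFAULT_PALETTE = {"suzume_eyes": (0.35, 0.22, 0.12)}

SHOT_OVERRIDES = {
    "c2041_closeup": {"suzume_eyes": (0.42, 0.27, 0.14)},  # warm backlight
    "c2044_night":   {"suzume_eyes": (0.18, 0.14, 0.16)},  # cool moonlight
}

def color_for(shot, element):
    return SHOT_OVERRIDES.get(shot, {}).get(element, DEFAULT_PALETTE[element])

print(color_for("c2044_night", "suzume_eyes"))  # the per-shot value wins

Scaled across 2,000-plus shots, an override table like this is where shot-by-shot lighting decisions would actually live.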
The worm that threatens Tokyo was inspired by Princess Mononoke.
Young Suzume upon exiting the realm known as the Ever-After and surviving the tsunami that killed her mother.
Ruins found in the Ever-After reflect the devastation found in the land of the living.
For the first time, Suzume witnesses a mysterious supernatural entity known as the worm attempting to break through a door connected to the Ever-After.
Despite being known way beyond the borders of Japan, Shinkai has not forgotten his humble beginnings when all he had was a Power Mac G4, LightWave, Adobe Photoshop 5.0, Adobe After Effects 4.1 and Commotion DV 3.1. “I tend to use water, rain and lens flares a lot, and that has to do with my background and how I became a director,” Shinkai reflects. “I don’t have an animator background. I had never worked in or had any experience with a major studio until later on. Self-producing my animated shorts [Voices of a Distant Star, She and Her Cat: Their Standing Points and Other Worlds] is what led me to rely on some of those techniques. When you’re creating animation alone, the heaviest lifting in that context is the character animation, so I tried to figure out different methods of storytelling. Sometimes I would rely on 3D CG or cut to a blue sky. Among the various elements I had access to were rain and lens flares, because in Adobe After Effects you don’t have to hand-draw every single droplet of rain. I could rely on the PC to do some of the lifting for me in terms of that visual expression. The same goes for the lens flares. I would say that this is almost a by-product of my unique career path [he was a video game animator at Nihon Falcom] that enabled me to rely on certain techniques as storytelling tools and freed me up from the character animation. It’s the result of not relying solely on character animation as a form of expression or to drive the story forward that you still see those artifacts in my movies today.”
By CHRIS McGOWAN
Images courtesy of Universal Pictures.
Santa (David Harbour) wields his hammer against mercenaries holding a wealthy family hostage during Christmas time. Director Tommy Wirkola sought to keep the visuals warm and seasonal, despite the mayhem.
It’s a snowy Christmas Eve and a crack team of mercenaries has taken a wealthy and powerful family hostage in their luxurious mansion. Led by the self-named “Scrooge” (John Leguizamo), the criminals are well organized but have not anticipated a crucial detail: Santa Claus (David Harbour) has also arrived there while making his rounds with sleigh and reindeer. Furthermore, this is not your traditional Santa; rather, he is hard-drinking, gluttonous and tattooed, with a violent past – before becoming St. Nick, he was a Viking warrior named Nikamund the Red, wielder of a fearsome hammer. And when the night’s mayhem ensues, his reindeer flee, leaving him stranded. While there, the cynical Santa takes pity on a sweet little girl, Trudy (Leah Brady), who is among the hostages, and he decides to rescue her.
Tommy Wirkola directed the 87North Productions film, distributed by Universal Pictures. Beverly D’Angelo, Alex Hassell, Alexis Louder and Edi Patterson were also in the cast. Crafty Apes, which provided the VFX, came to be involved because “we have been fortunate to work on multiple projects with Kelly McCormick and David Leitch and their 87North shingle over the years,” comments Matt Akey, Chief Marketing Officer and Executive Producer at Crafty Apes.
The self-named villain “Scrooge” (John Leguizamo) ties up Santa with Christmas lights in another upending of seasonal convention. (Photo: Allen Fraser)
Violent Night required some unusual VFX, including a digi-double Santa, CG sleigh and reindeer, magical travel through fireplaces, Christmas weapon extensions, snow-augmented environments and an AR-like naughty-and-nice list. An action-comedy for the holiday season, the film has elements of Die Hard and Home Alone, but is over-the-top bloody and definitely not for children. Yet, Violent Night manages to achieve a cheery seasonal look despite a whole lot of gruesome violence. “The film’s director, Tommy Wirkola, was very clear in his direction and what the movie should feel like. Despite the dark tones and humor, the essence of the film still needed to be a feel-good Christmas movie. We always relate Christmas to magic. It is what gives the film a heart, and that’s what we went after. We kept the visuals bright and sparkly,” says Crafty Apes VFX Producer Neh Jaiswal.
“The film’s director, Tommy Wirkola, was very clear in his direction and what the movie should feel like. Despite the dark tones and humor, the essence of the film still needed to be a feel-good Christmas movie. We always relate Christmas to magic. It is what gives the film a heart, and that’s what we went after. We kept the visuals bright and sparkly.”
—Neh Jaiswal, VFX Producer, Crafty Apes
Santa and reindeer are on the roof, with a boost from bluescreen.
Crafty Apes VFX Supervisor Aleksandra Sienkiewicz adds that Wirkola “wanted colors to pop, be bright, red, white and magical. Same with the set design, there is lots of warmth on set coming from the fireplaces, lights and interior design to make you feel cozy and embrace the Christmas ambience.” Wirkola and Co-Producer Leitch also wanted the fights to have some Christmas spirit, she notes. “All of Santa’s weapons involved Christmas ornaments, candy canes, Christmas stars and Santa’s sack.”
Sienkiewicz continues, “We started look dev early in pre-production to ensure we set up the tone and look with Tommy while we were on set. We had our team back in [our] Vancouver and L.A. studios working on early FX sims, led by FX Supervisor Andrew Furlong and FX artists Jaclyn Stauber and Árni Freyr Haraldsson, while we were shooting back in Winnipeg. That was great to have constant feedback early in the game before we got all the plate turnovers.”
Crafty Apes’ CG team led by Jon Balcome was in charge of generating eight reindeer. Industrial Pixels was responsible for scanning the reindeer.
“[Director Tommy Wirkola] wanted colors to pop, be bright, red, white and magical. Same with the set design, there is lots of warmth on set coming from the fireplaces, lights and interior design to make you feel cozy and embrace the Christmas ambience. All of Santa’s weapons involved Christmas ornaments, candy canes, Christmas stars and Santa’s sack.”
—Aleksandra Sienkiewicz, VFX Supervisor, Crafty Apes
Sparkly CG magic dust had to be created for Santa’s fireplace entrances and exits. Sienkiewicz recalls, “Back on set, we made sure we scanned all the rooms in the mansion, as there were several chimneys that Santa was escaping from. Our On-Set Supervisor, Adam Wagner, did a fantastic job doing photogrammetry that helped us with tracking and proper particle interactions. Industrial Pixels was responsible for Santa scans that were used to create the Santa digi-double, which we used to emit the particles and integrate with plate Santa. All the simulations were done in Houdini and composited in Nuke with the help of Point Render and Nuke particles by our amazing team led by Mark Derksen, Compositing Supervisor.”
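The Houdini setups themselves are proprietary, but at its core a magic-dust pass is a particle integration loop: emit from the digi-double’s surface, then advect with buoyancy, drag and turbulence. A toy numpy version of that update, assuming surface sample points already exist, might look like:

import numpy as np

rng = np.random.default_rng(7)
pos = rng.uniform(-1.0, 1.0, size=(500, 3))  # stand-in for surface samples
vel = np.zeros_like(pos)
dt, drag, buoyancy = 1.0 / 24.0, 0.92, 0.4

for frame in range(48):  # two seconds at 24 fps
    jitter = rng.normal(0.0, 0.3, size=vel.shape)  # sparkle turbulence stand-in
    vel = vel * drag + (np.array([0.0, 0.0, buoyancy]) + jitter) * dt
    pos += vel * dt  # dust drifts upward and disperses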
Reindeer – practical and CG – posed a special challenge. The VFX team brainstormed several shots with reindeer at different levels of complexity.
Reindeer – practical and CG – posed a special challenge. Sienkiewicz explains, “After our first meeting with the pre-production team, director and producers reviewing the script and storyboards, we tried to figure out the logistics and the most cost-effective yet visually good-looking approach. There was lots of brainstorming involving several shots with reindeer, with different levels of complexity. From the beginning, we knew we needed two or three real reindeer. Time was pressuring us. Little did we know the reindeer were going to lose their antlers in February, so we needed all reindeer scenes to be scheduled as soon as possible. Since our main unit was shooting in Winnipeg, the only place in Canada to accommodate our request was Calgary. We decided to have a second unit in Calgary led by Wagner, who was responsible for shooting all the reindeer elements, reference photos and videos, textures, HDRI and light studies for all the shots we broke down earlier in storyboards.”
Compositing Santa and sleigh against the moon.
According to Sienkiewicz, “This material was very important to us, especially for the animation team led by Burke [Roane] and Trey Roane, to have visual references of the reindeer movement, walk cycles and overall body and head movement. Industrial Pixels was responsible for scanning the reindeer to help us with the CG models. Crafty Apes’ CG team led by Jon Balcome was in charge of generating eight reindeer, reins, sleigh and a Santa digi-double, which was fairly challenging.”
“We started look dev early in pre-production to ensure we set up the tone and look with [director] Tommy [Wirkola] while we were on set. We had our team back in [our] Vancouver and L.A. studios working on early FX sims, led by FX Supervisor Andrew Furlong and FX artists Jaclyn Stauber and Árni Freyr Haraldsson, while we were shooting back in Winnipeg. That was great to have constant feedback early in the game before we got all the plate turnovers.”
—Aleksandra Sienkiewicz, VFX Supervisor, Crafty Apes
Sienkiewicz adds, “Seeing all the characters coming to life and matching our on-set reindeer was very satisfying. In terms of Santa’s sleigh, the production team designed it, but we built our own CG sled asset based on the photogrammetry and LiDAR scan used for our in-air flying shots. Tommy always had a specific mindset about how the reindeer should move; he always described it as the reindeer treading water instead of flying in the empty void. That was always our biggest challenge – not to make the animals look cartoonish.”
Crafty Apes’ responsibilities included buildings and environment extensions, and augmenting the cabin fire in the final act.
Violent Night was filmed in Canada in the dead of winter, with temperatures ranging mostly from -15°C to -40°C. “Most of the movie was shot in cold Winnipeg,” Sienkiewicz says. “Interior shots were shot on stage at Manitoba Film Studios [in Winnipeg]. We had several rural locations in the middle of nowhere around Winnipeg and Calgary, which were pretty cold and miserable, but we still managed to have so much fun.”
The snow was mostly cooperative. Sienkiewicz comments, “I must say we were pretty lucky with the locations during the shoot in Winnipeg, so lots of the snow was practical. The sequence that required the most VFX was the end act, where Santa and Scrooge fight in front of the cabin ruins.” Due to extremely cold temperatures, “we decided to replicate the cabin in the studio and do set extensions behind the cabin where the snowmobile crashed. Adam Wagner took reference photos and plates that we stitched together and created a panorama that we re-projected on several cards in 3D space to create the environment behind it.” Jaiswal recalls, “We did a good amount of barn and mansion extensions, but most of the wintry landscape was scouted and on set. As nature intended, there were some no-snowfall days during the shoot period, and some days had a fair amount of snowfall. We had to add in falling snow in a number of shots at the end sequence to maintain continuity.” Cold weather was also a factor in other ways. “Having iPads and cameras dying all the time, tracking markers not sticking to the bluescreen or our feet turning into icicles was a struggle,” Sienkiewicz explains.
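In compositing terms, the card setup Sienkiewicz describes is a stock projection rig: the stitched panorama is projected through a camera onto cards placed in 3D space. A minimal, hypothetical Nuke script (file path invented) could look like:

import nuke

pano = nuke.nodes.Read(file="stitched_panorama.exr")  # invented path
cam = nuke.nodes.Camera2()        # matched to the plate camera
proj = nuke.nodes.Project3D()
proj["crop"].setValue(True)
proj.setInput(0, pano)            # image to project
proj.setInput(1, cam)             # projection camera

card = nuke.nodes.Card2()         # one of several cards placed in 3D space
card.setInput(0, proj)
# The cards are then positioned behind the set and rendered through a
# ScanlineRender to produce the environment extension.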
“From the beginning, we knew we needed two or three real reindeer. Time was pressuring us. Little did we know the reindeer were going to lose their antlers in February, so we needed all reindeer scenes to be scheduled as soon as possible. Since our main unit was shooting in Winnipeg, the only place in Canada to accommodate our request was Calgary. We decided to have a second unit in Calgary led by [On-Set VFX Supervisor Adam] Wagner, who was responsible for shooting all the reindeer elements, reference photos and videos, textures, HDRI and light studies for all the shots we broke down earlier in storyboards.”
—Aleksandra Sienkiewicz, VFX Supervisor, Crafty Apes
Santa’s naughty-and-nice list looks like an ancient scroll equipped with AR. The art team incorporated Santa’s ancient Viking roots, and utilized runes, old-style calligraphic lettering and lots of magical particles to bring the scroll to life.
A sequence involving Santa and a snowmobile flying through the air and crashing into the snow was “super fun to shoot,” Sienkiewicz says. “It was a mix of a location shot in rural Winnipeg and an in-studio shoot, since we couldn’t shoot a snowmobile crashing outside due to logistics and freezing temperatures. The team wanted to shoot as much as possible in camera, so the special effects team was responsible for flipping the snowmobile and the Crafty team was in charge of CG Santa flying across, set extensions to the cabin ruins and snow enhancements. Firstly, Industrial Pixels scanned David Harbour and the snowmobile, providing us with textures and reference images. Our CG team was responsible for building a Santa digi-double that was used in several shots. We object-tracked the snowmobile and passed that to our animation team, who did the magic!”
While a lot of the fight scenes were shot practically, there were significant CG components for some scenes, according to Jaiswal, who comments, “The fight sequence inside the barn involved a lot of CG blood, a CG blade for the snowblower as one of the mercenaries gets pulled in it, [and] there are a couple of shots with a CG icicle during the scene where Linda is fighting off the mercenary, along with other sequences. Mark Derksen, our Comp Supervisor, did an amazing job leading the team to make it look seamless.”
The mansion had an impregnable vault, and it was necessary to replace and animate its doors. The CG team led by Jon Balcome and Sean Richie had to match CG to practical vault rings.
Sienkiewicz adds, “We had an amazing prop team, and most of the weapons were practical, but we needed to augment them in some cases whenever it was not safe for the actors. We were responsible for knife extensions, flying weapons, candy cane extensions and adding CG nails to Gingerbread’s mouth in the Home Alone sequence. We were in charge of lots and lots of blood and wound enhancements. Tommy Wirkola was very particular about how the blood should look, so it was very important for us how we approached it.”
Crafty Apes was also in charge of buildings and environment extensions and augmenting the fire in the cabin ruins in the end act, according to Sienkiewicz. In addition, “the Viking environment shots were built from scratch. Tommy wanted to include his Norwegian roots in the movie, so with the help of our amazing DMP artist, Karlie Rosin, and the compositing team led by Mark Derksen, we created a mountainous Nordic environment. The environment is all DMP reprojections in 3D space, with lots of practical and FX embers, fire and smoke composited by Aragon Pawson.”
For Santa’s naughty-and-nice list, which looks like a high-tech ancient scroll, the production aimed to create something simple-looking and aesthetically pleasing that incorporated Santa’s ancient Viking roots, says Sienkiewicz. “Firstly, we developed several concepts to help establish the look with our talented DMP artists Katrina Chiu and Ivo Horvat before we started doing any FX work, which was done with Houdini. Tommy was going for the magical, warm, organic see-through look that would tie seamlessly into the plot and other effects visible in the movie. We decided to introduce runes, old-styled calligraphic lettering and lots of magical particles that bring the scroll to life.”
Bluescreen gives way to flaming destruction surrounding St. Nick in a flashback, to when he was a Viking warrior named Nikamund the Red, wielder of a fearsome hammer.
Sienkiewicz adds, “Lastly, the [house] vault – there were two shots where we needed to replace and animate the vault doors to spin and lock them in place. The CG team led by Jon Balcome and Sean Richie did an amazing job matching CG to practical vault rings. We were lucky to have all the data from the set and art department sketches to help us match the vault 1:1.”
Violent Night as a show grew in size for Crafty Apes. Jaiswal recalls, “The VFX segment almost doubled in terms of shots and potential complexity. So from the VFX standpoint, time was definitely a challenge. We had but a few months to deliver the show. There was constant communication with clients about upcoming deadlines and targets. But the team came through, and we were able to deliver all of our 300+ shots in a timely manner.”
By EVAN HENERSON
Photos by Doug Scroggins courtesy of Scroggins Aviation, except where noted.
The Scroggins team works at assembling the escape vessel by attaching the plastic-formed panels and adding the control boxes to the inner walls.
A worker cleans the surface of a CNC’d 20-lb. Precision Board foam part before it is used to vacuum-form panels out of ABS sheets.
Leave it to an aviation company that supplies airplanes and helicopters to the movies to help Adam Driver’s interplanetary explorer Mills take flight. Without an assist from Scroggins Aviation Mockup & Effects, the hero of the film 65 might have found himself stuck on a doomed planet, victim to rampaging dinosaurs, an apocalyptic meteor shower or both.
“It just always bothered me [the vehicle cockpits from 1977’s Star Wars], and so the one thing I wanted to do was create something that was really unique and different that looked like it was off-world, so we did just that.”
—Doug Scroggins, Founder/CEO, Scroggins Aviation Mockup & Effects
To this point, the Las Vegas-based Scroggins Aviation has been creating aviation mockups and effects for movie and TV productions, with a diverse slate of credits ranging from Iron Man 3 and Manifest all the way up through Spider-Man: No Way Home and Black Adam.
Vacuum-forming the parts.
But while they have created countless cockpits and choppers, the one genre largely absent from Scroggins’ output has been craft for science fiction films, with the 2018 Hulu series The First being – fittingly – the first time that the company was enlisted to mock up an actual spacecraft.
“We have been looking to do more in the science fiction world,” says company Founder and CEO Doug Scroggins. “For The First, we built an Orion space capsule, and the detail was just out of this world. 65 was a major take-on for us, and I was really honored to do it.”
Interior of the escape vessel shows plastic panels in place, painted in primer and ready for color. Electrical wiring was run to the key locations for monitors and lighting.
Overview of the escape vessel in its final stage of assembly at Scroggins Aviation’s shop.
In 2020, Scroggins was contacted by longtime friend and professional colleague, Kevin Ishioka, the Production Designer on 65 (which then carried the working title Zoic). The original request was for portions of the mothership including the airlock doors, control boxes, panel and one crew seat.
Scroggins takes pride in the detail of the interiors of its craft. An unabashed Star Wars fan, Scroggins acknowledges that the Home Depot-ish look of some of the vehicle cockpits from the 1977 classic has always been irksome. “It just always bothered me,” says Scroggins, who added that he would one day love to work on a Star Wars film, “and so the one thing I wanted to do was create something that was really unique and different that looked like it was off-world, so we did just that.”
“I’m thinking, ‘Oh, that [the spinning crew seats] changes everything. There goes all the physics, the engineering and everything. Now we’re going to need to have humans in this thing, and they’re going to be encapsulated while it’s being spun around in all kinds of different directions. Now we’re going to have to have an engineer sign off on this.’”
—Doug Scroggins, Founder/CEO, Scroggins Aviation Mockup & Effects
Ultimately, as production needs developed, Scroggins Aviation was also asked to take on the escape vessel and the two crew seats that are featured prominently at the end of the film. Those seats would actually end up holding people and would need to be able to take the pounding of a motion-based rig or gimbal. Then came another wrinkle. The script called for the rig to be on a gyro platform that would enable the characters to be spun around. And that, says Scroggins, is where things started to get interesting.
Escape vessel onstage, placed in a special effects rig. (Photo: Kevin Ishioka)
Inside the escape vessel cockpit, dressed and ready for filming. (Photo: Kevin Ishioka)
Escape vessel in place, dressed and ready for filming. (Photo: Kevin Ishioka)
Filming inside the escape vessel for the scene where Adam Driver’s and Ariana Greenblatt’s characters attempt to launch to escape the destruction of Earth. (Photo: Kevin Ishioka)
“I’m thinking, ‘Oh, that changes everything,’” Scroggins recalls. “There goes all the physics, the engineering and everything. Now we’re going to need to have humans in this thing, and they’re going to be encapsulated while it’s being spun around in all kinds of different directions. Now we’re going to have to have an engineer sign off on this.”
Scroggins ended up building three chairs composed of water-jetted steel, fortified by 10-lb. foam. “We vacuum-formed the parts and put the fiberglass in there and created the molds to hold it up,” Scroggins says. “The original seat was built for the straight up-and-down motion, not for the gyro arrangement. So that one seat you see when Adam Driver is in the cockpit seat and he’s sitting there flying the ship – that’s the seat.”
Art render of the escape vessel section that Scroggins Aviation was contracted to build. (Photo: Kevin Ishioka)
Inside the escape vessel cockpit, dressed and ready for filming. (Photo: Kevin Ishioka)
“We created a bible for the entire build. I knew we were going to have cast members inside the thing, and they’re going to roll it on a rotisserie, so I wanted to make sure that all the different materials we were using would be OK.”
—Doug Scroggins, Founder/CEO, Scroggins Aviation Mockup & Effects
The two additional chairs for the escape vessel were solid as well, but also had to be light enough so they could be placed on a device that could spin and, in Scroggins’ words, “shake the bejesus out of them.”
The company did indeed end up consulting with an engineering firm to approve the finished product, something that is not necessarily an industry standard. “We created a bible for the entire build,” Scroggins says. “I knew we were going to have cast members inside the thing, and they’re going to roll it on a rotisserie, so I wanted to make sure that all the different materials we were using would be OK.”
Build for the airlock doors, upper inner section, steel and wood inner structure with ABS vacuum-formed panels on the exterior.
Build for the airlock doors, lower split-door assembly, steel and wood inner structure with ABS vacuum-formed panels on the exterior.
Final assembly of the doors added to the set. Three doors were built for the spacecraft. (Photo: Kevin Ishioka)
“We had [the escape vessel] completely covered, and we shipped it under secrecy. We didn’t want anyone to eyeball it or take any pictures.”
—Doug Scroggins, Founder/CEO, Scroggins Aviation Mockup & Effects
The build took place while much of the industry was in lockdown at the height of the COVID pandemic. In true across-the-globe collaborative fashion, Ishioka was coordinating the film’s production design from Japan while Scroggins was in Canada working on another movie, and the bulk of his eight-person team was at Scroggins Aviation’s shop in Las Vegas. The company also operates an overflow facility in Mojave, California.
Airlock door added to the set. (Photo: Kevin Ishioka)
When the escape vessel and seats were finished, they were loaded onto a flatbed truck that transported them to the production base in New Orleans in time for the start of production. “We had it completely covered, and we shipped it under secrecy,” Scroggins reveals. “We didn’t want anyone to eyeball it or take any pictures.”
Three seats were built for the film, one for the mothership and two for the escape vessel.
The seat in the mothership cockpit. (Photo: Kevin Ishioka)
Doug Scroggins with the escape vessel.
While the escape vessel sequences late in the film showcase the work of Scroggins Aviation most vividly, movie-goers can also see the company’s handiwork within the mothership early in 65. With the screen panels, touch screens, control panels, airlock doors and other cockpit devices it constructed and supplied, the Scroggins team made sure that no movie-goer would ever accuse this science fiction film of looking low-tech.
After seeing how everything turned out onscreen in 65, Scroggins declared himself both satisfied and hungry for his firm to take on another adventure. “The excitement of seeing the outcome just literally made the hairs on my arm start lifting up,” he says. “This was an original film with a good premise, and overall, on the visual effects side, it looked like they nailed it.”
By CHRIS McGOWAN
Images courtesy of eOne, Paramount Pictures and Hasbro, Inc.
The Owlbear is a creature with the body of a bear and the head and feathers of an owl. It was important to find a synergy between the two animals to create something cohesive by blending them seamlessly.
As we approach the 50th anniversary of Dungeons and Dragons in 2024, the most renowned RPG of them all, Paramount Pictures and eOne are releasing a new cinematic interpretation – Dungeons & Dragons: Honor Among Thieves. The movie strives to be true to the original lore and playful spirit of the D&D tabletop game, which was designed by Gary Gygax and Dave Arneson and published in 1974, giving birth to the modern role-playing game industry and gaining tens of millions of fans in subsequent years. D&D’s roots lie in fantasy literature, including the works of J.R.R. Tolkien, and in miniature war games. The game has inspired novels, video games, podcasts, an animated series from 1983-1985, three live-action feature films from 2000 to 2012, and an upcoming eight-episode series from Paramount and eOne. It has also been referenced everywhere in entertainment from The Big Bang Theory to Stranger Things. It has been published by Wizards of the Coast (now a Hasbro subsidiary) since 1997.
The writers and directors of the movie “were clearly big D&D fans and players, and Wizards of the Coast – the guardians of the D&D universe – were heavily involved both in what happens in the story and in giving my VFX team support and advice as we came up with ideas and looks for the magic,” comments Ben Snow, ILM Production Overall VFX Supervisor.
Simon the Sorcerer (Justice Smith), Edgin the Bard (Chris Pine), the druid Doric (Sophia Lillis) and Holga the Barbarian (Michelle Rodriguez) in Dungeons and Dragons: Honor Among Thieves.
In Dungeons & Dragons: Honor Among Thieves, a charismatic thief and a band of unlikely adventurers attempt to retrieve a lost relic, but things go awry when they run afoul of some sinister characters and ferocious beasts. John Francis Daley and Jonathan Goldstein directed while Michael Gilio and Daley wrote the screenplay. Cast members included Chris Pine (Edgin the Bard), Michelle Rodriguez (Holga the Barbarian), Justice Smith (Simon the Sorcerer), Sophia Lillis (Doric, a Tiefling Druid) and Hugh Grant (Forge Fitzwilliam the Rogue). Legacy Effects took care of many creature practical effects, Ray Chan the production design and Barry Peterson the cinematography. ILM and MPC handled the VFX with help from Crafty Apes, Day For Nite and Clear Angle Studios.
“We sent a plates and environment capture team [to Iceland] to shoot stills and helicopter footage, including a great shot of an active volcano that you see in the film. Our characters and vehicles and sets were added in VFX by MPC using stills and filmed material and then enhanced. We based everything on the real photography and tried to use as much of it intact as possible.”
—Ben Snow, Production Overall VFX Supervisor, ILM
Simon and Edgin ponder their next move. Dungeons and Dragons: Honor Among Thieves was based on the famed Dungeons and Dragons role-playing game created by Gary Gygax and Dave Arneson.
ILM was tasked with creating exotic D&D locations and bringing orcs, an Owlbear, various types of dragons, a Mimic Monster, a Gelatinous Cube and a Displacer Beast to life, among other duties. “Our creatures and magic spells were designed and evolved within the spirit of the D&D world,” says David Dally, ILM Visual Effects Supervisor.
Some days, the visual effects team were totally immersed in the D&D universe, during and after work. “The VFX on-set crew set up a D&D game,” Snow comments. “Charlie Rock, our VFX Coordinator, is an accomplished Dungeonmaster, and we had some enjoyable evenings gaming as a team. It helped us understand and connect, particularly those like me who hadn’t played in a few years.”
Simon examines a hither-thither staff as Doric, Edgin and Holga stand by. The D&D franchise now includes the original tabletop game, novels, video games, podcasts, an animated series and four feature films.
“Dungeons and Dragons: Honor Among Thieves was a great VFX project due to the variety of work,” Snow says. “[It has] a rich universe of creatures, places and magic that the writers and directors were able to draw upon and that we could reference for the visuals. All the fantastical creatures were from the game and other D&D lore, but the directors were able to put their own unique spin on them. The directors had high standards of realism and we tried to shoot as much as possible. When it came to the digital work, they understood why and where we needed to use computer graphics, and our VFX team was given the time to get the passes and references we needed.”
The pandemic had a big impact on the whole production and visual effects processes, according to Snow. For example, the original plan was to shoot the opening in Iceland, but due to COVID and logistics it wasn’t possible to take the actors. Snow explains, “We sent a plates and environment capture team there to shoot stills and helicopter footage, including a great shot of an active volcano that you see in the film. Our characters and vehicles and sets were added in VFX by MPC using stills and filmed material and then enhanced. We based everything on the real photography and tried to use as much of it intact as possible.”
Doric (Sophia Lillis), with horns and pointy ears, is a shape-shifting Tiefling Druid who kept VFX artists busy by transforming into the fearsome Owlbear and other creatures during the course of the film.
In addition to Iceland, filming took place in Northern Ireland in Belfast’s Titanic Studios and on location. “The quarantines affected where we could shoot,” Snow says. In one sequence, Doric escapes from Neverwinter castle. “It was a complex shot that MPC executed with collaboration from ILM. We’d originally planned a section to be shot on location, but the COVID lockdowns made that difficult, so we had a local scanning crew go out and scan the location based on our planning.” A CG version of the shot location was created.
Many practical creatures were made for the film, several of which received significant digital augmentation, according to Todd Vaziri, ILM Compositing Supervisor. “The giant fish creature that the villagers caught in their net, for example, was a giant puppet from Legacy Effects. ILM animated and rendered articulated eyeballs for the fish to give more life to the creature. In addition, ILM compositors warped and articulated the jawline and fish lips of the creature, as well as adding sheeting water glistening off the fish, and subtle splashes and drops of water coming off the creature.”
Holga, Edgin, Simon and Doric contend with a Gelatinous Cube. The cube’s refraction, reflection, fogginess and jiggle were all carefully art-directed.
The Owlbear was the most popular creature among the VFX crew. It was also a tricky creature to nail down in animation, and finding the best blend of owl and bear mannerisms took some experimentation, according to Shawn Kelly, ILM Associate Animation Supervisor. “She’s a creature with the body of a bear and the head and feathers of an owl, so it was important to find a synergy between the two animals to create something that felt like a cohesive whole. We layered owl-like head movements on top of the bear motion and treated her beak and face as a blend between the two creatures. Replacing the typical fur groom for a quadruped with layered fur and owl feathers was a real challenge.”
“[The Owlbear] is a creature with the body of a bear and the head and feathers of an owl, so it was important to find a synergy between the two animals to create something that felt like a cohesive whole. We layered owl-like head movements on top of the bear motion and treated her beak and face as a blend between the two creatures. Replacing the typical fur groom for a quadruped with layered fur and owl feathers was a real challenge.”
—Shawn Kelly, Associate Animation Supervisor, ILM
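Conceptually, the layering Kelly describes is a weighted blend between two motion sources, with the weight shifting by body region. The toy function below is an illustration of the idea, not ILM’s rig – real rigs blend quaternions, not per-channel Euler values:

def blend_motion(bear_curve, owl_curve, owl_weight):
    """Linear blend of two animation curves; owl_weight rises toward the head."""
    return [(1.0 - owl_weight) * b + owl_weight * o
            for b, o in zip(bear_curve, owl_curve)]

# The body leans bear; the head and beak lean owl.
bear = [0.0, 2.0, 4.0]   # slower, bear-like rotation samples, degrees
owl = [0.0, 8.0, 16.0]   # sharper, owl-like head snaps
spine = blend_motion(bear, owl, owl_weight=0.2)  # [0.0, 3.2, 6.4]
head = blend_motion(bear, owl, owl_weight=0.8)   # [0.0, 6.8, 13.6]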
The integration of Owlbear into the photography posed a classic challenge for visual effects. “How do we depict the power and speed of this large, dynamic creature without making the creature feel light and synthetic?” Vaziri recalls. “On the compositing side, we were very careful to add scale cues to help support the massive power of the Owlbear, including the displacement of dirt clods around Owlbear’s paws when she takes massive steps, and subtle dust and particulates when Owlbear went on the attack. The goal was always to make Owlbear feel powerful, dangerous and menacing, never light and floaty.”
Xenk Yendar (Regé-Jean Page), Holga, Edgin, Doric and Simon find themselves in a field of skeletons.
Patrick Gagné, ILM Creature Model Supervisor, comments, “The Owlbear was a great challenge due to her multiple transformations. We needed to make sure the geometry of the base mesh was also suitable for a horse, a humanoid or an owl, for instance. The shape-shifting also needed to be split into parts for the animation department so they could achieve the effect needed. Toes becoming hooves, for example, as well as other face shapes for emotion. All of this was tremendously helped by the lookdev department.”
Snow adds, “The Owlbear was initially the way we first meet the Tiefling Doric and was going to be in one sequence, but we all fell in love with the creature and brought her back later in the film. Dungeons and Dragons has a whole host of interesting creatures from the lore.” One of them, the Gelatinous Cube, was both a conceptual and shooting challenge. “Gelatinous cubes have been featured in other movies and TV shows, so we wanted to differentiate ours and make it feel more grounded and believable,” Snow says.
A Black Dragon wreaks havoc by spewing acid that interacted with both characters and the ground.
The Gold Dragon assumes the form of a sculpture and then comes alive with an awakening spell.
Vaziri continues, “We had to balance the realistic physics of such an object if it existed in real life and the visual storytelling requirements of the scene, which were frequently at odds with one another. Specifically, the refraction, reflection, fogginess and jiggle that would exist in a giant cube of gelatin were all heavily art-directed per shot to make sure the audience could clearly see our characters inside the cube and understand their motivations and strategy on escaping from the cube.”
“The Owlbear was a great challenge due to her multiple transformations. We needed to make sure the geometry of the base mesh was also suitable for a horse, a humanoid or an owl, for instance. The shape-shifting also needed to be split into parts for the animation department so they could achieve the effect needed. Toes becoming hooves for example, as well as other face shapes for emotion. All of this was tremendously helped by the lookdev department.”
—Patrick Gagné, Creature Model Supervisor, ILM
A portly Red Dragon, a popular D&D creature, hopes for a tasty human snack. ILM, MPC, Legacy Effects, Visual Development Artist Wes Burt and Day For Nite all helped bring the dragon to life.
The Displacer Beast was another strange threat straight out of D&D lore. Dally describes it as “a big cat-like creature with six legs and tentacle projectors.” He comments, “It was great to work on such an iconic creature. The animation, asset and lighting teams did a great job bringing the creature to life with its performance and realism. The comp and FX team worked together developing the projected beast’s disturbance effect.” Snow adds, “For the shoot, we had stuntmen in black costumes chasing and interacting with the actors for designing the shots and to give everyone something to react to. We shot references of black fur the art department [had] sourced. The ILM team made the final CG version.”
Straight out of D&D lore, the Displacer Beast is a big cat-like creature with six legs and tentacle projectors. The ILM team made the final CG version.
“We had to balance the realistic physics of such an object [as the cube] if it existed in real life and the visual storytelling requirements of the scene, which were frequently at odds with one another. Specifically, the refraction, reflection, fogginess and jiggle that would exist in a giant cube of gelatin were all heavily art-directed per shot to make sure the audience could clearly see our characters inside the cube and understand their motivations and strategy on escaping from the cube.”
— Todd Vaziri, Compositing Supervisor, ILM
Dally enjoyed the battle sequence of the dueling hands. “The challenge [was] to get both hands to read as more physical and present in the environment, and not to be too magical. The Earthen Hand form is made from its immediate surrounding environment; it would tear up the ground as it moves about and returns the rock and debris to the ground once it’s passed. This was a really exciting challenge, having the hand interact with all of its immediate environment, attracting, forming and collapsing whilst battling the Arcane Hand.”
The giant fish creature caught in the villagers’ net was a giant puppet created by Legacy Effects. ILM animated and rendered articulated eyeballs to give the creature more life.
The Gold Dragon is at first an unassuming sculpture in the courtyard, which comes alive with an awakening spell and battles the heroes, including the Owlbear. Dally explains, “Upon waking, the dragon animation has a staccato/stop-frame quality about it, which is shaken off once it’s fully alive. Throughout the sequence we maintained some of its stone-like rigid qualities, with its scaled stone armor proving a challenge – we had to ensure it didn’t stretch like skin and maintained its stone armor.”
The Black Dragon Rakor was created by MPC and “hewed very closely to the D&D lore version,” Snow says. “He’s mostly featured in a flashback to a battle 100 years before the film. We decided that the fact he spewed acid made him a unique take on the dragon. Special Effects Supervisor Sam Conway’s team came up with an initial look for how the acid would interact with the characters and ground.”
This Mimic has transformed itself into a treasure chest you really don’t want to open. The D&D universe is rich in creatures, places and magic that the film drew from, and the directors were able to add their own unique spin.
To create a spin on the classic Red Dragon from D&D lore, the directors proposed the idea that the dragon be incredibly overweight but still pose a big threat to the team. Snow explains, “For the shoot, the special effects team made a variety of rigs for dragon interaction and large rigs for moving set pieces. Legacy Effects developed the model based on some concepts by Wes Burt [Visual Development Artist]. Our previz team and Day For Nite helped develop the design further. Then MPC added a ton of detail and refinement when they built the asset.”
The portal heist sequence involved stealing a painting containing one side of a portal. Action was shot through both sides of portals in different locations. Snow reveals, “Some of the sets were an elaborate collaboration between the art department, camera department, visual effects, stunts and special effects to allow us to capture the shots. It was crazy, and there was a lot of action during the shoot and up to the point where editorial took the elements and combined them in Avid to show that the plans were working. Everyone breathed a sigh of relief. MPC then took these elements and did some amazing compositing and a fair amount of reprojection, background cleanup and reconstruction to make it all blend together.”
Edgin interacts with a skeletal corpse. Dungeons & Dragons: Honor Among Thieves seeks to continue the playful spirit of the original tabletop role-playing game.
The Doric escape sequence was “a big undertaking for a few minutes of film,” according to Snow. “The directors wanted a single shot following Doric through Neverwinter castle, out over the castle battlements into a cottage and through the streets of the city, all the way shape-shifting between different animals and her human form as all hell breaks loose around her. Day For Nite provided previz building off work started by The Third Floor, and VFX used that to plan the shoot. We worked with the art department to work out the transition points between the different locations, trying to catch as much in-camera as possible, and using reprojections and blends to provide the background for our actors and CG. The team then created the creatures, worked out the transformations and blended the plates.”
For the Ethereal Plane sequence, the challenge was to come up with the look of a magical sequence grounded in photorealism. Vaziri explains, “The slow, elegant destruction of the beach environment while Simon [the Sorcerer] wears the helmet had to happen slowly over the course of the sequence. We started by subtly moving sand around, growing grass, disturbing and stretching the rocks and mountains all around Simon and the Wizard. We wanted the water to displace by having orbs of seawater rise and become floating blobs. By the end of the sequence the beach is mostly obliterated, with pebbles and rocks floating and the world mushed together in a symphony of mountains, grass and water. When the wizard turns off the spell and the world becomes real again, it was a blast to collapse all of that distortion back into the real world. It’s a really fun moment, and hopefully it will get some laughs.”
Cast and crew on the set of Dungeons and Dragons: Honor Among Thieves. At far left is Hugh Grant, who plays Forge Fitzwilliam the Rogue.
The production wrapped in August 2021. Snow comments, “Post-production was mostly remote. I would go into ILM to look at shots projected on the big screen each week with one or two of the production team and others remote. I was able to look at shots not just from ILM but also our other vendors MPC and Crafty Apes there – the vendors would send us all the files. We’d do most of our director reviews remotely, but once we started finishing shots I’d fly down to L.A. every couple of weeks to look at shots with them at CO3 [Company 3 post-production facility].”
Vaziri concludes, “It was refreshing to work on a film that has obvious fantastic, otherworldly elements like wizards, magic, creatures and castles, [and keep] it grounded within a recognizable reality that we, the audience, can understand, but also to have fun with it. The movie is so witty and will get a lot of laughs. It’s a really fun film, and it was a thrill to be able to create our visual effects in a movie that has a little twinkle in its eye.”
The impact and benefits of a diverse workforce are immense. We employ artists and a production team from 100+ countries who speak more than 30 languages, and I’m proud that our team has more than 50 percent female leads. In building our team, I’m always looking for exceptionally qualified professionals, whose personal vision aligns with ours and can bring new insights to the work – because the people are our best asset. And every day that we learn something new from one another and expand our worldview, I’m truly inspired.
In my role, I ultimately want to hire the best person for the job, but I also see things through the lens of an African American man… so I created an environment that is diverse and open to new voices. I spend a lot of time talking with students about what I do and how I got here because young people need to see people who look like them and come from the same background. If I can inspire the next generation who didn’t think they could pursue this line of work, I have a responsibility to do that. I believe that cultivating different perspectives allows us to learn from each other culturally and artistically – and benefits the art that we create together.
Being the first woman to be named VFX Supervisor at Industrial Light & Magic was a great privilege and a huge responsibility. I felt the weight of representing all women, all minority women, and the need to be not just good – but excellent. I’ve seen how implicit bias manifests, the looks of disbelief from crew and clients that I could be a supervisor…the unknown quantity that many people find hard to accept, because there are just so few of us. It is harder to get work as a female supervisor, but the obstacles have increased my resolve to do well in this business and share what I’ve learned with those coming up next.
I believe that diversity in VFX is a business imperative. We work globally and our work is consumed globally. Fundamentally, the people who create the work should be as diverse as our consumers. To those who say you have to sacrifice talent for diversity, I say absolutely not. You just need to find the right people – they are definitely out there. We all want to create fantastically beautiful visual effects and work with the most talented people in the world. That’s a given. But, if we could do that and increase the number of women in creative roles, that would be a huge value impact for all of us.
By OLIVER WEBB
Images courtesy of Paramount Pictures.
The team at ILM was really methodical about their approach to creating a realistic elephant for the film.
Damien Chazelle’s Babylon is an epic tale of the excessive and extravagant antics of numerous characters in 1920s Hollywood. Industrial Light & Magic provided the bulk of the visual effects for Babylon, creating 377 visual effects shots for the film. “The creation of that insane world fell into visual effects, and that is where we came in,” says Visual Effects Supervisor Jay Cooper. “We had done another movie for Paramount, which is how we found our way onto this project. We started talking with Damien pre-pandemic.”
“The most important thing was casting an elephant that was quite real. … We built this elephant and cast it from Billy the elephant who is at the L.A. Zoo. We went down there and took some photographs and did some motion studies… We put that in front of [director] Damien [Chazelle] and he gave us notes about trying to make sure that it fit into his movie. … This idea that during this insanely raucous time, one of the gags of this Hollywood party was to bring in an elephant. The elephant is used to create effect when they use it as a distraction during the party scene when they need to sneak an actress out of the back door. That’s the jumping off point in this larger story and helping Damien tell this really large tale.”
—Jay Cooper, Visual Effects Supervisor
Katherine Farrar Bluff was Senior Visual Effects Producer on the project. “I was the Senior Producer on Babylon along with producer Keith Anthony-Brown, and we had a full production team on at our San Francisco studio helping to manage the work,” Bluff says. “As the project qualified for a California film tax credit, the work was all done in our San Francisco studio, whereas we typically end up partnering with our global studios on projects. We did the majority of the VFX work, but we also partnered closely with production’s in-house artist, Johnny Weckworth, who did a ton of work.”
Jimmy Ortega as an Elephant Wrangler.
According to Cooper, one of Chazelle’s biggest concerns was the photoreal quality of the CG creatures. “He didn’t want the audience to be taken [out of the film] by that in any way. That was his primary concern. He had questions about approach and how to shoot things, but primarily he was reaching out because he was really concerned about getting a good-looking elephant,” he says.
In terms of creative references, Cooper notes that Chazelle had an extensive deck of images that were helpful for evoking time and mood, which he shared with the VFX team, as well as a large list of silent pictures. “Some of those informed some of our design decisions along the way, but primarily the most important thing was casting an elephant that was quite real,” Cooper details. “My feeling was rather than us trying to make an amalgam of different animals that we sourced, we should try to hone in on one thing that we felt that we could really match, and that became our compass for making decisions about texture and lighting and sort of proportion and things like that.”
Diego Calva as Manny Torres.
“We put our hearts and souls into making this elephant spectacular. We shot and gathered extensive references. Even the breed of elephant was really important; we had to determine if it was an African or Asian elephant. We were looking all around Northern California for sanctuaries to try and go and shoot the reference. The team was really methodical about how they planned this build-out from the get-go. To see how successful it is in this party sequence, where it fits seamlessly in there, I think that was such an amazing payoff.”
—Katherine Farrar Bluff, Senior Visual Effects Producer
Continues Cooper, “We built this elephant and cast it from Billy the elephant who is at the L.A. Zoo. We went down there and took some photographs and did some motion studies and all those sorts of things. We put that in front of Damien and he gave us notes about trying to make sure that it fit into his movie. Maybe a bit sadder, more juvenile, for example. We took the tusks off it. This idea that during this insanely raucous time, one of the gags of this Hollywood party was to bring in an elephant. The elephant is used to create effect when they use it as a distraction during the party scene when they need to sneak an actress out of the back door. That’s the jumping off point in this larger story and helping Damien tell this really large tale,” Cooper reveals.
Industrial Light & Magic created 377 visual effects shots for the film.
“We put our hearts and souls into making this elephant spectacular,” Bluff adds. “We shot and gathered extensive references. Even the breed of elephant was really important; we had to determine if it was an African or Asian elephant. We were looking all around Northern California for sanctuaries to try and go and shoot the reference. The team was really methodical about how they planned this build-out from the get-go. To see how successful it is in this party sequence where it fits seamlessly in there, that was such an amazing payoff.”
700 extras were required for the battle scene, and additional CG fighters were digitally added by ILM.
The crew wanted to shoot in locations that were tied to the story.
“We shot in The Orpheum Theatre [in L.A.] that was built for the City Lights premiere, which I thought was amazing. It’s an amazing theater. We used that for the interior for where they show The Jazz Singer. In that theater we did this really large crane shot where we are stitching together multiple plates for the crowd. Of course, we have to add The Jazz Singer onto the screen. Some of the complicated stitching was sometimes tough, but it was really exciting to be in places that existed at the same time and to be shooting in locations that felt like they were really tied to this story.”
—Jay Cooper, Visual Effects Supervisor
“One of the things we decided early on was that we were going to build a puppet, and the puppet was basically four people inside of a gray cloth with a proxy head for the elephant,” Cooper explains. “That would be the thing that we walked through the hotel that later became the interior location for the Wallach party. Doing that in terms of approach was fantastic because all of the actors have a great understanding in terms of physicality. Damien was able to direct the puppeteers to give some level of performance. It wasn’t everything that you’d expect from an elephant, but at least in terms of scale, position and timing, to get that to work with our camera, and it paid off brilliantly.”
Motion picture magazines from the time period were also an important part of research.
“We also did a lot of really beautiful seamless 2D work and some fantastic matte painting work. Enhancing the Wallach mansion, for example, was a big design process with Damien. He was very particular about what it was going to look like and the kind of a silhouette it had against the sunset sky, but it turned out beautifully and sits really nicely in that sequence. There was lots of de-modernization work that we did throughout that was also really successful,” Bluff says.
Discussing the collaboration with Cinematographer Linus Sandgren, Cooper explains that it was a working relationship on set. “He was asking us questions to make sure that we had what we needed. It was in that vein when he gave us passes and elements when we requested them. Damien didn’t really want to change process for visual effects. Almost as a matter of process, it was our role to fit into a production that was very traditional in its construction. There is no greenscreen or bluescreen work in the strict sense in this movie. At one point, we wrapped the buck with greenscreen just in order to do some matting so we could get our CG elephant to work. There’s no greenscreen shoot per se, no Stagecraft shoot. Damien and Linus wanted to go to real places and locations, and they wanted to have a very grounded and real feeling for the movie.”
The crew shot in The Orpheum Theatre in L.A. that was built for the City Lights premiere.
Another particularly challenging sequence to capture was the battle scene. “There is a massive battle between 700 extras on the day, which we helped fill out with more CG fighters. That was really exciting,” Cooper says. “There was a lot of interesting camera work, and we were able to help seam together multiple plates. This was a location in L.A., and there were elements of this environment which we had to clean up. There’s an undercurrent of that kind of thing across the movie, removing things that weren’t period appropriate or got in the way of the story. In this case, we cleaned up the grass and removed things that weren’t meant to be there. There is a spear that flies through the air that was on a cable, and removing the cable and re-creating the tent were required. A lot of it goes to extending and supporting the style that Damien has about long takes and swish pans, and almost crafting the movie like it’s set to music, where there is a rhythm to it that he is very specific about.”
“One of the things we decided early on was that we were going to build a puppet, and the puppet was basically four people inside of a gray cloth with a proxy head for the elephant. That would be the thing that we walked through the hotel that later became the interior location for the Wallach party. Doing that in terms of approach was fantastic because all of the actors have a great understanding in terms of physicality. Damien was able to direct the puppeteers to give some level of performance. It wasn’t everything that you’d expect from an elephant, but at least in terms of scale, position and timing, to get that to work with our camera, and it paid off brilliantly.”
—Jay Cooper, Visual Effects Supervisor
The fictitious Kinoscope Pictures stands in for Paramount Pictures.
Concludes Cooper, “Obviously, L.A. is not what it was in 1928 or 1932. We shot in the Orpheum Theatre that was built for the City Lights premiere, which I thought was amazing. It’s an amazing theater. We used that for the interior for where they show The Jazz Singer. In that theater we did this really large crane shot where we are stitching together multiple plates for the crowd. Of course, we have to add The Jazz Singer onto the screen. Some of the complicated stitching was sometimes tough, but it was really exciting to be in places that existed at the same time and to be shooting in locations that felt like they were really tied to this story.”
By TREVOR HOGG
Images courtesy of Marvel Entertainment and Disney+.
Rough drafts of the facial expressions of Lunella Lafayette.
In the world of animation, Marvel Studios seems to have had a lot of fun with experimentation, whether it be the multiverse chaos of the What If…? anthology, which introduced zombies into the MCU, or Marvel’s Moon Girl and Devil Dinosaur, based on the comic book by Brandon Montclare, Amy Reeder and Natasha Bustos, in which 13-year-old Lunella Lafayette partners with a 10-ton T-Rex from another dimension to battle criminals and supervillains pilfering and threatening her Lower East Side neighborhood in New York City. The Disney+ series, executive produced by Laurence Fishburne, Helen Sugland and Steve Loter, opens with a double-sized 44-minute pilot, while the rest of the 16 episodes run 22 minutes each.
Art direction samples for the various environments with color scripts playing a major role in establishing the mood and tone.
“Our Supervising Director, Ben Juwono, with show designers Sean Jimenez, Chris Whittier and Jose Lopez, all got together and were able to pursue something unique and do things they always wanted to do in the animation industry but never had the opportunity to do,” explains Executive Producer Steve Loter, who is originally from Brooklyn. “I was in New York City during the height of the graffiti art scene, Andy Warhol and Jean-Michel Basquiat’s street art; those were a huge inspiration for me. Sean is into all types of art: pop, underground and New York-specific street murals. It is a pen-and-ink-style drawing because we wanted to do something as kinetic as Spider-Man: Into the Spider-Verse; however, going with something more hand-drawn, and pen and ink with spotted blacks, felt like a different direction to go, and that was its own identity.”
“One of the biggest challenges with the show was getting this high quality of craftsmanship and drawing and still have it move well. We balanced our animation styles to be immediate in places where we wanted to keep energy up, so we can save time and budget for when we want to get flowing and have lots of in-betweens to say either the action is cool here or we need to do moments where the acting is more high level and the characters are feeling grounded and real. It’s a balance of finding the contrast between those two and peppering them throughout an episode.”
—Kat Kosmala, Animation Supervisor
Hand-drawn effects for the villain of the pilot episode who is able to absorb and discharge electricity.
A successful method to speed through exposition and dialogue in a visual way was by using graphic-design icons and symbols.
Marvel’s Moon Girl and Devil Dinosaur is not trying for realism. “One of the things that I love about these designs is that they have a modern blend where you have structure and anatomy so the characters can turn and move dimensionally, but you also have that mixed with flat graphic elements so you can do pushed expressions and things that go far away from structure,” states Animation Supervisor Kat Kosmala, who worked with the team at Flying Bark Productions. “One of the biggest challenges with the show was getting this high quality of craftsmanship and drawing and still have it move well. We balanced our animation styles to be immediate in places where we wanted to keep energy up so we can save time and budget for when we want to get flowing and have lots of in-betweens to say either the action is cool here or we need to do moments where the acting is more high level and the characters are feeling grounded and real. It’s a balance of finding the contrast between those two and peppering them throughout an episode.”
An example of the color palette for a mixed-tape sequence, which is treated differently than the rest of the show.
“[O]ne style is not enough for this show, apparently! We knew that music was going to be an important element to the show early on and that each episode would have a music focus sequence, usually the climax of an episode, an action sequence or something along those lines. It gives animators an opportunity to expand the vocabulary of animation because all of the mixed tapes are so different from each other. Each one is based on the theme and mood of the song it’s trying to display.”
—Steve Loter, Executive Producer
Graphic design icons and symbols like dollar signs or hearts appear in comic book speech bubbles and the goggles of Moon Girl. “The iconography in cartoons has been around since the 1930s when characters would talk and there would be little lines coming out of their mouth to indicate audio. But we’re pulling on a lot of comic book sensibilities bringing those graphics in,” Kosmala notes. “One of the cool things about it is, this show uses the visual medium. It’s not a cartoon where it’s just talking heads and you do all of the story through dialogue. There are so many visual shortcuts. The emojis in the show let us speed through parts of the story that would take a lot of time with exposition or dialogue to get through and spend more where we want to. It’s a fast-paced show.”
An authentic approach was taken when depicting New York City.
To heighten fights with villains, a different animation style is used to create what is called a “Mixed-Tape Sequence.” “That’s because one style is not enough for this show, apparently!” Loter laughs. “We knew that music was going to be an important element to the show early on and that each episode would have a music focus sequence, usually the climax of an episode, an action sequence or something along those lines. It gives animators an opportunity to expand the vocabulary of animation because all of the mixed tapes are so different from each other. Each one is based on the theme and mood of the song it’s trying to display.” Kosmala loves animating to music. “Music is art and time, and animation is art and time. Neither of these things are static,” Kosmala adds.
Maintaining the desired pacing while keeping Devil Dinosaur feeling heavy and enormous was a tricky balancing act.
As for the visual style, Kosmala observes, “The colors get intense. We drop details so that the characters get more animatable and we can be freer with their movements. The initial challenge was keeping up the energy. At the same time, we have a big dinosaur that has to feel heavy and weighty, so he can’t necessarily pop pose to pose. For a moment that is melodramatic and comedic, we’re going to get simple, superficial and fun with the movements and timing. When characters are dealing with emotions or situations that are heavier, you’re going to have some fallout. We slow the animation down and get more weighted and natural. It’s paralleling what is happening in the story the same way that color palettes signal how you should be feeling.”
“For Devil, we absolutely started with [comic book artist] Jack Kirby and worked to get the design to a place where it was going to be something that was animatable and fit in the whole design aesthetic we wanted to establish. The voice actor for Devil Dinosaur, Fred Tatasciore, wanted lines of dialogue written in the script so his grunts and groans would translate that into something unique and more anchored to the emotion of the scene.”
—Steve Loter, Executive Producer
Emphasis was placed on traditional 2D animation methods, as the goal was to have a visual aesthetic that felt hand-drawn rather than simulated by a computer.
Devil Dinosaur speaks through grunts and groans rather than words. “I did some vocalization for Devil for when he first says his name, and that was a fun thing to do and one of the first things that was animated,” Kosmala remarks. “When you have characters that have to communicate through pantomime, it’s interesting because you have to get creative and expressive with the movement.” A legendary comic book artist was responsible for the original design of the red T-Rex. “For Devil, we absolutely started with Jack Kirby and worked to get the design to a place where it was going to be something that was animatable and fit in the whole design aesthetic we wanted to establish,” Loter states. “The voice actor for Devil Dinosaur, Fred Tatasciore, wanted lines of dialogue written in the script so his grunts and groans would translate that into something unique and more anchored to the emotion of the scene.”
Marvel’s Moon Girl and Devil Dinosaur does not try for realism with its animation style that makes use of a vibrant color palette.
Not every sequence is fast-paced, as the animation slows down for the moments that are supposed to have an emotional weight to them.
Harmony was the primary animation software, while the minimal 3D work was done in Maya. “3D was used for vehicles and things that need to be perfectly formed when turning,” Kosmala reveals. “With a lot of 3D and even 2D effects that are generated, you get away from this feeling of it being hand-drawn and handmade – that’s not in the spirit of the show. We tried to stick to traditional methods. There are simple tricks like taking different textures and panning them across each other, and that simplicity is part of what makes the visual effects work charming. It’s not overdone or overworked. It retains that hand-drawn quality so it blends in seamlessly with the rest of the show.” Photographic effects were avoided, Loter points out. “Everything is done in color so things that feel like they’re glowing are just the intensity of a color against another color to create a glow effect.” Kosmala adds, “Because we don’t want that evenness when it’s generated perfectly via a program. You want it to feel a little imperfect in places.”
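The texture-panning trick Kosmala describes is simple enough to demonstrate outside any studio pipeline. The toy Python sketch below uses the Pillow imaging library; the two same-sized, tileable grayscale noise textures ('noise_a.png' and 'noise_b.png') are hypothetical stand-ins, and nothing here is taken from Flying Bark's actual setup. It scrolls two layers at different speeds and multiplies them, producing the kind of shimmering, slightly imperfect movement she is talking about:

    # Toy illustration of panning two textures across each other.
    # Input files are hypothetical tileable grayscale textures of
    # identical size; this is not the show's actual pipeline.
    from PIL import Image, ImageChops

    layer_a = Image.open("noise_a.png").convert("L")
    layer_b = Image.open("noise_b.png").convert("L")

    frames = []
    for frame in range(24):  # two seconds at 12 fps
        a = ImageChops.offset(layer_a, frame * 3, 0)   # pan right, faster
        b = ImageChops.offset(layer_b, -frame, 0)      # pan left, slower
        frames.append(ImageChops.multiply(a, b))       # combined shimmer

    frames[0].save("shimmer.gif", save_all=True, append_images=frames[1:],
                   duration=83, loop=0)

Because the offsets wrap around, the textures loop endlessly, and the unevenness of the hand-drawn noise is what keeps the result from feeling computer-perfect.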
It was important to depict the diverse cultures and people who inhabit the Lower East Side of New York City.
“With a lot of 3D and even 2D effects that are generated, you get away from this feeling of it being hand-drawn and handmade – that’s not in the spirit of the show. We tried to stick to traditional methods. There are simple tricks like taking different textures and panning them across each other, and that simplicity is part of what makes the visual effects work charming. It’s not overdone or overworked. It retains that hand-drawn quality so it blends in seamlessly with the rest of the show.”
—Kat Kosmala, Animation Supervisor
The show captures that moment in time when New York still felt like a vibrant artistic place before gentrification happened. “My parents are still there, and I have to go back to New York now and then, so I have to do this right or I’m not going to be able to go back!” Loter chuckles. “A lot of buildings, streets and architecture that you would see on Lower East Side is accurate to real New York and also captures the community. New York is diverse. That was another advantage I had growing up, living in a community that had so many different beliefs, people and tastes. It felt like such an amazing place to be as an artist, to be a part of all of these various cultures.”
New York City is treated as a character. “We have a big meeting room that has a lineup of all our incidentals [which numbers around 70],” Kosmala states. “It’s a bunch of people who could be used to populate any scene. It’s so heartening to see all different ages, sizes, colors and body types. Everybody is represented. It’s an emotional thing to look at.”
By CHRIS McGOWAN
Guardians of the Galaxy, Vol. 3. VFX: RISE Visual Effects Studios, Framestore, Rodeo FX, Crafty Apes, Wētā FX, Gentle Giant Studios, ILM, Weta Digital and Clear Angle Studios. SFX: Marvel Studios and Legacy Effects. (Image courtesy of Marvel Studios)
Over the last few years, the VFX industry has surged due to an infusion of visual effects in almost all films and series, the expansion of the streamers, a boom in animation, and the growth of video games and immersive formats. This is happening while LED stages and virtual production have been altering filmmaking and post-production processes, bringing them closer together.
At the same time, VFX work has continued to expand across the planet. “The globalization of the VFX business has been happening for a while, but the opportunities for remote working that have been accelerated by the pandemic have been a great enabler of this trend with artists and teams able to collaborate ever more easily and effectively across geographies,” says Namit Malhotra, Chairman and CEO of DNEG.
Streaming platforms Disney+, Apple TV+, HBO Max, Peacock and Paramount+ launched between 2019 and 2021, joining Netflix and Amazon Prime, collectively boosting production. “The strong growth in demand for content driven largely by the streaming companies has opened new avenues in content creation for VFX and animation companies,” Malhotra says. However, he feels the accelerated recent growth of the industry is now becoming more balanced. “The number of productions globally is now being rationalized to the reality of what can be produced. Demand was outstripping supply to such an extent that it was actually becoming unsustainable. What we are seeing now is more of a sensible and sustainable approach to content creation, and it is finding equilibrium – which is a good thing. There is still growth, but it is a lot more structured and sustainable.”
Because of the streamers, “there is a lot more episodic content than there was five years ago,” comments Pixomondo CEO Jonny Slow. “This is not a new factor, and the growth of it may slow down a little in the short term, but I don’t see this trend going away. This is a whole new sub-genre of content. In publishing terms, it’s like the invention of the novel, and it has created millions more viewing hours per week.”
The streamers have generated plenty of filmmaking work in domestic production centers across Europe and Asia, from Oslo to Seoul, which in turn has generated more VFX work around the world. India has become an especially important pole of visual effects with many foreign-owned and locally-owned VFX houses working at full throttle there.
Vantage Market Research forecasts that “the increased demand for advanced quality content among consumers across the globe and the introductions of new technologies related to VFX market by industry players are expected to augment the growth of the VFX market,” and predicts that VFX global market revenue will climb from $26.3 billion in 2021 to $48.9 billion in 2028, growing at a compound annual growth rate (CAGR) of 10.9% during the forecast period.
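For readers who want to sanity-check forecast figures like these, the compound annual growth rate is straightforward to compute. The short Python sketch below is an editorial aside rather than anything from the Vantage report; it shows that the stated 10.9% corresponds to six annual compounding steps between the 2021 and 2028 figures:

    def cagr(start_value, end_value, years):
        # Constant yearly rate that grows start_value into end_value.
        return (end_value / start_value) ** (1 / years) - 1

    # Vantage Market Research figures quoted above: $26.3B (2021) -> $48.9B (2028).
    for years in (6, 7):
        print(f"{years} annual steps: {cagr(26.3, 48.9, years):.1%}")
    # 6 annual steps: 10.9%  <- matches the stated CAGR
    # 7 annual steps: 9.3%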
“VFX is now an integral component of cinematic narrative in film, episodic, commercials and themed entertainment. Due in part to the convergence of gaming workflows, GPU-accelerated computing functions and cloud computing, VFX is increasingly accessible to all levels of complexity and budgets in storytelling,” says Shish Aikat, Global Head of Training at DNEG.
ACQUISITIONS AND EXPANSIONS
The dynamic activity of the VFX business in the last year includes acquisitions and foundings. One of the biggest deals in 2022 was Sony’s purchase of Pixomondo, which has facilities in Toronto, Vancouver, Montréal, London, Frankfurt, Stuttgart and Los Angeles. Recent projects include: Avatar: The Last Airbender (Netflix) and the next seasons of House of the Dragon, The Boys, Halo, Star Trek: Discovery, Star Trek: Strange New Worlds and many others. About the sale, Slow comments, “It allows us to benefit from being fully aligned with the whole Sony group, both creatively and from a technology development perspective.”
Also last year, Crafty Apes acquired Molecule VFX. The Fuse Group (owner of FuseFX) bought El Ranchito, which has studios in Madrid and Barcelona. Outpost VFX and Framestore opened Mumbai facilities and BOT VFX a Pune branch. After purchasing Scanline VFX at the end of 2021, Netflix acquired Animal Logic in 2022 and signed a multiyear deal with DNEG through 2025 for $350 million. DNEG will open an office in Sydney this year to go with its existing facilities in London, Toronto, Vancouver, Los Angeles, Montréal, Chennai, Mohali, Bangalore and Mumbai.
INDIA
DNEG’s four Indian studios played a role in how India has become a significant source of global VFX production. MPC and The Mill (owned by Technicolor Creative Services), FOLKS VFX (The Fuse Group), Framestore, BOT VFX (based in Atlanta and with three studios in India), Rotomaker India Pvt Ltd, Mackevision, Outpost VFX and Tau Films are other multinationals with facilities in India.
65. VFX: Framestore, Method Studios, New Holland Creative, Quantum Creation FX, Captured Dimensions and 22DOGS. Visualization: OPSIS. (Image courtesy of Columbia Pictures/Sony)
Spider-Man: Across the Spider-Verse. (Image courtesy of Marvel and Columbia Pictures/Sony)
Mission: Impossible – Dead Reckoning – Part One. VFX: ILM, Rodeo FX, BlueBolt and Clear Angle Studios. Visualization: Halon Entertainment. (Image courtesy of Paramount Pictures)
Mission: Impossible – Dead Reckoning – Part One. (Image courtesy of Paramount Pictures)
Ant-Man and the Wasp: Quantumania. VFX: ILM, ILM/StageCraft, Digital Domain, Spin VFX, Method Studios, MPC, Luma Pictures, Barnstorm VFX, Sony Pictures Imageworks, MARZ, Territory Studio, Rising Sun Pictures, Perception, Folks VFX and Clear Angle Studios. Visualization: The Third Floor. (Image courtesy of Marvel Studios)
Ant-Man and the Wasp: Quantumania. (Image courtesy of Marvel Studios)
Beau Is Afraid. VFX: Hybride and Folks VFX. (Image courtesy of A24)
Beau Is Afraid. (Image courtesy of A24)
One of India’s leading local visual effects firms is FutureWorks, which has 325 total employees in facilities in Mumbai, Hyderabad and Chennai. CEO Gaurav Gupta comments, “Of these, our Chennai studio is geared as a global delivery center to service our international clients. Our Mumbai studio, which was our first one, is focused on Indian filmmakers, and also works closely with platforms like Amazon Prime Video and Netflix for their Indian productions. Early next year will see us relocate to a larger studio in Mumbai and expand our Hyderabad operations with a bigger facility in Q2.” He notes that FutureWorks’ recent portfolio “spans global hits including: The Peripheral for Prime Video, Westworld for HBO, Netflix’s Lost in Space and [the Hindi-language movies] Jaadugar, directed by Sameer Saxena, and Darlings, directed by Jasmeet K. Reen.”
FutureWorks currently has “around a 50% split between our domestic and international customers, and our business strategy is to continue along those lines as we grow,” according to Gupta. “Global demand for VFX services has fueled the rapid increase in VFX studios in India. Indian studios are now full of creative sequences and shots, not just RPM or back-office work.”
Gupta notes, “Global demand and supply have increased concurrently, and there is plenty of room for everyone. What we see is a truly global marketplace, [with] more choices for clients in terms of where and who can execute the top-end work. However, [having] more studios also means that the industry needs more talent, and that talent has a wider range of options than ever before.”
Gupta adds, “This is the most excited I’ve been about the industry in India since I founded the company. There is a huge demand for content from OTT networks and filmmakers. Relationships are global, technology is global, vendors are global. The scene here currently is creative, ambitious and evolving at a rapid pace.”
ACROSS THE PLANET
Ghost VFX is opening a studio in Pune in May and has facilities in Los Angeles, Vancouver, Toronto, London, Manchester and Copenhagen, with nearly 600 total global employees (Streamland Media purchased Ghost VFX in 2020). “Having studios across the globe means we’re able to work together across a single technology workflow so we can react to the ebbs and flows of demand,” says Patrick Davenport, President of Ghost VFX. “We’re also in several key locations for tax incentives. Although we offer our employees the option of working from home, hybrid or in-studio, having global studios helps us retain talent who want to work in different countries as well as [work] in-studio.”
Davenport adds, “We have larger projects which we share across the studios, but still focus on being able to support local productions. For example, our Copenhagen studio just worked on Troll, a Norwegian film for Netflix. On a global scale, we’ve worked on several projects including: Star Trek: Strange New Worlds that artists in Copenhagen and Vancouver worked on, and for Fast X we currently have teams in our U.K., Copenhagen, Pune and L.A. studios working on the film.” Another is the new season of The Mandalorian, “one of several projects we’re working on for Lucasfilm.”
Glassworks has studios in London, Amsterdam and Barcelona. “Speaking as a studio with multiple locations across Europe, I can definitely see [having them as an] advantage. The benefits come in different aspects, including a shared pool of resources, access to specialized talent across offices, and opportunities within each market or in tandem across facilities,” says Glassworks COO Chris Kiser. “Scalability is always important in our business, and it’s great to be able to work with artists and producers that you know and trust before needing to go outside of your own studio.”
For Glassworks, “The year started with a couple of big commercial projects, including the Turkish Airlines Super Bowl spot featuring Morgan Freeman and Apple’s Escape the Office film,” Kiser says. “Young adult and fantasy fans will have seen VFX from our team in both Vampire Academy and Fate: The Winx Saga, [and] we have other projects in the works for Netflix, HBO and Amazon Prime.”
VIRTUAL PRODUCTION IMPACT
Virtual production has greatly transformed the VFX business and inspired the construction of hundreds of LED stages, both fixed-location and bespoke/pop-up. Last year ended with the completion of two notable facilities in Culver City. Amazon’s stage, located on Stage 15 of the Culver Studios lot, has an 80-foot diameter with a 26-foot-high volume, a virtual-production takeover of what had been the production scene of many famous movies in the analog era. Nearby, a new LED stage rose at Sony Innovation Studios on the Sony lot, in the same year that the firm purchased Pixomondo and its three LED stages.
Virtual production was valued at $1.46 billion in 2020, projected to reach $4.73 billion by 2028 and expected to grow at a CAGR of 15.9% during the forecast period from 2021 to 2028, according to a market report by Statista.
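Applying the same check here (an editorial aside, not a figure from the Statista report): growing $1.46 billion into $4.73 billion over the eight years from 2020 to 2028 implies

    (4.73 / 1.46)^(1/8) - 1 ≈ 0.158

or roughly 15.8% per year, consistent with the quoted 15.9% once rounding of the endpoint values is taken into account.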
“Things are normalizing a little now, but along the way virtual production became a more widely adopted production solution, and for a few players, including Pixomondo, there is no turning back from this as we have created some very effective tools. That said, we see it as very complementary to our VFX services business, not a replacement – and in fact, we are able to integrate the processes to deliver additional value and speed of delivery,” Slow says.
INCENTIVES MAKE A DIFFERENCE
Tax breaks are still having an impact on the geography of VFX. “Incentives do create additional production spend overall, as they directly impact how far a production budget will go,” Slow says. “However, not all of the benefit stays in VFX ultimately – our clients have to balance production books overall somehow. But, a more generous scheme in one region will influence where our clients want the work to happen, so a small change in the rules can create a very big shift in demand for work in a particular region. This has been very effectively used as a tool to drive investment and jobs into Canada, the U.K., Germany and many other places. It’s a big success story, and I think we will see this continue to evolve.”
Malhotra notes, “Frankly, these incentives make the use of visual effects more competitive for our clients, allowing them to create higher-quality content. This is important all round, as it creates more sustainable employment, as well as great quality of work for our clients, while helping them mitigate the costs of content creation.”
Transformers: Rise of the Beasts. VFX: MPC, ILM and Wētā FX. Visualization: Halon Entertainment and The Third Floor. (Image courtesy of Paramount Pictures)
John Wick: Chapter 4. VFX: Rodeo FX, Crafty Apes, Mavericks VFX, One of Us, The Yard VFX, Tryptyc VFX, Light VFX, Pixomondo, Outlanders VFX, Boxel Studio, Atomic Arts, McCartney Studios, Track VFX, WeFX and Clear Angle Studios. Visualization: NVIZ and
(Image courtesy of Lionsgate)
Indiana Jones and the Dial of Destiny. VFX: ILM, Clear Angle Studios, Important Looking Pirates, Rising Sun Pictures, Crafty Apes, The Yard VFX, Soho VFX, Midas VFX and Capital T. Visualization: Proof Inc. (Image courtesy of Paramount Pictures)
Renfield. VFX: Crafty Apes, ILM, Outpost VFX, Connect VFX, Pixel Magic, Weta Workshop and Spectrum Effects. Visualization: Proof Inc. (Image courtesy of Universal Pictures)
Dungeons & Dragons: Honor Among Thieves. VFX: ILM, MPC and Clear Angle Studios. SFX: Legacy Effects. Visualization: Day For Nite. (Image courtesy of Paramount Pictures)
Shazam! Fury of the Gods. VFX: DNEG, Pixomondo, Clear Angle Studios, Wētā FX, Scanline VFX, Method Studios, Weta Digital, BOT VFX and RISE Visual Effects Studios. Visualization/Effects: OPSIS and The Third Floor.
(Image courtesy of Warner Bros.)
Servant. Series VFX: PowerHouse VFX, Cadence Effects, Ingenuity Studios and Vitality Visual Effects.
(Image courtesy of Apple TV+)
Slumberland. VFX: DNEG, Scanline VFX, Outpost VFX, Rodeo FX, Important Looking Pirates, Ghost VFX, BOT VFX, MARZ and Incessant Rain Studios. Visualization: Halon Entertainment. (Image courtesy of Netflix)
ARTIST CHALLENGES AND OPPORTUNITIES
Davenport comments, “Demand for VFX looks likely to continue, though there is still the challenge of delivering the work within budgets and compressed schedules at a time of rising costs, particularly of labor.”
Malhotra observes that the talent gap is another challenge. “We need more talent in our industry with more experience – not just to creatively deliver projects, but also to manage and produce them,” he comments. “Training is a key focus – the fact that everyone is working from home compromises the culture of learning from your team and those around you, where you gain more experience by asking questions and looking at each other’s work. This has an effect on the time it takes to bring new recruits in our industry up to speed, which poses some interesting challenges for us as an industry — it’s a universal issue.”
Kiser adds, “The biggest challenge to our industry is bringing in the next generation and providing them with training and opportunities to succeed. Many of us were able to get a break somewhere or discover the potential for a career in VFX thanks to technical training or personal connections. We need to take advantage of the momentum and interest in film and TV to reach a wider, more diverse group of young people.”
“The convergence of visual effects and real-time gaming technologies, and the emergence of opportunities in the metaverse, virtual reality, immersive experiences and web 3.0, all significantly contribute to the possibilities for visual effects artists to leverage their skill sets beyond movies and television. It is a very interesting time for our industry and the people that work within it,” Malhotra says.
“We certainly hope the trend for VFX and animation will continue as it has been thrilling and made for some amazing content. The industry and budgets are likely to fluctuate in the same way they have over the years, although the demand has never been higher,” Kiser says. “The key now, as it has always been, is retaining talent and fostering creativity within our teams. We can’t control what happens in the outside world, but we have the ability to build a productive and enjoyable environment where the best creative work happens.”
“[In the] short term,” Slow concludes, “activity levels are calming down a little, but this is a good thing for the industry and the people working in it. Long-term, I don’t think there is any change to the overall trend of continued, healthy, year-on-year growth.”
By TREVOR HOGG
One of the leading artists incorporating AI into his creative process is Refik Anadol, who created Bosphorus, a sculpture inspired by high-frequency radar data collections. (Image courtesy of Refik Anadol)
Much has been made lately of the proliferation of artificial intelligence within the realm of art and filmmaking, whether it be AI entrepreneur Aaron Kemmer using OpenAI’s chatbot ChatGPT to generate a script, create a shot list and direct a film within a weekend, or Jason Allen combining Midjourney with AI Gigapixel to produce “Théâtre D’opéra Spatial,” which won the digital category at the Colorado State Fair. Protests consisting of a red circle and line going through the letters AI and declaring ‘No to AI Generated Images’ have shown up on art platform ArtStation, while U.K. publisher 3dtotal posted a statement on its website declaring, “3dtotal has four fundamental goals. One of them is to support and help the artistic community, so we cannot support AI art tools as we feel they hurt this community.”
“There are some ethical considerations mainly about who owns data,” notes Jacquelyn Ford Morie, Founder and Chief Scientist at All These Worlds LLC. “If you put it out on the web, is it up for grabs for scrubbers to come and grab those images for machine learning? Machine learning doesn’t work unless you have millions of images or examples. But we are at an inflection point with the Internet, where there are millions of things out there and we have never put walls around it. We have created this beast and only now are we getting pushback about, ‘I put it out to share but didn’t expect anyone would just grab it.’”
The iconic photo of Buzz Aldrin on the moon, taken by Neil Armstrong on the 1969 Apollo 11 mission, reimagined in the style of Van Gogh’s Starry Night. This is one of the earliest artworks ever made with NightCafé Creator, and still one of the best. (Image courtesy of NightCafé Studio)
‘In the style of’ text prompts are making artists feel uneasy about their work being replicated through an algorithm, as in the case of Polish digital artist Greg Rutkowski, whose name is one of the most commonly used prompts for open-source AI art generator Stable Diffusion. “I just feel like at this point it’s unstoppable, and the biggest issue with AI is the fact that artists don’t have control of whether or not their artwork gets used to train the AI diffusion model,” remarks Alex Nice, who was a concept illustrator on Black Adam and Obi-Wan Kenobi. “In order for the AI to create its imagery, it has to leverage other artists’ collective ‘energy’ [and] years of training and dedication to a craft. Without those real artists, AI models wouldn’t have anything to produce. I believe AI art will never get the artistic appreciation that real human-made art gets. This is the fundamental difference people need to understand. Artists create things, and hacks looking for a shortcut only know how to ‘generate content.’”
Rather than rely on the Internet, AI artists like Sougwen Chung are training robotic assistants on their own artwork and drawing alongside them, which follows in the footsteps of another collaboration that lasted 40 years and produced the first generation of computer-generated art. “There was some interesting stuff going on there that nobody knows about in the history of AI and art,” observes Morie. “Harold Cohen and the program AARON, which was an automatic program that could draw with a big plotter that learned as it drew and made these huge, beautiful drawings that were complex and figurative, not abstract at all.” AI is also seen as an essential element in developing future tools for artists. “Flame’s new AI-based tools, in conjunction with other Flame tools, help artists achieve faster results within compositing and color-grading workflows,” remarks Steve McNeill, Director of Engineering at Autodesk. “Looking forward, we see the potential for a wider application of AI-driven tools to enable better quality and more workflow efficiencies.”
Trevor Hogg experimenting with DreamStudio by Stability AI to create a futuristic environment by using prompts such as steampunk city block, nighttime, dance club, wet snow falling, in the style of Syd Mead and Ralph McQuarrie. (Image courtesy of Trevor Hogg)
Machine learning is seen as an essential element by Autodesk for developing future tools for artists and has been incorporated into Flame. (Image courtesy of Autodesk)
The following prompts were entered into AI Image Generator API by DeepAI: An angel with black wings in a white dress holding her arms out, a digital rendering, by Todd Lockwood, figurative art, annabeth chase, at an angle, canva, global radiant light, dark angel, 3D (Image courtesy of DeepAI)
The following prompts were entered into AI Image Generator API by DeepAI: A girl in a white dress and a blue helmet, a detailed painting, by wlop, fantasy art, paint tool sai!! blue, [mystic, nier, detailed face of an asian girl, blueish moonlight, chloe price, female with long black hair, artificial intelligence princess, anime, soft lighting, old internet art. (Image courtesy of DeepAI)
Does the same reasoning apply to the creation of text-to-image programs such as AI Image Generator API by DeepAI? “When we saw the research coming out of the deep learning community around 2016-2017, we knew practically every part of our lives would be changed sooner or later,” notes Kevin Baragona, Founder of DeepAI. “This was because we saw simultaneous AI progress in very disparate fields such as text processing, gameplaying, computer vision and AI art. The same basic technology [neural networks] was solving a whole bunch of terribly difficult problems all at once. We knew it was a true revolution! At the time, the best AI was locked away in research labs and the general public didn’t have access to it. We wanted to develop the technology to bring magic into our daily lives, and to do it ethically. We brought generative AI to the public in 2017. We knew that AI progress would be rapid, but we were shocked at how rapid it turned out to be, especially starting around 2020.” Baragona sees AI as having positive rather than negative impact on the artistic community. “We’ve seen that text-to-image generators are practically production-ready today for concept art. Every day, I’m excited by the quality of art that takes a couple seconds to produce. Visual effects will continue to get more creative, more detailed and much cheaper to produce. Basically, this means we’ll have vastly more visual effects and art, and the true artists will be able to create superhuman art with the aid of these computer tools. It’s a revolution on par with the rise of CGI in the 1990s.”
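For context on what entering prompts into a hosted generator like DeepAI’s involves in practice, here is a minimal, hypothetical Python sketch. The endpoint path, the 'text' form field, the 'api-key' header and the 'output_url' response key are assumptions modeled on typical hosted image-generation APIs, not taken from DeepAI’s documentation, and the key is a placeholder:

    import requests

    API_KEY = "YOUR_DEEPAI_API_KEY"  # placeholder credential
    prompt = ("An angel with black wings in a white dress holding her arms out, "
              "a digital rendering, figurative art, global radiant light, 3D")

    # Assumed endpoint and field names; verify against DeepAI's docs.
    resp = requests.post(
        "https://api.deepai.org/api/text2img",
        data={"text": prompt},
        headers={"api-key": API_KEY},
    )
    resp.raise_for_status()
    print(resp.json().get("output_url"))  # assumed response key

If the call succeeds, the service returns a URL to the generated image, the kind of seconds-long turnaround Baragona describes.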
Undoubtedly, there are legal issues as to who owns the output and whether the original sources should be given credit. “As the core building blocks of new AI generative models continue to mature, a new set of questions will arise, as has happened with many other transformative technologies in the past,” notes Cristóbal Valenzuela, Co-Founder and CEO at Runway. “Content created in Runway is owned by users and their teams. Models can also be retrained and customized for specific use cases. We are also building together a community of artists and creators that inform how we make product decisions to better serve those community needs and questions.” The AI revolution is not to be feared. “There are always questions that emerge with the rise of a new technology that challenges the status quo,” Valenzuela observes. “The benefits of unlocking creativity by using natural language as the main engine are vast. We will see so many new people able to express themselves through various multimodal systems, and we’ll see previously complicated arenas like 3D, video, audio and image become more accessible mediums. Tools are only as good as the artist who wields them, and having more artists is ultimately an incredible benefit to the industry.”
Should AI art be the final public result? “I think that the keyword prompts used should also be displayed along with it, and any prompt that directly references a notable piece of existing art or artist should require a licensing deal with that referenced artist,” remarks Joe Sill, Founder and Director at Impossible Objects. “For instance, if AI art is displayed that has utilized the prompt of ‘a mountaintop with a dozen tiny red houses in the style of Gregory Crewdson,’ you’re likely going to need to have that artist’s involvement and sign-off in order for that art to be displayed as a final result. If AI art is simply used in a pipeline as a starting point to inspire an artist with references, I think it’d be great for the programs themselves to start being listed in credits. Apps like Midjourney or DALL·E being credited as the key programs that help artists develop early inspiration only helps with transparency and also accessibility. If an artist releases a piece of work that was influenced by AI art, they can credit the programs used like, ‘Software used: DALL·E, Adobe Photoshop, Sketchpad.’”
AI has given rise to a new technological skill in the form of “the person who can write a compelling prompt for a program like DALL·E 2 to extract a compelling image,” states David Bloom, Founder and Consultant at Words & Deeds Media. “To some extent it’s a different version of what artists have always faced. If you are a musician, you had to learn how to play an instrument to be able to reproduce the things that you were hearing in your head. I remember George Lucas saying in the 1990s when they put out a redone version of Star Wars, ‘I’m never going to show the original version again because the technology now allows me to create a film that matches what I saw in my head.’ It’s just like that. The technology is going to allow new kinds of people to see something that they have in their head and get it out without necessarily having the same or any technical skills, if they can articulate it.”
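What does that prompt-writing skill look like in code? As a hedged sketch only, using the openai Python SDK of the DALL·E 2 era (the pre-1.0 interface), a programmatic prompt might look like the following; the prompt text and key are illustrative, not drawn from the article.

    # A hedged sketch of prompting DALL·E 2 through the original openai
    # Python SDK (pre-1.0 interface); prompt and API key are illustrative.
    import openai

    openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

    response = openai.Image.create(
        prompt="a lighthouse on a basalt cliff at dusk, oil on canvas",
        n=1,                # number of images to return
        size="1024x1024",   # DALL·E 2 supports 256x256, 512x512 and 1024x1024
    )
    print(response["data"][0]["url"])  # temporary URL of the generated image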
Part of the fear of AI comes from misunderstanding. “It’s not as powerful as people think it is,” notes Jim Tierney, Co-Founder and Chief Executive Anarchist at Digital Anarchy. “It’s certainly useful, but in the context of how we use AI, which is for speech-to-text, if you have well-recorded audio and someone who speaks well, it’s fantastic, but it falls off the cliff quickly as the audio degrades. There is a lot of fear around AI. We were supposed to have replicants by now! Blade Runner was set in 2019. Come on. Where are my flying cars?” As for the matter of licensing rights, Tierney references what creatively drives artists in the first place. “If you say, ‘Brisbane, California, at night like Vincent van Gogh would have done,’ that’s going to create something probably Starry Night-ish. But how is that different from me painting it using that visual reference and an art book? It’s complicated. I spent a bunch of time messing around with Midjourney. If you go in there looking for something specific and say, ‘I want X. Create this for me,’ you will have to go through many iterations. People can make some cool stuff with it, but it seems rather random.”
What is affecting the quality of AI art is not the technology. “People have this idea that you type in three simple words and get some sort of masterpiece,” observes Cassandra Hood, Social Media Manager at NightCafé Studio. “That’s not the case. A lot of work goes into it. If you are planning on receiving a finished image that you are picturing in your mind, you’re going to have to put the work into finding the right words. There is a lot of experimenting that goes with it. It’s a lot harder than it seems. That applies to the many people who think it’s not that advanced or not too good right now. It can be good if you put the work in and actually practice your prompts. We personally don’t create the algorithms, but we give you a platform and an easy-to-use interface for beginners, who move on to the bigger, more complicated code notebooks once they graduate from NightCafé. We are focusing on the community aspect of things and making sure to provide that safe environment for AI artists to hang out and talk about their art. We have done that on the site by adding comments and contests.”
This particular Refik Anadol image was commissioned by the city of Fort Worth, Texas. (Image courtesy of Refik Anadol)
An interactive art exhibit created by Refik Anadol that utilizes AI. (Image courtesy of Refik Anadol)
Refik Anadol created projections to accompany the Los Angeles Philharmonic Orchestra performing Schumann’s Das Paradies und die Peri. (Image courtesy of Refik Anadol)
“If you put it out on the web, is it up for grabs for scrapers to come and grab those images for machine learning? Machine learning doesn’t work unless you have millions of images or examples. But we are now at an inflection point with the Internet where there are millions of things out there and we have never put walls around them. We have created this beast and only now are we getting pushback about, ‘I put it out to share but didn’t expect anyone would just grab it.’”
—Jacquelyn Ford Morie, Founder and Chief Scientist, All These Worlds LLC
Example of completely AI-generated character concept art by Joe Sill of Impossible Objects, created through Midjourney’s new V4 update, which “makes people look like people.” (Images courtesy of Impossible Objects)
“My opinion on this AI revolution has changed,” acknowledges Philippe Gaulier, Art Director at Framestore. “It was probably a year or a year and a half ago that AI really exploded in the concept art world. At the beginning I thought, ‘Oh, my god, our profession is dead.’ But then I realized it’s not, because I saw the limits of AI. There is one factor that we shouldn’t forget, which is the human contact. Clients will never stop wanting to deal with human beings to produce some work for whatever film or project that they have. However, there will be fewer of us to produce the same amount of work, as happens when tools in any industry evolve to become more efficient. The tools haven’t replaced people, because people are still needed to supervise and run them, and machines don’t communicate like we do. But there has been a reduction in the number of people for any given task. I have been in this industry long enough to understand that things evolve all of the time. I have already started playing around with AI for references. I’m not asking myself whether it’s good or bad. I’ve accepted the idea that it is going to be part of the creative process, because human beings in general like shortcuts.”
In the middle of the AI revolution is Stable Diffusion, which was created by researchers at Ludwig-Maximilians University of Munich and Runway ML, and supported by a compute grant from Stability AI. The release of the free, open-source neural network for generating photorealistic and artistic images from text prompts was such a resounding success that Stability AI was able to raise $101 million in funding for its open-source AI research, which involves other types of diffusion models for music, video and medical research. “A research team led by Robin Rombach and Patrick Esser was looking at ways to communicate with computers and did different experiments looking at text-to-image,” remarks Bill Cusick, Creative Director for DreamStudio, run by Stability AI. “Their goal was to get to a place where, instead of being limited by your physical abilities, you would be able to translate your ideas into images. It has evolved in a way where now we can see what is possible, and there is a bifurcation of what the approaches are. Stable Diffusion and DreamStudio are tools. DreamStudio gives you settings and parameters to control image generation. I had a meeting with an indie studio creating a workflow using Stable Diffusion, and it’s as complex as a Hollywood workflow would be, and the results are incredible. There are also people authoring Blender and Unreal Engine plug-ins, and I’ve reached out to as many community devs as possible to help accelerate their development, and I hope more folks get in touch.”
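Because Stable Diffusion is open source, the “settings and parameters” Cusick describes are directly scriptable. Below is a minimal sketch using the community Hugging Face diffusers library and a CUDA-capable GPU; it illustrates one common workflow, not DreamStudio’s own interface, and the model ID, prompt and parameter values are illustrative (the prompt echoes the DreamStudio caption above).

    # A minimal sketch of local Stable Diffusion generation with the
    # Hugging Face diffusers library; model ID and settings are illustrative.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # public Runway/Stability checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")  # assumes an NVIDIA GPU is available

    generator = torch.Generator("cuda").manual_seed(42)  # fixed seed for repeatable runs

    image = pipe(
        "steampunk city block, nighttime, wet snow falling",
        num_inference_steps=50,  # more steps refine the image but run slower
        guidance_scale=7.5,      # how strongly the prompt steers the image
        generator=generator,
    ).images[0]
    image.save("steampunk_city.png")

Fixing the seed, as above, is what makes prompt iteration systematic: change one word or one parameter at a time and compare the results.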
Stability AI Creative Director Bill Cusick experimenting with the possibilities of AI art. (Images courtesy of Stability AI)
Cusick adds, “AI is never going to whole cloth recreate someone’s picture. By the time this article comes out, there is going to be text-to-video, and the question of did my art get stuck into a dataset of billions of images is meaningless when the output is animation that is unique and moving in a totally different way than a single image. I agree with all of the problems with single images and the value of labor. But it’s momentary. We are moving towards a procedurally generated future where there is a whole other method of filmmaking coming.”
“I want concept artists to treat [AI] as a tool because it’s going to be more powerful in their hands than anybody else’s.”
—Bill Cusick, Creative Director, Stability AI