The films of James Cameron are known for many attributes that pop up time and again, but one of the most undeniable commonalities is the simple fact that a James Cameron film evolves the use of technology in filmmaking. From making one of cinema’s first fully CGI characters in Terminator 2: Judgment Day to innovating the “virtual production stage” for Avatar, James Cameron has always found ways to push the technological boundaries of filmmaking. Unsurprisingly, Avatar: The Way of Water is no different.
Inverse visited Cameron’s Lightstorm Entertainment to get a window into the upcoming digital release of Avatar: The Way of Water, including a sneak peek at some of the release’s three hours of special features alongside tours of its cutting-edge production facilities.
Here are the coolest things we learned about the stunning VFX on Avatar: The Way of Water.
1. Pandora Was Built in the Real World Before It Was Digitized
One key difference between The Way of Water and its predecessor is a far more thorough integration of live-action performance with the VFX-created world. As producer Jon Landau explained, “If you compare the first Avatar to The Way of Water, The Way of Water has many more humans interacting with CG elements, and we had to bring it up to the standard.”
The team’s solution involved the elaborate real-world construction of every object that a live-action human interacts with throughout the course of the film. “If somebody has a prop that they're handing to somebody, it's a Na’vi prop, we need to build that prop,” Landau explained, “because once it goes into the human's hands, it needs to be real.” That was only the beginning of the specific challenges the team faced with Avatar: The Way of Water.
2. The Performance-Capture Close-Up Was Perfected
Another area where the sequel made advancements over the original is performance capture. At the event, Landau explained that “movies are about close-ups, if you don't feel empathy in a close-up, it doesn't matter what you put up in the world, what action you have.” The team worked to achieve this “by coupling the capture [of] the face and the body at the same time,” with the face captured in greater nuance than ever before.
As detailed in the “Acting In The Volume” featurette (which we viewed at Lightstorm), the sequel uses much more advanced technology for performance capture and digitization than the 2009 original did. As Landau detailed, the original used a single standard camera to record performers’ faces, while the sequel uses a pair of HD cameras to capture “a much higher fidelity of performance.”
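To make that coupling concrete, here is a minimal, purely hypothetical sketch of the general idea: pairing time-stamped face frames with body frames so the two streams read as one continuous performance. The frame structures and matching tolerance are illustrative assumptions, not Lightstorm’s actual pipeline.

```python
# Hypothetical sketch of coupling face and body capture streams.
# Data structures and the matching tolerance are illustrative only.
from dataclasses import dataclass

@dataclass
class FaceFrame:
    timestamp: float           # seconds since capture start
    blendshape_weights: dict   # e.g. {"jaw_open": 0.4}

@dataclass
class BodyFrame:
    timestamp: float
    joint_rotations: dict      # e.g. {"spine": (0.0, 0.1, 0.0)}

def couple_streams(face_frames, body_frames, tolerance=1 / 48):
    """Pair each body frame with the nearest face frame in time
    (assumes both streams are non-empty)."""
    coupled = []
    for body in body_frames:
        nearest = min(face_frames, key=lambda f: abs(f.timestamp - body.timestamp))
        if abs(nearest.timestamp - body.timestamp) <= tolerance:
            coupled.append((body, nearest))
    return coupled
```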
3. The Creatures Were a Combination of Puppets and Digital Tech
A personal favorite element for this writer was seeing how the team brought the various creatures of Pandora to life through a combination of puppetry and new digital technology. The team built variably sized puppets, designed from the ground up with internal joint hierarchies so they could either be animated accurately or interacted with realistically by performers. This became the basis for performer-creature interaction, such as the Ilu-riding you see in the final movie.
We watched behind-the-scenes footage of performers riding puppets through the massive tank, generating realistic movement patterns for the animators at Lightstorm to ingest and digitally build out from before the shots were finished at Wētā FX. The extensive process was a fascinating marriage of the kind of massive puppetry that brought the Xenomorph Queen to life in Aliens and the digital-capture technology we expect from the Avatar franchise.
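For the curious, here is a minimal sketch of what an internal joint hierarchy means in principle: each joint knows its parent, so posing one joint (on the puppet or the rig) cascades down the chain. This is a generic 2D forward-kinematics toy, not the production rig.

```python
# Toy 2D joint hierarchy with forward kinematics; illustrative only.
import math

class Joint:
    def __init__(self, name, length, parent=None):
        self.name = name
        self.length = length   # bone length from this joint to its child
        self.parent = parent
        self.angle = 0.0       # local rotation in radians

    def world(self):
        """Return (position, accumulated angle) by walking up the chain."""
        if self.parent is None:
            return (0.0, 0.0), self.angle
        (px, py), pa = self.parent.world()
        # This joint sits at the end of the parent's bone.
        x = px + self.parent.length * math.cos(pa)
        y = py + self.parent.length * math.sin(pa)
        return (x, y), pa + self.angle

pelvis = Joint("pelvis", 1.0)
spine = Joint("spine", 0.8, parent=pelvis)
neck = Joint("neck", 0.3, parent=spine)
spine.angle = math.radians(15)   # bend the spine; the neck follows
print(neck.world()[0])           # neck position after the bend
```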
4. Ultraviolet Light Was Used to Shoot in the Volume Underwater
The performance-capture technology used on the first Avatar doesn’t work in water. As Cameron put it, “infrared doesn't go through water, we had to create a completely new system, so we decided to use ultraviolet because ultraviolet will penetrate through water quite far.” “Nobody had ever done performance-capture in water before,” added Landau, and the new approach brought a fresh set of challenges for the team, including an unlikely foe: air bubbles.
As Cameron explained, each bubble is a “little wiggling mirror” reflecting the performers’ markers, so the entire cast learned to free dive, breathing above the surface and performing below. Markers also reflected off the water’s surface. To solve that, Cameron and his crew took inspiration from the thousands of black plastic beads they had used in The Abyss, this time deploying hollow, translucent balls to separate the water from the air, letting light in while keeping the Volume technology focused on the performances below.
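A back-of-the-envelope Beer-Lambert calculation shows why that wavelength swap matters. The absorption coefficients below are rough, order-of-magnitude figures for pure water, not production numbers, but the gap between near-infrared and near-ultraviolet is dramatic.

```python
# Beer-Lambert attenuation sketch: fraction of light surviving a path
# through water. Absorption coefficients are rough illustrative values.
import math

ABSORPTION_PER_METER = {
    "near-infrared (~850 nm)": 4.0,      # water absorbs IR strongly
    "near-ultraviolet (~380 nm)": 0.02,  # near-UV penetrates far more easily
}

def transmission(alpha, depth_m):
    """Surviving fraction of light after depth_m metres of water."""
    return math.exp(-alpha * depth_m)

for band, alpha in ABSORPTION_PER_METER.items():
    print(f"{band}: {transmission(alpha, 3.0):.4%} survives 3 m of water")
# near-infrared: ~0.0006% survives; near-ultraviolet: ~94% survives
```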
5. Kabuki Inspired the Real-Time Performance Capture
The event ended with an in-person demonstration of the software used to translate the underwater and above-water footage in effectively real time. The two kinds of Volume capture are then integrated, Cameron said, with computers taking data “from one Volume, data from the other Volume, and in real time, and a tiny fraction of a second, it's integrating all that information together.”
The footage and data, along with the performers’ movements and subtle reactions, are ingested almost instantaneously into Lightstorm’s powerful computers and 3D-modeled in real time. The 3D environment lets the animators change camera angles, add and manipulate environments, and even alter cinematographic elements like exposure and lens choice. It creates what Landau called a “template” of the scene that the team turns over to Weta for the final, full VFX work.
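In spirit, that “template” works something like the hypothetical sketch below: the latest frames from each Volume are folded into one live scene, viewed through a virtual camera whose lens and exposure stay adjustable long after capture. All names here are invented stand-ins, not Lightstorm’s software.

```python
# Hypothetical sketch of merging two Volume streams into a live scene
# "template" with an adjustable virtual camera; names are invented.
from dataclasses import dataclass, field

@dataclass
class VirtualCamera:
    focal_length_mm: float = 35.0   # swappable "lens"
    exposure_stops: float = 0.0     # tweakable long after capture

@dataclass
class SceneTemplate:
    actors: dict = field(default_factory=dict)   # performer -> latest pose
    camera: VirtualCamera = field(default_factory=VirtualCamera)

    def ingest(self, volume_name, frame):
        """Fold a frame from either Volume (wet or dry) into the one scene."""
        for performer, pose in frame.items():
            self.actors[performer] = (volume_name, pose)

template = SceneTemplate()
template.ingest("underwater_volume", {"Kiri": "pose_t0"})
template.ingest("surface_volume", {"Jake": "pose_t0"})
template.camera.focal_length_mm = 50.0  # change the lens before the handoff
```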
The process also involves what they call a “Kabuki,” which pays “homage to the Japanese art form, where we project onto it the lips and mouth of the actors” so that Cameron and his editors can work with and deliver the final template to Weta.
The end result netted Avatar: The Way of Water an Academy Award for its VFX innovations, and the rest is filmmaking history.