by James Mathers
Cinematographer and Founder of the Digital Cinema Society
(Excerpted from the August 2019 Digital Cinema Society eNewsletter)
I try to never miss covering the annual SIGGRAPH convention, where I get a sneak peek at where computer graphics technologies are heading. Such innovations, many seen for the first time at this important convention, will shape not only the Entertainment Industry, but our lives in general, with potentially positive and negative implications.
The area of most interest to me is a broad range of computer-aided filmmaking methods known as Virtual Production. These tools grew out of the gaming industry, which requires accelerated graphics processing not only to create games, but to play them in real time, and they enhance creativity and save time compared to traditional linear pipelines. The mass-market demand created by gamers justified investment in research and resulted in much innovation, which we can all take advantage of. However, in order to play these games in real time in a virtual world, graphics processing speed and quality still needed to improve.
Answering the call, companies like NVIDIA have had to come up with schemes to speed up their graphics processing units, or GPUs. Now, with the addition of Artificial Intelligence (AI) to the mix, new worlds of realtime rendering at near photo-realistic quality are opening up. These graphics cards, once the domain of only the highest end desktop workstations, can now be added to relatively inexpensive laptops with the power to not only play in, but create virtual worlds.
One recent AI-dependent breakthrough that got a lot of attention at SIGGRAPH is ray tracing. Although I’m not a gamer, and graphics technology can sometimes be challenging to wrap your head around, ray tracing was explained to me in rather simple terms. It’s basically an attempt to emulate the way light works in the real world. Instead of creating pre-designed, or “baked-in,” lighting for CGI scenes, the simulated path of the light is traced (or rather, millions of simulated light paths are).
As the light bounces off objects, it interacts with their surface properties. For example, if it reflects off a glossy green surface, its hue should shift toward green. The resulting image improvement, while it may seem subtle, is what separates fake-looking CGI from imagery that is truly believable.
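To make the idea a little more concrete, here is a toy sketch in Python of that bounce-and-tint process. The scene, names, and numbers are my own invention purely for illustration; a real renderer traces millions of rays per frame and models far more complex materials.

    # Toy ray-tracing sketch: follow one simulated light path as it bounces,
    # letting each surface tint the color the ray carries.

    GREEN_FLOOR = (0.2, 0.9, 0.3)   # a glossy green surface at y = 0
    SKY_LIGHT   = (1.0, 1.0, 1.0)   # white light arriving from the sky

    def reflect(d, n):
        """Mirror direction d about surface normal n."""
        dot = sum(di * ni for di, ni in zip(d, n))
        return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

    def trace(origin, direction, max_bounces=3):
        """Trace a ray; each bounce off the floor multiplies in its color."""
        carried = (1.0, 1.0, 1.0)                      # start as neutral white
        for _ in range(max_bounces):
            if direction[1] >= 0:                      # heading up: reaches the sky
                return tuple(c * s for c, s in zip(carried, SKY_LIGHT))
            # Ray heads down: intersect the floor plane y = 0.
            t = -origin[1] / direction[1]
            hit = tuple(o + t * d for o, d in zip(origin, direction))
            carried = tuple(c * f for c, f in zip(carried, GREEN_FLOOR))
            origin = hit
            direction = reflect(direction, (0.0, 1.0, 0.0))
        return (0.0, 0.0, 0.0)                         # light absorbed or lost

    # One camera ray angled toward the floor: the returned color comes back
    # tinted green because the bounce picked up the floor's hue.
    print(trace(origin=(0.0, 2.0, 0.0), direction=(0.3, -1.0, 0.5)))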
Also growing out of the gaming industry is the technology known as the Unreal Engine, first introduced in the late 1990s by Epic Games for the first-person shooter game Unreal. It has since been used successfully across a variety of other action genres, including fighting games, MMORPGs (Massively Multiplayer Online Role-Playing Games), and platformers, a video game genre in which the player-controlled character must jump and climb between suspended platforms while avoiding obstacles.
DCS member and CEO of Stargate Studios, Sam Nicholson, ASC, who has been Visual Effects Supervisor on productions from Star Trek: The Motion Picture to The Walking Dead, demoed a new VFX system at SIGGRAPH known as ThruView. It applies Unreal Engine technology, using camera/lens metadata to drive photorealistic CGI backgrounds. The output is displayed on monitors behind the subject and photographed in realtime as an alternative to a green or blue screen. The canvas need be no bigger than the display positioned behind the talent, but the background’s perspective can be computationally manipulated to create huge worlds that move and interact with the camera, creating virtual movement. See my interview with Sam at this year’s Cine Gear Expo for a better understanding of this amazing use of the technology: https://vimeo.com/341400643
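As a purely hypothetical illustration of the general idea (this is not Stargate’s or Epic’s actual code), the sketch below shows how tracked camera position and lens focal length could drive a virtual pinhole camera, so that a point in the rendered background shifts with real camera movement, giving parallax that a flat painted backdrop cannot.

    # Hypothetical sketch: a virtual pinhole camera driven by camera/lens
    # metadata. All names and numbers are invented for illustration.

    def project(point, cam_pos, focal_mm, sensor_w_mm=36.0, res=(1920, 1080)):
        """Project a 3D background point to pixel coordinates for a camera
        looking down +Z, given its tracked position and lens focal length."""
        x, y, z = (p - c for p, c in zip(point, cam_pos))
        if z <= 0:
            return None                      # point is behind the camera
        px_per_mm = res[0] / sensor_w_mm     # assume square pixels
        u = (focal_mm * x / z) * px_per_mm + res[0] / 2
        v = (focal_mm * y / z) * px_per_mm + res[1] / 2
        return (round(u, 1), round(v, 1))

    # A distant "mountain" in the virtual set, 50 m beyond the stage.
    mountain = (10.0, 5.0, 50.0)

    # Frame 1: camera at the origin with a 35mm lens.
    print(project(mountain, cam_pos=(0.0, 0.0, 0.0), focal_mm=35.0))

    # Frame 2: the operator dollies 1 m to the right; the same background
    # point lands on a different pixel, so the displayed world moves with
    # the camera instead of sitting there like a static backdrop.
    print(project(mountain, cam_pos=(1.0, 0.0, 0.0), focal_mm=35.0))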
It’s great to see all these demos with your own eyes, but in this case, seeing isn’t always believing. This technology is now going beyond reality, which holds great promise for game developers and filmmakers looking to create believable visual worlds, but in the wrong hands it could also be dangerous. What I’m referring to here are “deepfakes,” hyper-realistic manipulations of digital imagery so effective that it’s nearly impossible to tell real from fake. This is more than just transplanting one person’s head onto another’s shoulders with Photoshop; we’re talking full animation of facial features cued from mo-cap sensors in real time.
It’s easy to see how this kind of technology can be useful in motion picture production; one example is when characters need to be “brought back to life” after the actor who played them has passed away. Remember Peter Cushing’s appearance in Rogue One or Paul Walker’s final scenes in the Fast and Furious franchise, and know that the technology has only gotten exponentially more sophisticated since then. It’s fun to see Steve Buscemi as Jennifer Lawrence, or Obama’s words and face being manipulated by writer and comedian Jordan Peele.
However, in the political world, the potential to use the technology to disseminate false information by literally putting words in someone’s mouth is obvious. In this deeply polarized political environment, it is easy to imagine someone trying to use this technology for nefarious purposes, not to mention bogus sex tapes used to extort victims with the threat of public shaming. GPU-accelerated applications are becoming so sophisticated and accessible that it is honestly getting to be a little scary.
Whether inspiring or scary, there was a lot to see at SIGGRAPH 2019. This 46th annual gathering concluded its week of demos, conferences, and high-level meetings in early August with nearly 19,000 computer graphics professionals in attendance from all over the world. Luckily for me, it was conveniently held in my hometown, at the Los Angeles Convention Center. Sadly, I was not able to make it up to Canada for last year’s convention, but I’ll continue to attend whenever I can to keep up with the quickly evolving technologies of CG, Animation, VR, and Games as they become increasingly significant to the filmmaking process.