Howdy. Andrew here, CTO of Glassbox. Last week, Norman (Glassbox CEO) and I had the pleasure of attending one of The Foundry’s latest Global Roadshow events in Los Angeles, hosted by OpenDrives. We had a great time talking tech with some very brainy folks and wanted to share some of the things we learned, and how they could shape the VFX industry.

The event gave industry technologists a chance to discuss some of the latest trends in media & entertainment production technology. While many of the conversations focused on the latest offerings inside Nuke, they also served as a platform for discussing the near and far future of production technologies and practices.

Of VFX Standards

Support for user workflows, with a focus on open development initiatives, was one of the pillars of the Roadshow presentation: volumetrics (OpenVDB), geometry (Alembic), and scene data (USD). Of the standards being adopted by The Foundry, I’m particularly interested in USD, which aims to be the one data format that ties the other scene description standards together. Conceptually, a USD scene would contain data from Alembic and OpenVDB, in addition to its own storage logic for non-geometric and non-volumetric data. Hopefully I’ll see the full-scale adoption of this tech in my professional career, but with it comes a shift in how VFX software is designed, because the walled gardens between applications get a little lower with every adoption of an open standard. This democratization of data is very good for the end user, because it pushes developers to focus more on how their software interacts with data, instead of maintaining archaic file formats that don’t play nice. We may also see fewer file-format-specific bugs as an open data standard takes hold. I know we’ve all dealt with at least one file format that aged us by at least five years.
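To make that a little more concrete, here is a rough sketch (my own illustration, not something demoed at the Roadshow) of how a single USD stage can pull Alembic geometry and an OpenVDB volume into one scene description using USD’s Python bindings. The file paths and prim names are hypothetical, and referencing an .abc file directly assumes the usdAbc file-format plugin is available in your USD build.

```python
from pxr import Usd, UsdGeom, UsdVol

# Create a new USD stage (assumes the pxr Python bindings are installed).
stage = Usd.Stage.CreateNew("shot_010.usda")

# Author a root transform for the shot.
root = UsdGeom.Xform.Define(stage, "/shot_010")

# Reference an Alembic cache directly; USD's Alembic file-format plugin
# (usdAbc) lets .abc files participate in composition like native layers.
geo = stage.DefinePrim("/shot_010/creature_geo")
geo.GetReferences().AddReference("caches/creature.abc")

# Bring in an OpenVDB volume via UsdVol; the .vdb stays on disk and is
# resolved at render time.
volume = UsdVol.Volume.Define(stage, "/shot_010/smoke")
vdb = UsdVol.OpenVDBAsset.Define(stage, "/shot_010/smoke/density")
vdb.CreateFilePathAttr().Set("caches/smoke.vdb")
vdb.CreateFieldNameAttr().Set("density")
volume.CreateFieldRelationship("density", vdb.GetPath())

stage.GetRootLayer().Save()
```

One file, three standards: the geometry and the volume keep living in their native formats, while USD handles how they compose into a shot.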

AI Meets Roto

Another highlight of the evening incorporated AI, and did so in a very clever manner. Using the user’s choice of trained image classification models, Nuke is able to create a solid first-pass roto for feature objects. Because of Nuke’s scriptability and plugin support, this hopefully means a pipeline dev can run Nuke through Deadline to create roto masks, launched by a Shotgun event plugin on ingest, load those masks into the compositor’s final Nuke script, and export to AE, Flame, Resolve, and final editing. Sure, an artist will still have to clean it up, and your pipeline tech may have to script around certain image tags (such as: roto cats, but not dogs), but the cleanup time looks minimal based on the results shown at the Global Roadshow event. Plus, if you aren’t getting the results you need with the standard classification sets (e.g. Google’s Open Images dataset), your R&D group can create its own classification set and use that. With your own training set, you might even be able to create mattes and masks for your actors, individually. Truly a great step forward for compositors and pipeline developers.
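For the pipeline folks, here is a rough sketch of how that automation could be wired up once the feature ships. Everything here is illustrative: MLRotoMask is a hypothetical node name standing in for whatever the AI roto tool ends up being called, and the knobs, frame ranges, and paths are placeholders. The script would run headless (e.g. nuke -t auto_roto.py) inside a Deadline job kicked off by a Shotgun event plugin on ingest.

```python
# auto_roto.py -- illustrative only; "MLRotoMask" and its knobs are
# hypothetical stand-ins for the AI roto tooling shown at the Roadshow.
import sys
import nuke

def build_auto_roto(plate_path, matte_path, first=1001, last=1100, label="person"):
    # Read the ingested plate.
    read = nuke.nodes.Read(file=plate_path, first=first, last=last)

    # Hypothetical AI roto node: generates an alpha for anything the
    # chosen classification model tags with `label`.
    roto = nuke.nodes.MLRotoMask(label_filter=label)
    roto.setInput(0, read)

    # Write the matte out so it can be loaded into the comp script or
    # handed off to AE / Flame / Resolve.
    write = nuke.nodes.Write(file=matte_path, channels="alpha")
    write.setInput(0, roto)
    nuke.execute(write, first, last)

if __name__ == "__main__":
    # e.g. nuke -t auto_roto.py plate.%04d.exr matte.%04d.exr
    build_auto_roto(sys.argv[1], sys.argv[2])
```

The compositor opens their script in the morning and the first-pass mattes are already waiting for cleanup.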

Real-Time

With all the conversation about post-production tools, the discussion also turned to real-time pipelines and how they will continue to affect production. “Real-time” can be one of those nebulous buzzwords. Everyone wants it, but what counts as real-time? Does it mean immediate rendering of a 3D scene at a minimum of 30 fps, with 60 fps as the middle standard and 120 fps as the gold standard? Or is it something more abstract: the concept of a real-time pipeline? Imagine the rendering of your data, scenes, messages, event handlers, geometry instances, post renders, and matte generations all happening in real-time, where your pipeline is immediately responsive from beginning to end, final shots are delivered to the client on set at the end of shooting, and handed off to editorial at the same time as the camera raw files. This is what the whole industry is working toward.

Right now, we’re seeing major leaps in rendering speed for photorealistic work, thanks in large part to NVIDIA’s RTX and its real-time, AI-based denoising. But there’s so much more that has to happen. Currently, two artists can’t work on the same scene simultaneously from two different applications. Immediate transmission of data would let artists achieve the goals of the director, DP, and producers in real time. A perfect example is when a modeler or texture artist must update a scene for use in a virtual production (or simulcam) shot. On set, you can’t ask the director to wait a minute while the artists work. I’ve seen what happens when the virtual operator(s) slow down production: they don’t last long, or the tech isn’t picked up for the following season. On films, it’s more forgiving. For episodics, it won’t work. This is what our company is directly addressing, at this moment.

Live updates. From anywhere.

This is a large step forward in integrating live action and virtual production, and we are excited to enable it.
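To illustrate the idea (and only the idea: this toy sketch is my own illustration, not Glassbox’s actual implementation), picture scene edits being published as small deltas to every subscribed application instead of being re-exported as files. The bus, prim paths, and attribute names below are all hypothetical.

```python
# Toy sketch of a "live" scene pipeline: edits are published as small
# deltas rather than re-exported files, so any subscribed app (DCC,
# renderer, simulcam operator) sees the change immediately.
import json
import time
from typing import Callable, List

class SceneBus:
    """In-memory stand-in for a network message bus."""
    def __init__(self):
        self.subscribers: List[Callable[[dict], None]] = []

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        self.subscribers.append(callback)

    def publish(self, delta: dict) -> None:
        for callback in self.subscribers:
            callback(delta)

def make_delta(prim_path: str, attr: str, value) -> dict:
    # A delta carries only what changed, plus a timestamp for ordering.
    return {"prim": prim_path, "attr": attr, "value": value, "t": time.time()}

if __name__ == "__main__":
    bus = SceneBus()
    # A renderer and an on-set simulcam viewer both listen for edits.
    bus.subscribe(lambda d: print("renderer applies:", json.dumps(d)))
    bus.subscribe(lambda d: print("simulcam applies:", json.dumps(d)))
    # A texture artist tweaks a material parameter; everyone sees it at once.
    bus.publish(make_delta("/shot_010/creature_geo", "roughness", 0.35))
```

In a real pipeline the bus would be a network service and the deltas would be applied to a shared scene description (USD is an obvious candidate), but the shape of the problem is the same: publish what changed, apply it everywhere, immediately.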

Thank you for reading. If you found this interesting, please be on the lookout for more posts from us, providing insights into production techniques, pipelines, and technologies.

Andrew Britton

CTO, Glassbox