The Art of the Key: Real-Time Green Screen Compositing on Set

In our last technical post, we walked through how tracking data flows from our Mo-Sys StarTracker through to the final rendered frame in Unreal Engine. But there’s a whole other side to what makes our green screen virtual production pipeline work: the composite itself. Getting a clean, convincing key — in real time, on set, with the director watching — is where a lot of the craft lives.

This post is a look at how we approach real-time compositing at Vedri North Wales. Not the theory of chroma keying (there are plenty of textbooks for that), but the practical, session-by-session decisions that determine whether your foreground talent looks like they belong in the virtual world or like they’ve been pasted on top of it.

Why Real-Time Matters

Traditional green screen compositing happens in post. You shoot the talent, shoot the plates or build the CG environments, then spend days or weeks in Nuke or After Effects pulling keys, suppressing spill, matching colour, and finessing edges. It works — and for complex VFX shots it’s still the right approach.

But for the kinds of productions we handle — commercials, music videos, corporate films, short-form narrative — there’s an enormous advantage to compositing live on set. The director sees a near-final image in the monitor. The DoP can adjust their lighting based on how the foreground sits against the virtual background. Decisions that would normally take weeks of back-and-forth in post get made in minutes. The shoot wraps with something very close to a finished shot.

That’s what our pipeline is built around. Camera tracking data from the StarTracker feeds into Unreal Engine, which renders the virtual environment from the correct perspective in real time. The camera feed goes through a keyer that strips out the green, and the two layers are combined into a composite output. All of this happens live, with latency measured in frames rather than days.
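The last step of that chain, combining the keyed foreground with the rendered background, is the standard "over" operation. As a minimal sketch (not our actual keyer, which runs on the GPU), it looks like this:

```python
import numpy as np

def composite_over(fg, bg, alpha):
    """Blend keyed foreground over the virtual background.

    fg, bg : float RGB arrays in [0, 1], shape (H, W, 3)
    alpha  : float matte in [0, 1], shape (H, W, 1); 1 = foreground
    """
    return fg * alpha + bg * (1.0 - alpha)

# Where alpha is 1 the camera pixel wins; where it is 0 the Unreal
# render shows through; fractional values blend the soft edges.
fg = np.full((2, 2, 3), 0.8)
bg = np.zeros((2, 2, 3))
alpha = np.ones((2, 2, 1))
out = composite_over(fg, bg, alpha)
```

Everything else in the pipeline exists to produce a good `alpha` and to make the two layers match before this blend happens.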

Pulling a Clean Key Starts Before the Shoot

The single biggest factor in compositing quality isn’t software — it’s how well the green screen is lit. We’ve written before about our lighting approach, but it bears repeating in this context: an unevenly lit screen is the source of most keying problems. Hot spots create areas where the green clips and loses detail. Shadows create areas where the green is too dark and starts to match skin tones or costume elements.

We aim for our green screen to be lit within half a stop of uniformity across its entire surface. We check this with a waveform monitor before every shoot, not just by eye. Once the screen is even, the keyer has a consistent colour range to work with, which means fewer edge artefacts, less spill, and a more forgiving setup for talent movement.
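The half-stop tolerance is easy to express numerically: a stop is a doubling of light, so the brightest and darkest patches of the screen should differ by no more than a factor of √2 in luminance. A rough sketch of that check on a linear-light frame (illustrative only; on set we read it off the waveform monitor):

```python
import numpy as np

def screen_uniformity_stops(frame):
    """Spread of screen brightness in stops.

    frame: linear-light RGB, shape (H, W, 3), cropped to the
    green-screen area only. Uses Rec.709 luma weights.
    """
    luma = frame @ np.array([0.2126, 0.7152, 0.0722])
    return np.log2(luma.max() / luma.min())

def is_evenly_lit(frame, tolerance_stops=0.5):
    # Half a stop of spread is our target from the post above.
    return screen_uniformity_stops(frame) <= tolerance_stops
```

A perfectly flat screen scores 0 stops; a hot spot twice as bright as the darkest corner scores a full stop and would fail the check.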

Wardrobe is the other pre-shoot conversation that saves hours of trouble. We always flag with the production team early: avoid green, obviously, but also avoid fine detail that the keyer will struggle with. Thin strands of hair, translucent fabrics, intricate jewellery — these aren’t impossible to key, but they require more aggressive settings that can introduce other problems. A quick wardrobe check saves everyone from discovering issues mid-take.

The Keying Chain

Our keying runs through a processing chain that handles several stages in sequence. The first stage is the core chroma extraction — identifying which pixels are green screen and which are foreground. We work with a tight key (the definitely-foreground area) and a loose key (the full extent including semi-transparent edges), which gives us a clean core matte with soft, natural edges where hair and fabric meet the background.
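One common way to get that core-plus-soft-edge behaviour is to measure how "green" each pixel is and ramp the alpha between two thresholds. This is a simplified stand-in for our keyer, with illustrative threshold values rather than production settings:

```python
import numpy as np

def pull_matte(frame, core=0.1, edge=0.4):
    """Foreground alpha from a simple greenness measure.

    greenness below `core` -> solid foreground (the tight key)
    greenness above `edge` -> solid background (the loose key)
    in between             -> soft edge for hair and fabric
    Thresholds are hypothetical, not production values.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    greenness = g - np.maximum(r, b)
    alpha = np.clip((edge - greenness) / (edge - core), 0.0, 1.0)
    return alpha[..., None]
```

A well-lit screen pushes every background pixel well above `edge` and every foreground pixel well below `core`, which is exactly why screen uniformity matters so much: the narrower the range of screen greenness, the more room the soft zone has for genuine edge detail.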

The second stage is spill suppression. Green screen bounce is inevitable — green light reflects off the screen onto the talent, giving skin and clothing a green tint, particularly on edges and in shadows. Our suppression process identifies and neutralises that green contamination without shifting the overall colour of the foreground. Getting this right is critical. Over-suppress and you introduce magenta fringing. Under-suppress and the talent looks like they’re standing in a green room, which they are, but the audience shouldn’t know that.

The third stage is edge refinement. Even with a well-lit screen, the boundary between foreground and background is where composites live or die. We apply subtle edge softening and colour correction along the matte boundary to blend the foreground into the virtual environment. This is where the look of the virtual background matters — an edge treatment that works against a bright daylight exterior will look wrong against a dark interior. We adjust this per shot.
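At its simplest, edge softening is a small blur applied to the matte so the hard keyed boundary gains a pixel or two of transition. A minimal stand-in (a plain 3×3 box blur; real edge treatment is more targeted than this):

```python
import numpy as np

def soften_matte(alpha, passes=1):
    """3x3 box blur on a 2D matte — a minimal edge-softening sketch.

    Each pass averages every pixel with its eight neighbours, turning
    a hard 0-to-1 step into a gentle ramp. Solid interior regions are
    surrounded by identical values, so they are unaffected.
    """
    a = alpha.copy()
    h, w = a.shape
    for _ in range(passes):
        padded = np.pad(a, 1, mode="edge")
        a = sum(padded[i:i + h, j:j + w]
                for i in range(3) for j in range(3)) / 9.0
    return a
```

More passes mean a wider, softer transition; part of the per-shot adjustment is choosing how much softness the background can support before the edge reads as a halo.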

Colour Matching: The Invisible Work

A technically perfect key still looks wrong if the colour spaces don’t match. The camera captures the real world in one colour profile. Unreal Engine renders its virtual world in another. The composite needs both to feel like they exist in the same space, under the same light.

We handle this at several levels. The Blackmagic Pyxis 6K shoots in a log colour space that preserves maximum dynamic range. Unreal Engine outputs in a linear or ACES pipeline depending on the project. Before the two meet in the composite, we apply colour transforms so they’re working in the same space. We also colour-grade the virtual environment to match the on-set lighting conditions — if the key light on set is warm and directional, the virtual scene needs to reflect that, or the composite will feel disconnected no matter how clean the key is.
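The shape of that transform is worth seeing. The constants below are placeholders for illustration — this is not the actual Blackmagic Film curve; a real pipeline uses the manufacturer's published transform or an OCIO config — but every log encoding follows the same pattern: equal steps in code value correspond to equal ratios (stops) of scene light.

```python
import numpy as np

# Hypothetical log curve: mid-grey (18% reflectance) maps to code 0.5,
# and each stop of scene light moves the code value by B.
A, B = 0.18, 0.433  # placeholder constants, not a real camera curve

def log_to_linear(code):
    """Decode a log code value back to linear scene light."""
    return A * 2.0 ** ((code - 0.5) / B)

def linear_to_log(lin):
    """Encode linear scene light into the log code space."""
    return 0.5 + B * np.log2(lin / A)
```

Once the camera feed is decoded to linear and the Unreal output is in the same working space, the two layers respond identically to grading, which is what makes the on-set match possible.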

This is one area where real-time compositing actually has an advantage over post workflows. Because the DoP can see the composite live, they can make lighting adjustments on set that improve the match. In post, you’re working with locked footage and trying to bend the CG to fit. On set, both sides can adapt.

What the Monitor Shows vs What Gets Delivered

One thing we’re always upfront about: the real-time composite is not the final deliverable. It’s very close — close enough to make creative decisions with confidence — but it’s a preview-quality output. The final delivery goes through an offline grade where we can refine the key at full resolution, apply final colour correction, and handle any edge cases that needed more attention than the live setup could give them.

For many of our projects, the gap between the on-set composite and the final grade is small. A commercial with clean talent against a well-designed virtual set might need only minor adjustments. A more complex shot with rain elements, interactive lighting, or fast camera movement might need a heavier post pass. We plan for both and make sure the client knows what to expect.

Getting Better Every Shoot

Real-time compositing in a green screen VP environment is still a relatively young discipline. The tools are improving rapidly — keying algorithms are getting smarter, GPU performance keeps climbing, and the integration between camera systems and render engines gets tighter with every update. We treat every shoot as a chance to refine our approach, test new settings, and feed what we learn back into our pipeline.

If you’re planning a production and wondering whether real-time green screen compositing is the right approach for your project, we’re always happy to talk through the specifics. Every shoot is different, and the best way to understand what’s possible is to have that conversation early.
