How ESPN and ILMxLAB took a trip to the Galaxy’s Edge: Part Two

Star Wars: Tales from the Galaxy’s Edge gives players the chance to explore entirely new locations and areas of the planet Batuu as they embark on a thrilling adventure in virtual reality. To celebrate the release of the game, the team at ILMxLAB collaborated with ESPN to create a television spot that let ESPN hosts Stan Verrett and Chiney Ogwumike actually visit one of the game’s most iconic locations: Seezelslak’s Cantina. While there, the duo encounter the boisterous bartender himself, Verrett tries his hand at a game of repulsor darts, and they take in the unforgettable view of the Millennium Falcon — with Ogwumike proving her Star Wars bona fides by discussing the famous ship.

To create the spot, director Jonathan Klein (ESPN’s award-winning “Jingle Hoops” commercial) collaborated with the visual effects supervisor, ILMxLAB’s Tim Alexander, to bring Tales from the Galaxy’s Edge out of VR and onto television screens. Melding live-action footage, the incredible design and environmental work of the game, and impeccable visual effects, the spot captures the spirit of Tales while also serving as a compelling piece of commercial filmmaking unto itself.

In the last installment of this making-of series, we talk with Tim Alexander about the visual effects process, how to make a VR Seezelslak talk to live actors, and the Star Wars movie that provided the commercial’s Millennium Falcon.

What is the timeframe to complete a project like this, and when in the process do you come on board?

Since this was a marketing initiative, the timelines are generally a lot shorter than many projects we collaborate on. I came on board roughly six weeks ahead of the delivery date. Then the due date pushed a few times as they revised their plan to shoot the spot in a COVID-compliant and safe manner, so the schedule actually slipped by a couple of weeks.

What kind of pieces are in place at that point? Are you working from a creative brief or storyboards?

For this particular spot there was a rough creative brief that had been done, and then we came in and started storyboarding it. We actually just took some captures from Tales from the Galaxy’s Edge in-engine and laid them out. And then, as more and more people came on from the ESPN side, their writer got involved, as well as a director, Jonathan Klein. And we worked together and fleshed out the story even more and came up with some other ideas.

At that point, it became more of a conversation about what the project could be, and how many shots we could do in the time allotted, and those kinds of things. So there was definitely a creative round, which is needed. A writer and a director come on, they’ve got a point of view, so it was really helpful to have them do that. Plus, the writer was from ESPN, so he knew the talent. He knew what would play to their strengths and make for the most compelling action.

After everyone’s on the same page about what you’re going to do, how do you break down how you’re going to execute it?

So, what we just discussed, that’s called pre-production. That’s the phase where you’re getting the story together, you’re figuring everything out, and then we plan the shoot. Then we move into production, and we go and actually shoot all of the scenes and elements we’ll need for the final spot, which in this case meant the green screen elements. And then we go into what we call post-production, which would be where we put the shots together and deliver them. Then they do a final, what they call “conform,” or DI [digital intermediate] or color pass, and then it gets released for broadcast.

So the next step was that we planned the shoot, which means we’re thinking, “Okay, we’ve got to do these six shots. They’re green screens. They’re approximately from these angles. These people are in them, and they’re going to say these particular lines.” We also needed to have a prop there. For example, if Stan is going to throw a dart, we need to make a dart and have it shipped to set ahead of time. Basically, you go shot-by-shot and break it down, making sure that you’ve got everything that you need to shoot that plate on that day.

And then meanwhile, our producer is starting to schedule our internal team for post-production to actually put the shots together. We start discussing, “Okay, if the plates come in on this day, then we can start the layout artists on this day, and then we can start the compositors on this day.” We start laying out a schedule and book the artists so we’re ready to go, so once the shoot is done, we can actually start doing the [effects work on the] shots.

That does seem like a very traditional, film-style workflow.

Certainly. And that’s something that Industrial Light & Magic specializes in, so in this case we actually asked for ILM production [personnel] because they know how to ingest plates, get them online, and put them in the right location so the compositors can start extracting the green screens. Meanwhile, we’re making background plates and doing the compositing. That’s very, very traditional ILM visual effects.

One of the really interesting angles on this spot is that you’re placing these actors in a world that already exists in virtual reality. How did that impact your approach?

That’s where it does branch from the standard ILM visual effects [workflow], because we did actually use the real-time scene [from Tales from the Galaxy’s Edge] in Unreal to render the background plates. Whereas if we had done this as a traditional ILM project, we would have probably ingested the assets [from the game], worked on them, changed them, and then rendered them through the traditional pipeline. In this case, though, we actually stayed in Unreal Engine, and we used the game assets for everything other than the shots where we look out the window of the cantina.

We determined after looking at everything that the cantina asset as built for the game was pretty great, and quite high-res, and looked good when we started rendering it for TV. So we didn’t actually have to go in and change that asset. But what we did do is, since we didn’t actually have to be real time and be in-game, we did turn up some of the settings. So we added some atmosphere, and we added a bit of depth of field to it, and we kind of made it look a little bit more cinematic. The backgrounds still rendered in near real time, and what we got out of the cantina looks better just because we spent a little bit more time rendering it.
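As a rough illustration of that “turn up the settings” step, here is a minimal Python sketch of layering cinematic overrides on top of a game’s real-time defaults before an offline render. The setting names are hypothetical stand-ins for whatever the team actually adjusted in Unreal, not real console variables or engine API.

```python
# Illustrative only: expressing "turn up the settings" as overrides layered
# on top of the game's shipping defaults. All names are hypothetical.

GAME_DEFAULTS = {
    "atmosphere_fog": False,      # often kept off in-game for performance
    "depth_of_field": False,      # VR titles usually avoid DoF
    "shadow_quality": "medium",
    "samples_per_pixel": 1,       # real-time budget
}

CINEMATIC_OVERRIDES = {
    "atmosphere_fog": True,       # adds haze for a more filmic look
    "depth_of_field": True,       # shallow focus reads better on TV
    "shadow_quality": "high",
    "samples_per_pixel": 16,      # we can afford longer frame times offline
}

def render_settings(for_broadcast: bool) -> dict:
    """Return the in-game defaults, optionally upgraded for a TV render."""
    settings = dict(GAME_DEFAULTS)
    if for_broadcast:
        settings.update(CINEMATIC_OVERRIDES)
    return settings

if __name__ == "__main__":
    print(render_settings(for_broadcast=True))
```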

When we looked out the window [overlooking Black Spire Outpost], we thought, “That doesn’t look quite good enough; that’s not going to hold up [on television].” So in that case we actually exported that geometry and those textures, and sent them over to the ILM environments department. And they worked on it, upgraded it, and made it higher fidelity for the actual spot.

Did that include all of the buildings, the Falcon, and the other elements?

Yes, those come from the game itself. The artists opened up Unreal, they loaded that level just as you would if you were going to work on the game, and then they did an export that way.

Now the Falcon — it turns out there’s a lot of different Falcons out there, so we actually replaced the Falcon with one from ILM. There’s a whole digital backlot created by ILM that Lucasfilm holds, so for all of the films, when they’re done they pull assets and they put them into a database. So whenever we need, like, R2-D2, or C-3PO, or the Falcon, you can go to that database and see if it’s already been built, and at what resolution, and for what purpose. So we pulled the Falcon from one of the films, so that we had the highest resolution Millennium Falcon possible for the spot.

Which film was it from?

I suspect it was from the one that [Industrial Light & Magic’s] Rob Bredow and Pat Tubach supervised, the Solo movie. I’m pretty sure that’s where the asset came from.

So you’ve got the Falcon, you’ve got the geometry from the game. Were there other things you did to plus-up that view of Black Spire Outpost?

So unfortunately, when you export from Unreal like that, the textures come through in a different way. When you work in-engine, you do a lot of tricks, like tiling textures, meaning you reuse them across surfaces, and that information doesn’t come through in the export. So the artists actually had to kind of retexture that [location]. But they knew where all the buildings were, where the staircase was, where the Millennium Falcon was, all that stuff. It’s a fairly straightforward process, but they had to redo the texturing. The view felt a little dead after that, so they went into their crowd system and added some people and droids walking around out there, so that it looked more alive. And then additionally, when we got to the compositing phase, the compositor added steam, and smoke, and some other moving elements within the shot to give it a bit more life.
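To make that texture point concrete, here is a tiny, hedged illustration of why tiling information tends to get lost on export: in-engine, UV coordinates often run past the 0–1 range so a small texture repeats across a large surface, and baking everything down to unique textures for a film-style pipeline wraps those coordinates back into 0–1, discarding the repeat. The function is purely illustrative, not part of any real export tool.

```python
# Purely illustrative: wrapping a tiled UV coordinate back into the 0-1 range,
# the way a bake to a unique texture effectively does. The "repeat this small
# texture N times" intent is gone after the wrap, which is why the artists
# had to retexture the exported environment by hand.
def wrap_tiled_uv(u: float, v: float) -> tuple:
    """Return the fractional part of a UV pair (Python's % already wraps negatives)."""
    return u % 1.0, v % 1.0

print(wrap_tiled_uv(3.25, -0.5))   # -> (0.25, 0.5)
```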

The other big element of the spot is Seezelslak himself. What was the process of moving him from the game and into this other context?

Here again, we tried to reuse directly from the game because the schedule didn’t afford time to recreate the asset. We had to use both the asset and the animation from the game. But because of the way we were rendering the backgrounds, we had to isolate that animation so that the person who was rendering the backgrounds could play the animation, and then render it, and we could also time sync it properly. Because when you’re in-game, everything’s just kind of running. There’s no way to really say, like, “Okay, start now and put the camera exactly here.” So we had to basically kind of freeze the game, if you will, put the [virtual] camera where we wanted it, and then bring in Seezelslak’s animation and play it back, so that it happened at the right moment from the right camera angle.

Is that a matter of getting these pieces in place, lining them up, and then playing them back inside the game engine where you can capture the shot with the virtual camera?

Well, you can do it that way, but that turns out to be difficult, because it’s not easy to just place the camera exactly where you want: when you hit “go,” the game starts running, all these simulations start happening, and the camera ends up somewhere else. It’s really not controllable that way, so we actually go out of the game, and put it into what they call the Sequencer, where you can actually hit play, and have it start, and stop, and everything stops. It’s kind of like another mode [in Unreal], if you will.

But we have to do that because we needed to get the camera from the right perspective. It needed to match the green screen [footage]. So when we got the green screen in, we sent it to the [ILM] layout department. The layout department artists are the ones that line up the CG camera to the real camera in the real world. Once we have that CG camera lined up, then we can render whatever we want, and it looks like it’s the same perspective as the green screen plate.

After that we handed that camera off to Unreal, and then [ILM Virtual Production Supervisor] Ian Milham imports that camera, imports Seezelslak’s animation, and then could hit play and render the animation at the right time from the right camera angle.
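As a conceptual sketch of the sync described above, the code below pairs a matchmoved camera (one transform per green-screen frame, as delivered by a layout department) with a character clip cued to start on a chosen frame, so the animation plays at the right moment from the right angle. None of these names are real Unreal or Sequencer API calls; they only show the bookkeeping, and all the numbers are made up.

```python
# Illustrative only: lining up a tracked camera with a character clip on a
# timeline, Sequencer-style. Hypothetical types and values throughout.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class CameraSample:
    frame: int
    position: Tuple[float, float, float]   # from the matchmove
    rotation: Tuple[float, float, float]   # pitch, yaw, roll from the matchmove

@dataclass
class TimelineFrame:
    frame: int
    camera: CameraSample
    anim_time: Optional[float]             # None before/after the clip plays

def build_timeline(camera_track: List[CameraSample],
                   anim_start_frame: int,
                   anim_length_frames: int,
                   fps: float = 24.0) -> List[TimelineFrame]:
    """Pair every tracked camera frame with the right moment of the character
    clip, so the line lands at the right time from the right angle."""
    timeline = []
    for cam in camera_track:
        local = cam.frame - anim_start_frame
        anim_time = local / fps if 0 <= local < anim_length_frames else None
        timeline.append(TimelineFrame(cam.frame, cam, anim_time))
    return timeline

# Usage sketch: a 48-frame plate, with the animation cued to start at frame 12.
track = [CameraSample(f, (0.0, 0.0, 170.0), (0.0, f * 0.1, 0.0)) for f in range(48)]
timeline = build_timeline(track, anim_start_frame=12, anim_length_frames=96)
```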

Is that the kind of technique that you could see yourself using again in the future?

Yeah, absolutely. I don’t know if you’ve seen [footage of] people playing in VR, where you see them, and you see the VR stuff all around them so you can see what they’re seeing. That’s kind of what we were emulating here. So it opens up the door for helping people understand what’s happening in VR experiences, because it’s oftentimes really hard to express that, or show that. I think that sort of VR capture is one of the best ways to show it.

And we can do that more in the future: if it’s the right type of marketing, we can shoot a green screen, match-move the camera, and then put that into the game and render it from those angles so we can do moving cameras. Most of the captures you see are from a static camera, but they don’t have to be if we’re going to match-move it.

You mentioned you used Seezelslak’s animation from the game. Did you need to tweak it for it to work in the spot?

No, and in fact we were pretty adamant that we weren’t going to tweak it, because the lip sync needed to be right on with the line that he was saying. It turned out, when we went to actually render that, that what he was doing acting-wise wasn’t very good [for the moment in the commercial], but we needed that line. So what we did was we found a spot later in his action that we liked, where he kind of steps in and puts his hand down on the bar. And we slip-synched just the facial animation to that point. So we created a new piece of animation, which was his body from one moment, and his face from another moment, and we combined those two so we got the action that we wanted.
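Here is a toy Python version of that “slip-sync” idea: take the body poses from the moment the team liked and the facial curves from the moment with the correct lip sync, and zip them into one new clip. Poses here are just dicts with made-up values; in production these would be full skeletal and blendshape curves, and the frame numbers are purely illustrative.

```python
# Illustrative only: combining the body from one moment of a clip with the
# face from another, played back together frame-for-frame.

def combine_clips(body_clip, face_clip, body_start, face_start, length):
    """Build a new clip: body poses from body_clip[body_start:], facial
    curves from face_clip[face_start:], merged per frame."""
    new_clip = []
    for i in range(length):
        frame = dict(body_clip[body_start + i])   # body pose for this frame
        frame.update(face_clip[face_start + i])   # overlay the facial curves
        new_clip.append(frame)
    return new_clip

# Usage sketch with made-up numbers: the body performance we want starts at
# frame 200 of the game animation, but the correct lip sync starts at frame 80.
body_clip = [{"body_pose": f} for f in range(400)]
face_clip = [{"jaw": f * 0.01, "brow": 0.0} for f in range(400)]
shot_anim = combine_clips(body_clip, face_clip, body_start=200, face_start=80, length=96)
```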

I would have expected there to be a lot of hands-on work there, especially with Seezelslak’s design and his different eyestalks.

There were some eyeline problems, so we also took his whole body, and rotated him so that when the two ESPN hosts are walking in, he was looking at them properly. When we first rendered it from the angle we’re supposed to be at, his eyeline was actually at camera, which works for the game because you want it to be right at the camera. But in the commercial, we actually needed it to be off-camera. So we rotated him a little bit to get his eyeline right. We made, if you will, easy changes to the animation, rather than going in and reanimating it completely.
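The eyeline fix boils down to a small yaw offset, which the sketch below computes: given where the character stands, where the game had him looking (straight down the lens), and where the hosts actually enter, you get the angle to rotate his whole body. The positions are invented for the example; this is not how the team’s tools are actually organized.

```python
# Illustrative only: how far to rotate a character so his eyeline moves from
# the camera to an off-camera target. All positions are made up.

import math

def yaw_to_target(origin, target):
    """Heading (degrees) from origin to target in the ground plane."""
    dx, dy = target[0] - origin[0], target[1] - origin[1]
    return math.degrees(math.atan2(dy, dx))

character_pos = (0.0, 0.0)
camera_pos    = (3.0, 0.0)    # his in-game eyeline: straight at the lens
hosts_pos     = (2.5, 1.2)    # where the two hosts walk in, off-camera

correction = yaw_to_target(character_pos, hosts_pos) - yaw_to_target(character_pos, camera_pos)
print(f"rotate the whole character by {correction:.1f} degrees")  # ~25.6 degrees
```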

In the spot, Stan Verrett picks up a repulsor dart and lets it fly towards the board. You mentioned fabricating a physical dart for the shoot. How did those pieces work together?

That was probably the most complex shot in the spot because he has to be physically holding a dart, and then throws it and the dart flies.

One way to do it is to have nothing in his hand, and then we track his hand, stick the CG dart into it, render it all the way through, and have it fly. The way we decided to do it, which is better, was to actually make a dart. So we took the dart model from the game, we 3D-printed it and got it painted so it looked awesome, so that on set he actually had the thing to pick up, at exactly the right size. So his fingers are now holding the dart properly. Without that there, he doesn’t really know how to hold it, and oftentimes people screw that up and then you’re reconstructing hands and doing all kinds of stuff after the fact to make it work.

He held the dart, and when he pulled back [to throw it], he just dropped it on the floor. And then at that moment, we take over with the CG dart and have it fly. Because it’s in motion, you can’t tell. But actually, if you look at the green screen plate, when he pulls back he just drops the dart out of his hand and then does the throw with an empty hand.
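The handoff he describes can be sketched as a simple per-frame switch: before the release frame the dart rides with the tracked hand (the practical prop is in frame, so nothing needs adding), and from the release frame on a CG dart takes over and travels to the board. In reality the flight was crafted by the artists; this only shows the switch-over logic, with invented numbers.

```python
# Illustrative only: practical-to-CG handoff for the thrown dart.

def dart_position(frame, release_frame, hand_track, release_pos, board_pos, flight_frames):
    """Position of the dart on a given frame of the shot."""
    if frame < release_frame:
        return hand_track[frame]                 # ride with the tracked hand
    t = min((frame - release_frame) / flight_frames, 1.0)
    # Simple linear travel toward the board (a repulsor dart, so no gravity drop).
    return tuple(r + t * (b - r) for r, b in zip(release_pos, board_pos))

# Usage sketch: 24 frames of hand track, release at frame 18, 10-frame flight.
hand_track = [(0.1 * f, 1.5, 0.0) for f in range(24)]
release = hand_track[18]
positions = [dart_position(f, 18, hand_track, release, (6.0, 1.6, 0.0), 10) for f in range(40)]
```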

It’s funny because it does get complicated really fast for just the smallest things. But that was definitely the most complicated shot that we had to do. The other thing we had to do, because he picks up the dart [in the shot], was to have a holder on set that was exactly the right height, that we could put the dart in. So when he grabs the dart, it’s the same height as the CG dart holder that’s actually going to be in the background. So we had to make sure all of our measurements were correct.

But that’s what I mean by you have to go shot by shot. Look at it, break it down, make sure you’ve got everything there that you need to get it right when you shoot the green screen, because you get one chance at that. And if you don’t, there are always ways to fix it, but it takes longer, takes more people, takes more money, that kind of thing. You have to really look at each shot and go, “Okay, how are we going to do this? What’s the best way to do this?” Sometimes the best way is not the least expensive way. It’s always a thought process to figure out what’s best for the shot.

Buy Star Wars: Tales from the Galaxy’s Edge now for Oculus Quest.