Adagio VR – Part 3: Character Art, Lighting and FX

Welcome back! This is the third post in a four-part series about building a short, cinematic VR experience in Unreal Engine. Across the series, we’ll be going over best practices for creating scenes in real-time engines and sharing some fun tips, tricks and time-savers that you can reuse later down the road in your own projects. Let’s jump back in!

Importing a Skeletal Mesh into Unreal

The crux of our experience is the encounter the viewer has with a ghost who briefly arises from her grave to peer one last time at the breaking dawn before moving on to distant shores. It’s a bit of an “if a tree falls in a forest” situation, and we need a character with a full range of motion to be able to convey that feeling through movement alone.

Unreal has two classifications for imported mesh files: Static Meshes (like our environment assets from the last post, or the ghost’s grave we just imported, displayed above) and Skeletal Meshes, assets that contain skeletal data. Fortunately for us, the import process for skeletal meshes is pretty similar to that of regular static meshes.

On import, Unreal will scan the asset file and recognize that it contains skeletal data. It’ll automatically check the “Skeletal Mesh” checkbox and, unless you already have the same mesh in your project, create a Skeleton asset file for your character. This particularity is shared by most if not all real-time engines, as skeletons and rigs are pretty weighty assets; in our case, if our ghost mesh had already been in the project, Unreal would have imported only the animation from the file.


The Skeletal Mesh asset window is similar to the Static Mesh window, with the exception that we can preview and select the various bones in our skeletal mesh. If you have an animation embedded inside the file, Unreal (and Unity, if that’s what you’re using) will also extract the animation into a separate asset. This is the case for our ghost character, as we only really needed one animation for the experience.

Dynamic Material Parameters for our Character Assets

For our ghost character, we’ll have to create a new material since we have three new shader requirements:

  • We want our ghost to be transparent with a fresnel effect

  • We want our ghost to blend with surrounding geometry

  • We want to be able to modulate the opacity of the ghost via blueprints

Thanks to a few very convenient pre-integrated material nodes, achieving these goals is quite simple. We just have to blend a Fresnel node with a Depth Fade node, then multiply the result by a scalar parameter from a Material Parameter Collection.
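If you’re curious what those nodes actually compute, the wiring boils down to roughly the following, where N is the surface normal, V is the view direction, p is the Fresnel exponent, and GhostOpacity is our (hypothetically named) collection parameter:

$$\text{Opacity} \approx \big(1 - \lvert N \cdot V \rvert\big)^{p} \times \operatorname{saturate}\!\left(\frac{\text{SceneDepth} - \text{PixelDepth}}{\text{FadeDistance}}\right) \times \text{GhostOpacity}$$

The first factor is the Fresnel node, the second is the Depth Fade node softening intersections with surrounding geometry, and the last is the handle we’ll animate from Blueprints.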


This asset type lets us modify parameters across many materials at runtime, which is how we’ll blend our ghost in and out of the world at different points in the experience.

We’ll come back to animating materials via Blueprints in the final post, but for now it’s convenient to have the hook in place. And since we also want to animate the glowing runes on our grave, it’s worth noting that we made a similar material for the grave mesh.
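To give a sense of what driving the collection at runtime will look like, here’s a minimal sketch in C++ (in Blueprints it’s a single “Set Scalar Parameter Value” node). The class name, the collection reference and the “GhostOpacity” parameter name are placeholders for illustration:

```cpp
// GhostController.h (sketch)
#include "GameFramework/Actor.h"
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"
#include "GhostController.generated.h"

UCLASS()
class AGhostController : public AActor
{
    GENERATED_BODY()

public:
    // Assign the Material Parameter Collection asset in the editor.
    UPROPERTY(EditAnywhere, Category = "Ghost")
    UMaterialParameterCollection* GhostParams;

    // Writes the hypothetical "GhostOpacity" scalar into the collection;
    // every material referencing the collection picks up the change immediately.
    void SetGhostOpacity(float NewOpacity)
    {
        UKismetMaterialLibrary::SetScalarParameterValue(
            this, GhostParams, TEXT("GhostOpacity"), NewOpacity);
    }
};
```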

Tweaking Engine Content to Our Needs

While the skybox that Unreal supplies is pretty good, the way it functions out of the box won’t do for our purposes. In our experience, we want the player to start in darkness and witness the dawn break over the hills. There’s a public variable inside the blueprint that we can use to modulate that very effect, but it isn’t available to edit in the Unreal sequencing tool by default. Fortunately, exposing that variable is simple enough; you just have to enter the blueprint edit view and tick the “Expose to Cinematics” checkbox.



With that parameter exposed, we’re now able to animate the sun rising in the Level Sequence we’ll assemble in the next post. As with the opacity modulation on our ghost material, we’re not going to get into animating it yet, but it’s again good to know that it’s available to us now.
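Incidentally, if your sky control lived in native C++ instead of a Blueprint, the equivalent of that checkbox is the Interp property specifier. A quick hypothetical sketch (the class and variable names are ours, not the sky sphere’s exact ones):

```cpp
// SkyController.h (sketch): exposing a variable to Sequencer from C++.
#include "GameFramework/Actor.h"
#include "SkyController.generated.h"

UCLASS()
class ASkyController : public AActor
{
    GENERATED_BODY()

public:
    // "Interp" is the C++ counterpart of "Expose to Cinematics":
    // it lets Sequencer create keyframes for this value.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Interp, Category = "Sky")
    float SunHeight = 0.0f;
};
```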

Stationary Skylights

Taking into account that we’re making a VR experience, having a fully dynamic sunrise might seem kind of daunting. VR games and shorts are notoriously power-hungry, and it’s no surprise that games like Job Simulator or experiences like Lost take a stylized approach to their art direction to mitigate performance hurdles. Fortunately for us, we’ve been pretty keen on optimization and our project is still relatively bare-bones, so we’ll easily be able to tweak our sky material and our stationary Skylight to our liking with no performance loss.

As with many other things, this isn’t the best place to look for a full breakdown of the tradeoffs between static and movable lights (I’d invite you to refer to Unreal’s documentation for that), but it is important to note that when using real-time engines you should fully take into account the role each light plays in the scene, and how each can be optimized as much as possible. Ever since modern game engines switched to deferred renderers (Unity Technologies has a great breakdown of Deferred versus Forward here), the number of dynamic lights is less of a hurdle, but it’s always good to remain vigilant.
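As a concrete example of that kind of vigilance: a Stationary skylight won’t track a changing sky on its own, but it can be asked to re-capture the sky at runtime. Here’s a minimal sketch, assuming you hold a reference to the skylight actor; how often you call it is the performance lever:

```cpp
// Sketch: refreshing a Stationary skylight as the sun rises.
#include "Engine/SkyLight.h"
#include "Components/SkyLightComponent.h"

void RefreshSkyLighting(ASkyLight* SkyLightActor)
{
    if (SkyLightActor)
    {
        // Re-captures the sky into the skylight's cubemap so ambient
        // lighting tracks the changing sky material. This isn't free,
        // so call it sparingly (e.g. a few times across the sunrise).
        SkyLightActor->GetLightComponent()->RecaptureSky();
    }
}
```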

Fog: Best Practices and VR Tradeoffs

Fog really can make or break a scene. It’s a great way to create depth in a scene with many visually similar elements (like our own), but it’s also, paradoxically, a way to “crush” the color palette of whatever’s on screen if you’re not careful. Fog has been a hallmark of real-time engine progress, from the primitive linear fog aiding both atmosphere and performance in Silent Hill to NVIDIA’s impressive FleX demos.

While both Unreal and Unity feature fully-fledged volumetric fog systems, we’ll only be using Unreal’s Exponential Height Fog actor for this project. Volumetric fog always looks impressive, but its complex calculations make it very performance-hungry, and the way it’s implemented makes it incompatible with Single-Pass Stereo rendering, the standard for most modern VR games and experiences (including our own).
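Even without the volumetric variant, the height fog gives us runtime knobs we’ll lean on later. A minimal sketch of adjusting it from C++ (the values here are arbitrary):

```cpp
// Sketch: tweaking an Exponential Height Fog actor at runtime.
#include "Engine/ExponentialHeightFog.h"
#include "Components/ExponentialHeightFogComponent.h"

void SetNightFog(AExponentialHeightFog* FogActor)
{
    if (FogActor)
    {
        UExponentialHeightFogComponent* Fog = FogActor->GetComponent();
        Fog->SetFogDensity(0.05f);      // arbitrary: thicker fog for the opening darkness
        Fog->SetFogHeightFalloff(0.2f); // arbitrary: how quickly fog thins with height
    }
}
```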

There’s not much else to add about fog for this project that we won’t broach when we start animating it in the last post. As with everything else, remember to be judicious when using it!

Post-Processing: The Great Equalizer


Most real-time engines nowadays feature a fully-fledged post-processing system, often taking the form of a customizable stack of effects and rendering tricks that can be blended and animated throughout the level. While Unreal’s post-processing system is a little rougher around the edges than Unity’s, it’s still very much serviceable.

Setting things up is pretty straightforward if you’ve ever worked in video editing software. Both Unreal and Unity support the ACES color space and offer adjustments standard to any image editor, like saturation, contrast and tone-mapping. It’s important to know that with VR, many staple rendering tricks featured in modern games, like chromatic aberration or screen-space reflections, half-work or don’t work at all. These issues can be attributed to two particularities of VR rendering:

  • We’re rendering through digital eyes, not digital cameras. Unless you’re simulating retinal detachment, things like bloom dirt masks and render masks cause great discomfort.

  • Some effects, like screen-space reflections, are only approximations of the real thing, and when viewed with VR’s stereo depth they may cause discomfort or just plain look like an M.C. Escher piece. Rendering your project with Two-Pass Stereo Rendering may alleviate these issues, but at the cost of roughly doubling your rendering overhead.

For our experience we’re only tweaking the basics a little: we’re adding some well-deserved ambient occlusion and adjusting the color grading to add a little more contrast. Depending on your scene, you might find it helpful to enter the buffer visualization mode for passes like ambient occlusion to see how your various post-processing settings affect your scene.
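For reference, those same tweaks can also be made on a Post Process Volume from code; each setting in Unreal comes paired with an override flag that must be enabled for the value to apply. A sketch with arbitrary values:

```cpp
// Sketch: overriding AO and contrast on a Post Process Volume from C++.
#include "Engine/PostProcessVolume.h"

void ApplyGrading(APostProcessVolume* Volume)
{
    if (Volume)
    {
        // Each property in FPostProcessSettings has a matching
        // bOverride_ flag; without it the value is ignored.
        Volume->Settings.bOverride_AmbientOcclusionIntensity = true;
        Volume->Settings.AmbientOcclusionIntensity = 0.7f; // arbitrary value

        Volume->Settings.bOverride_ColorContrast = true;
        Volume->Settings.ColorContrast = FVector4(1.1f, 1.1f, 1.1f, 1.0f); // slight contrast boost
    }
}
```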

It’s also good to know that the modular nature of these systems often allows you to add your own effects. This is the case for both Unity and Unreal, and we’ve taken advantage of this by creating a little sharpening material to add a bit more grain to our scene. You can get it here, and add it in the “Post Process Materials” section of the Post Process component.
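If you’d rather wire a material like that in from code, FPostProcessSettings has an AddBlendable helper for exactly this; here the material pointer is assumed to be loaded already:

```cpp
// Sketch: registering a custom post-process material as a blendable.
#include "Engine/PostProcessVolume.h"
#include "Materials/MaterialInterface.h"

void AddSharpenPass(APostProcessVolume* Volume, UMaterialInterface* SharpenMaterial)
{
    if (Volume && SharpenMaterial)
    {
        // A weight of 1.0 applies the material at full strength.
        Volume->Settings.AddBlendable(SharpenMaterial, 1.0f);
    }
}
```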

With our color grading complete, we now have our environment art, our character and our lighting ready for animating through the Unreal Sequencer, which is what we’ll be doing in the final part of this tutorial series. We’ll also be covering odds and ends of our project, like creating various blueprints for sequence control and making sure that your final output is tidily packaged. See you soon!
