Experimenting with Unity 5

Unity 5 has added some really cool lighting and shader features to help artists create more realistic-looking scenes. A lot of this is coupled to the out-of-the-box setup, but it's pretty easy in Unity to write new shaders that take advantage of the new lighting model.

RedFrame has traditionally not made much use of specular lighting because it would require dynamic lights to add the specular highlights. This slows things down since the scenes have hundreds of thousands of polygons. However, Unity's reflection probes seem to be quite cheap and can help mimic all sorts of real surface types.

As an experiment, I wrote a shader that takes the light map as the diffuse contribution but also has specular and occlusion maps that can interact with box-projected reflection probes. The video below shows the library using this technique on some of the surfaces. There is one dynamic point light in the center of the room to add some more vivid specular highlights, yet this runs at a few hundred frames per second with 8x anti-aliasing, which is a good sign.
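The probe side of this is easy to script. Here's a minimal sketch, assuming a ReflectionProbe component on the same GameObject and a probe set to refresh via scripting (BoxProbeSetup and the sizes are illustrative, not from our project):

using UnityEngine;

public class BoxProbeSetup : MonoBehaviour
{
    void Start ()
    {
        ReflectionProbe probe = GetComponent<ReflectionProbe> ();
        probe.boxProjection = true;             // project the cubemap onto the probe's box volume
        probe.size = new Vector3 (10f, 4f, 8f); // roughly match the room's dimensions
        probe.center = Vector3.zero;
        probe.RenderProbe ();                   // queue a probe update at runtime
    }
}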

Progress Update

For the past five months, Mike and I have been carving out a significant portion of our schedule to work on RedFrame. We’ve made great progress on several fronts. Mike has been working on the main code base, building a robust infrastructure that is now allowing us to set up puzzles and interactions that previously had been held together by ad-hoc prototype code. The types of interactive elements available in the game are very well known at this point so we’ve been able to front-load this engineering work.

During this same time period, I have migrated the entire house to new, cleaner Maya files, and in the process have greatly improved much of the texturing and model quality. I've also finally been able to get around to working on an area that I had put off for a long time: the yard. Happily, I feel that this is now one of the best areas in the game. I've also started work on the other environments outside of the house and am planning them out in broad strokes.

All of this work has been aimed toward building our first demo with interactive puzzles, which will continue to grow into the final game. As we begin winding down some of these time-consuming programming and art tasks, I will return to puzzle design and Mike will be freed up to work more on environmental storytelling.

There will be a lot to share with you this year and we’re very excited to show it to you. Thanks for the support and stay tuned!

Creating Floor Plan Screenshots

As we craft the puzzle structure for RedFrame, it's very useful to have a bird's-eye view of the environment so that we can better see how puzzles physically relate to one another. I spent some time over the weekend creating a simple Unity editor script that allows me to export two very large screenshots, one for each floor of our house environment.

The script creates a new downward-facing camera, sets it to orthographic mode, and adjusts its near and far clip planes to cut out only the vertical section of the house that I'm interested in. It then manipulates the camera's projection matrix to produce an oblique projection. This oblique perspective makes it much easier to see walls and determine height, and has the fun side effect of making it feel like a top-down RPG.

Rather than capturing an image with Unity's built-in Application.CaptureScreenshot method, I instead chose to render to a much larger off-screen RenderTexture with a square aspect ratio. This way I can guarantee that the resulting images will always be the same dimensions, regardless of how the Unity editor windows are set up.
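The core of the capture step looks something like this (a simplified sketch rather than the actual script; the class name, shear amount, and sizes are illustrative):

using UnityEngine;
using System.IO;

public static class FloorPlanCapture
{
    public static void Capture (float height, float nearClip, float farClip, int size, string path)
    {
        GameObject obj = new GameObject ("FloorPlanCamera");
        Camera cam = obj.AddComponent<Camera> ();
        cam.transform.position = new Vector3 (0f, height, 0f);
        cam.transform.rotation = Quaternion.Euler (90f, 0f, 0f); // look straight down
        cam.orthographic = true;
        cam.orthographicSize = 30f;
        cam.nearClipPlane = nearClip; // slice out a single floor of the house
        cam.farClipPlane = farClip;

        // Shear the projection along z to produce an oblique view that reveals walls
        Matrix4x4 m = cam.projectionMatrix;
        m[0, 2] = 0.3f;
        m[1, 2] = 0.3f;
        cam.projectionMatrix = m;

        // Render into a large square RenderTexture so the output size
        // is independent of the editor window layout
        RenderTexture rt = new RenderTexture (size, size, 24);
        cam.targetTexture = rt;
        cam.Render ();

        RenderTexture.active = rt;
        Texture2D tex = new Texture2D (size, size, TextureFormat.RGB24, false);
        tex.ReadPixels (new Rect (0f, 0f, size, size), 0, 0);
        tex.Apply ();
        RenderTexture.active = null;

        File.WriteAllBytes (path, tex.EncodeToPNG ());
        Object.DestroyImmediate (obj);
    }
}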

I combined the two floor images in Photoshop as separate layers, and gave the top floor a slight drop shadow. I can easily toggle between the top and bottom floor by hiding the top layer. I've created additional layers in which I can create diagrams and notes. As the environment evolves, it'll be very easy to re-run the script in Unity, producing a new pair of screenshots that can be dropped into the same Photoshop file.

You can download my floor plan screenshot script here. It was written very quickly, so if you see room for improvement please let me know!

- Michael

[Image: RedFrame-House-Map]

Environment Update

A few months ago I finished building and lighting the RedFrame house environment. Not including bathrooms, the house has 17 furnished rooms and a couple of outdoor areas. The general look has changed a lot since we last showed a demo. I've started to use higher contrast in many areas, and the color scheme within each room has converged into a unified style, making each room feel unique. Here's a quick tour of some of the areas that convey the main feel of the game.

-Andrew

Repurposing Old Systems

It's always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame's core gameplay, a lot of our ideas reached a fairly playable state, only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN - that's for another post) we've retained a complete history of our creative and technical output. I'll often peruse old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as "portable", then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It's a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned - it just didn't work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we'll talk more about this in future posts) and we'd lost the feeling of actively participating in this dense world we'd created. To avoid spoilers I won't reveal the precise nature of the solution we're currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem and code that can potentially solve it... but how much of this code is actually usable? Luckily, it came into our new project without any errors.

In general, it's not uncommon for older code to have to be thrown away simply because it can't easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

PortableObject FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)

This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can't be guaranteed that our solution requires us to render a camera view. It's generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.
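A minimal sketch of how this search might work, approximating the frustum with a simple view cone and assuming each PortableObject registers itself in a static list (the actual implementation differs):

using UnityEngine;
using System.Collections.Generic;

public class PortableObject : MonoBehaviour
{
    public static readonly List<PortableObject> All = new List<PortableObject> ();

    void OnEnable ()  { All.Add (this); }
    void OnDisable () { All.Remove (this); }

    public static PortableObject FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)
    {
        PortableObject nearest = null;
        float nearestDistance = maxDistance;
        foreach (PortableObject obj in All) {
            Vector3 offset = obj.transform.position - trans.position;
            // Reject objects that are too far away or outside the view cone
            if (offset.magnitude > nearestDistance)
                continue;
            if (Vector3.Angle (trans.forward, offset) > viewAngle * 0.5f)
                continue;
            nearest = obj;
            nearestDistance = offset.magnitude;
        }
        return nearest;
    }
}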

DropNode FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)

On the surface, this method is effectively identical to FindNearestPortableObject. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:

  1. Technical - It's possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
  2. Psychological - By using consistent parameters across multiple methods, the programmer's cognitive load is significantly reduced. When it's easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that manages picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene's GameObject hierarchy aside from the existence of a camera, and doesn't require any particular setup of the GameObject it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into. By making the fewest possible assumptions, it can be included in just about any project without modification.
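In skeletal form, the controller might look like this (a simplified sketch, not the shipping code; DropNode's search is stubbed out but mirrors the cone search above):

using UnityEngine;

public class DropNode : MonoBehaviour
{
    public static DropNode FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)
    {
        return null; // analogous cone search over registered, unoccupied nodes
    }
}

public class PickupController : MonoBehaviour
{
    public float maxDistance = 2f;
    public float viewAngle = 30f;

    PortableObject held;

    void Update ()
    {
        // The only scene assumption: a main camera exists
        Transform cam = Camera.main.transform;

        if (!Input.GetMouseButtonDown (0))
            return;

        if (held == null) {
            // Empty-handed: try to grab the portable object in view
            held = PortableObject.FindNearestPortableObject (cam, maxDistance, viewAngle);
        } else {
            // Holding something: try to place it at a valid drop location
            DropNode node = DropNode.FindNearestUnusedNode (cam, maxDistance, viewAngle);
            if (node != null) {
                held.transform.position = node.transform.position;
                held = null;
            }
        }
    }
}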

Writing reusable code certainly isn't easy, but I've found that its long-term benefits tend to outweigh the modest increase in development time. Once you're comfortable writing reusable code, you'll find that your earlier work pays off again and again, making you more productive by obviating the need to solve the same problems repeatedly.

-Michael

Global Managers With Generic Singletons

Global state and behavior can be a bit tricky to handle in Unity. RedFrame includes a few low-level systems that must always be accessible, so a robust solution is required. While there is no single solution to the problem, there is one particular approach that I've found most elegant.

There are many reasons one might need global state: controlling menu logic, building additional engine code on top of Unity, executing coroutines that control simulations across level loads, and so on. By design, all code executed in Unity at runtime must be attached to GameObjects as script components, and GameObjects must exist in the hierarchy of a scene. There is no concept of low-level application code outside of the core Unity engine – there are only objects and their individual behaviors.

The most common approach to implementing global managers in Unity is to create a prefab that has all manager scripts attached to it. You may have a music manager, an input manager, and dozens of other manager-like scripts stapled onto a single monolithic "GameManager" object. This prefab object would be included in the scene hierarchy in one of two ways:

  • Include the prefab in all scene files.
  • Include the prefab in the first scene, and call its DontDestroyOnLoad method during Awake, forcing it to survive future level loads.

Other scripts would then find references to these manager scripts during Start through one of a variety of built-in Unity methods, most notably FindWithTag and FindObjectOfType. You'd either find the game manager object in the scene and then drill down into its components to find individual manager scripts, or you'd scrape the entire scene to find manager scripts directly. A slightly more automated and potentially more performant option is to use singletons.
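For illustration, the manual lookup pattern looks something like this (MusicManager is a hypothetical manager script):

using UnityEngine;

public class MusicManager : MonoBehaviour { }

public class Player : MonoBehaviour
{
    MusicManager music;

    void Start ()
    {
        // Drill down from the tagged manager object to an individual manager script...
        music = GameObject.FindWithTag ("GameManager").GetComponent<MusicManager> ();

        // ...or scrape the scene for the component directly
        music = FindObjectOfType<MusicManager> ();
    }
}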

Singleton Pattern

The singleton design pattern facilitates global access to an object while ensuring that only one instance of the object ever exists at any one time. If an instance of the singleton doesn't exist when it is referenced, it will be instantiated on demand. For most C# applications, this is fairly straightforward to implement. In the following code, the static Instance property may be used to access the global instance of the Singleton class:

C# Singleton

public class Singleton
{
    static Singleton instance;
 
    public static Singleton Instance {
        get {
            if (instance == null) {
                instance = new Singleton ();
            }
            return instance;
        }
    }
}

Unity unfortunately adds some complication to this approach. All executable code must be attached to GameObjects, so not only must an instance of a singleton object always exist, but it must also exist someplace in the scene. The following Unity singleton implementation will ensure that the script is instantiated in the scene:

Unity Singleton

public class UnitySingleton : MonoBehaviour
{
    static UnitySingleton instance;
 
    public static UnitySingleton Instance {
        get {
            if (instance == null) {
                instance = FindObjectOfType<UnitySingleton> ();
                if (instance == null) {
                    GameObject obj = new GameObject ();
                    obj.hideFlags = HideFlags.HideAndDontSave;
                    instance = obj.AddComponent<UnitySingleton> ();
                }
            }
            return instance;
        }
    }
}

The above implementation first searches for an instance of the UnitySingleton component in the scene if a reference doesn't already exist. If it doesn't find a UnitySingleton component, a hidden GameObject is created and a UnitySingleton component is attached to it. In the event that the UnitySingleton component or its parent GameObject is destroyed, the next call to UnitySingleton.Instance will instantiate a new GameObject and UnitySingleton component.

For games that include many manager scripts, it can be a pain to copy and paste this boilerplate code into each new class. By leveraging C#'s support for generic classes, we can create a generic base class for all GameObject-based singletons to inherit from:

Generic Unity Singleton

public class UnitySingleton<T> : MonoBehaviour
    where T : Component
{
    private static T instance;
    public static T Instance {
        get {
            if (instance == null) {
                instance = FindObjectOfType<T> ();
                if (instance == null) {
                    GameObject obj = new GameObject ();
                    obj.hideFlags = HideFlags.HideAndDontSave;
                    instance = obj.AddComponent<T> ();
                }
            }
            return instance;
        }
    }
}

A base class is generally unable to know about any of its sub-classes. This is very problematic when inheriting from a singleton base class – for the sake of example, let's call one such sub-class "Manager". The value of Manager.Instance would be a UnitySingleton object instead of its own sub-type, effectively hiding all of Manager's public members. By converting UnitySingleton to a generic class as seen above, we are able to change an inheriting class's Instance from the base type to the inheriting type. When we declare our Manager class, we must pass its own type to UnitySingleton<T> as a generic parameter: public class Manager : UnitySingleton<Manager>. That's it! Simply by inheriting from this special singleton class, we've turned Manager into a singleton.
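As a hypothetical illustration, here's what declaring and using such a manager might look like (Manager and Ping are examples, not classes from our project):

using UnityEngine;

// A global manager declared by passing its own type as the generic parameter
public class Manager : UnitySingleton<Manager>
{
    public void Ping ()
    {
        Debug.Log ("Manager is alive");
    }
}

public class SomeBehaviour : MonoBehaviour
{
    void Start ()
    {
        // Instance is typed as Manager rather than UnitySingleton,
        // so Manager's public members are fully visible
        Manager.Instance.Ping ();
    }
}

There is one remaining issue: persistence. As soon as a new scene is loaded, all singleton objects are destroyed. If these objects are responsible for maintaining state, that state will be lost. While a non-persistent Unity singleton works just fine in many cases, we need to have one additional singleton class in our toolbox: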

Persistent Generic Unity Singleton

public class UnitySingletonPersistent<T> : MonoBehaviour
    where T : Component
{
    private static T instance;
    public static T Instance {
        get {
            if (instance == null) {
                instance = FindObjectOfType<T> ();
                if (instance == null) {
                    GameObject obj = new GameObject ();
                    obj.hideFlags = HideFlags.HideAndDontSave;
                    instance = obj.AddComponent<T> ();
                }
            }
            return instance;
        }
    }
 
    public virtual void Awake ()
    {
        DontDestroyOnLoad (this.gameObject);
        if (instance == null) {
            instance = this as T;
        } else {
            Destroy (gameObject);
        }
    }
}

The preceding code will create an object that persists between levels. Duplicate copies may be instantiated if the singleton has been embedded in multiple scenes, so this code will also destroy any additional copies it finds.

Caveats

There are a few important issues to be aware of with this approach to creating singletons in Unity:

Leaking Singleton Objects

If a MonoBehaviour references a singleton during its OnDestroy or OnDisable while running in the editor, the singleton object that was instantiated at runtime will leak into the scene when playback is stopped. OnDestroy and OnDisable are called by Unity when cleaning up the scene in an attempt to return the scene to its pre-playmode state. If a singleton object is destroyed before another script references it through its Instance property, the singleton object will be re-instantiated after Unity expected it to have been permanently destroyed. Unity will warn you of this in very clear language, so keep an eye out for it. One possible solution is to set a boolean flag during OnApplicationQuit that is used to conditionally bypass all singleton references included in OnDestroy and OnDisable.
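That guard might look something like this (a minimal sketch; Listener stands in for any script that references a singleton during teardown):

using UnityEngine;

public class Listener : MonoBehaviour
{
    static bool isQuitting;

    void OnApplicationQuit ()
    {
        // Also raised in the editor when playback is stopped
        isQuitting = true;
    }

    void OnDestroy ()
    {
        if (isQuitting)
            return; // don't touch singletons during shutdown, or they'll be re-instantiated

        // safe to reference singletons here, e.g. Manager.Instance...
    }
}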

Execution Order

The order in which objects have their Awake and Start methods called is not predictable by default. Persistent singletons are especially susceptible to execution ordering issues. If multiple copies of a singleton exist in the scene, one may destroy the other copies after those copies have had their Awake methods called. If game state is changed during Awake, this may cause unexpected behavior. As a general rule, Awake should only ever be used to set up the internal state of an object. Any external object communication should occur during Start. Persistent singletons require strict use of this convention.
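As a hypothetical illustration of this convention applied to a persistent singleton (ScoreManager is an example, not one of our managers):

using UnityEngine;

public class ScoreManager : UnitySingletonPersistent<ScoreManager>
{
    int score;

    public override void Awake ()
    {
        base.Awake (); // let the base class resolve duplicates first
        score = 0;     // internal state only; never touch other objects here
    }

    void Start ()
    {
        // External communication is safe once every object has run Awake
        Debug.Log ("Starting score: " + score);
    }
}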

Conclusion

While singletons are inherently awkward to implement in Unity, they're often a necessary component of a complex game. Some games may require many dozens of manager scripts, so it makes sense to reduce the amount of duplicated code and standardize on a method for setting up, referencing, and tearing down these managers. A generic singleton base class is one such solution that has served us well, but it is by no means perfect. It is a design pattern that we will continue to iterate on, hopefully discovering solutions that more cleanly integrate with Unity.

- Michael

Open Multiple App Instances in Mac OS X

We do all of our light baking for RedFrame on a beefy Mac Pro, but due to limitations in Maya and Mental Ray we have to run multiple instances in order to saturate the available processor cores. On Windows it's very simple to run multiple instances of a single application – this is the default behavior – but we work on OS X which only allows one instance of an app to be running at any given time. We've commonly used a messy workaround: duplicating the application on disk and keeping references to its many copies in the Dock.

Today I discovered a much better solution. It's possible to open an unlimited number of instances of an app through the terminal. To launch a new instance of Maya 2012, I just execute the following command:

open -n /Applications/Autodesk/maya2012/Maya.app

Using Platypus I bundled this command into an application that sits in the Dock, ready to spawn additional Maya instances on demand.

- Michael

Advanced lightmapping in Unity

Note: Lightmapping Extended is no longer compatible with Unity since the Beast light baking system has been removed from the engine.

While investigating potential lightmapping solutions for RedFrame, we explored Unity's own lightmapping system, which leverages Autodesk's Beast. Beast unfortunately lacks a few of the more obscure features useful for simulating realistic artificial indoor lighting, most notably photometric lights for reconstructing the unique banding patterns cast by incandescent bulbs installed in housings. This prevents us from completely switching our workflow from Mental Ray to Beast, though we'll likely still use Beast for specific locations in the game that favor its feature set.

Beast is quite a full-featured lightmapping solution in itself; however, Unity's specific implementation of the tool favors simplicity over customization. Some very useful features are hidden away, and it's not immediately obvious how to enable them. To give Beast a fair evaluation, I needed to access them.

Unity fortunately is able to accept Beast XML configuration files, opening up nearly the full potential of the system. There are a plethora of additional options recognized by Beast, but only a limited number are documented by Unity. After a bit of digital archaeology I was able to unearth documents that revealed the missing parts of the API.

I've created a Unity editor tool called Lightmapping Extended that implements the full Beast XML specification and presents all available (and compatible) options in a user-friendly UI. It's available on GitHub:

Download source code from GitHub

This tool unlocks a few key hidden features not available in Unity's built-in Lightmapping settings window:

  • Image-Based Lighting - light a scene with an HDR skybox, mimicking realistic outdoor lighting
  • Path Tracer GI - a fast, multi-bounce lighting solution
  • Monte Carlo GI - a very slow but extremely accurate lighting solution

Keep an eye on the Lightmapping Extended thread on the Unity forum for future updates. If you run into any issues, please let me know either through the blog comments or the Unity forum thread. I'd like to make this the best and most complete solution for lightmapping inside of Unity.

- Michael

Poly Reduction Prioritization

The central environment in RedFrame is a large mansion. While developing the 3d model of the house I didn't pay much attention to its total resolution; I wanted to see how far I could push mid-range hardware and didn't want the design of the environment to be influenced by technical considerations. To our delight, the house runs completely smoothly on an ATI Radeon HD 5770 with a gig of video memory. Although this video card is no slouch, it's also not a high-end gaming GPU.

The resolution of the house model was originally 1,371,298 vertices. We're going to expand the environment quite a bit and will need to keep the game playable on as many systems as possible, so I've started the process of reducing the resolution of the Maya model as much as possible without negatively affecting the way it's perceived by the player. I realized that a lot of our detail was unnecessary; some of the tight detail even detracted from the image by causing flickering when anti-aliasing was disabled.

The scene is quite large, so prioritizing my time is a little difficult. My first thought was just to go through each room looking for objects that are more detailed than they need to be, but this approach is somewhat arbitrary. My second technique has been to print a list of all objects in the scene, ordered by their resolution. It is still arbitrary in a sense, but it has been a nice weapon with which to attack the problem.

Because I'm more comfortable programming in Unity than in MEL, I wrote a C# script to sort models by resolution. It's my first time using LINQ, which I still need to wrap my head around. You can download the script here - just import your model into Unity, drop it into a new scene, and attach the script to the model's root game object.
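In rough form, the script does something like this (a simplified sketch; the downloadable script differs in the details):

using UnityEngine;
using System.Linq;

public class MeshResolutionReport : MonoBehaviour
{
    void Start ()
    {
        // Gather every mesh under this object and order by vertex count, largest first
        var report = GetComponentsInChildren<MeshFilter> ()
            .Where (mf => mf.sharedMesh != null)
            .OrderByDescending (mf => mf.sharedMesh.vertexCount)
            .Select (mf => mf.name + ": " + mf.sharedMesh.vertexCount + " vertices");

        foreach (string line in report)
            Debug.Log (line);
    }
}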

-Andrew

Lightmap Workflow, Part 1: UV Generation

RedFrame's lighting tends to look a bit different from that of most games. We achieve this unique look by generating most lighting externally, using techniques inspired by pre-rendered architectural visualization. We set up and bake our lighting in Maya and Mental Ray rather than leveraging Unity's built-in lightmap rendering tools.

Our current workflow is a three-step process: generate lightmap UVs, bake direct and/or indirect lighting, and import the resulting lightmap images into Unity's existing lightmap system. In Part 1 of our series on lightmapping, we'll explore the process of generating lightmap UVs.

Approach

Unity includes an automatic lightmap UV generation tool. This is a one-way process, and it would be impractical to transfer these UVs back into Maya. Even setting that limitation aside, we take a philosophically different approach in our workflow: where Unity embraces automated simplicity, we've chosen manual control. Our workflow produces two important advantages for us: it creates model files that contain intrinsic lightmap UVs that may be used by other applications and engines, and it offers deep control over how objects are divided at the face level, which can produce higher-quality results with fewer visual artifacts.

Mesh Grouping

We begin our process of building lightmap UVs by merging environmental geometry into localized groups. The objects in each group will all share the same lightmap texture. Each of these groups is about a quarter the size of a room.

[Image: lightmap-uvs-figure1]

To optimize the use of texture memory in Unity, we don't want to generate lightmaps for every individual piece of geometry. Separate objects are able to share a single lightmap provided that none of their UVs overlap. To ensure that the objects share a unique UV space, we temporarily merge every object within a group into a single mesh.

Manual UV Layout

Once we have a single mesh for a group of objects, we must first create a new UV set for the mesh. We want individual control over the UV layout for both the color and light maps; the second UV set will be used for lightmapping.

In the newly created UV set, run Maya's automatic UV generation by selecting Create UVs -> Automatic Mapping from the polygon menus. The generated UV map is usually fairly efficient, but it can be compacted further by cutting UV edges that form right angles and then running a Layout operation.

[Image: lightmap-uvs-figure2]

In the image below, the circled areas are spots where it would be a good idea to cut UV edges:

[Image: lightmap-uvs-figure3]

Manually separating UV shells for smooth objects with hard corners such as crown molding, or softer organic shapes such as upholstered furniture, can also minimize artifacts in lightmaps. Artifacts can be further reduced by tweaking the position of individual vertices as needed.

[Image: lightmap-uvs-figure4]

Mesh Breakup

Once the mesh containing a group of objects has had its UVs efficiently laid out, we break the mesh into smaller pieces so that they can be culled by Unity via frustum culling and occlusion culling. The most sensible way we've found to re-divide each group mesh is by material. We wrote a MEL script to automatically do this, and we've made it available here.

This script will separate a mesh into pieces based on its materials, and will place each piece into a group node containing all meshes that share the same lightmap UV space. The script is a little ad-hoc and will break if one of the contained materials is the default material. Any suggestions on how we might improve the script are welcome.

Unity Import

Unity uses two UV channels per mesh, the first for displaying color maps and the second for lightmaps. When importing a model, Unity will automatically read the mesh's second UV channel. Be certain to disable "generate lightmap UVs" in the model's asset import settings, otherwise Unity will overwrite the UVs that were manually laid out in Maya.
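If you'd rather not rely on remembering that checkbox, an import-time guard is easy to script. Here's a small sketch using Unity's AssetPostprocessor (not part of our pipeline; a real version would probably filter by asset path):

using UnityEditor;

public class LightmapUVGuard : AssetPostprocessor
{
    void OnPreprocessModel ()
    {
        // Never let Unity overwrite the second UV channel authored in Maya
        ModelImporter importer = (ModelImporter)assetImporter;
        importer.generateSecondaryUV = false;
    }
}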

Check out the next part in our series: Lightmap Workflow, Part 2: Architectural Lighting

Welcome to the RedFrame Development Blog

RedFrame is an exploratory adventure game in production by Andrew Coggeshall and Michael Stevenson. We've been working on RedFrame in our spare time for more than two years, but until now we've largely kept our work under wraps. We'd like to begin sharing with you what we've accomplished so far, and what still lies ahead as we continue crafting the world of RedFrame.

Through this blog we'll be highlighting major aspects of development, giving you a peek into our process. This blog will be written from a technical perspective and will be spoiler-free. If you have any questions or comments, feel free to contact us.

Thanks, and we hope you enjoy!

Michael & Andrew