RedFrame Library DK2 Demo

We have had some recent success with the Oculus DK2 drivers, and today we're releasing a new RedFrame environment demo, available for download from the links below. This demo features the library, a key location in RedFrame, and a nice companion to the master bedroom that we released for DK1 last year. This is a slightly smaller environment (we don't want to give too much away), but it contains some hints about what's to come. I've included some of the new rendering techniques featured in my last post, Experimenting with Unity 5, which look very good in VR.

Before playing, be sure to specify your height and IPD in the Oculus Config application included with the Oculus Rift Runtime. The RedFrame environment is precisely modeled to scale, which can magnify discrepancies in your virtual height. We've also included a "seated mode" (see controls below) that will approximately match your height when sitting in a desk chair, greatly increasing both immersion and sense of scale.

Download

Windows
Mac

Controls for Keyboard

  • Move – W, A, S, D
  • Turn – Q, E, or mouse
  • Sit – Space or Tab
  • Recenter View – R

Controls for Gamepad

  • Move – Left stick
  • Turn – Right stick or bumpers
  • Sit – A (Xbox), X (PS3/4)
  • Recenter View – Y (Xbox), Triangle (PS3/4)

Troubleshooting

Compared to the Oculus Rift DK1, the setup for DK2 can be a bit more complex. It’s hard to say how well it will run on every system, but we have a few tips that got it working well for us:

  1. On Windows, change your Oculus settings to use “Direct to Rift” mode instead of “Extended” mode, and run the Direct to Rift app included with our demo.
  2. Don't mirror your display; it causes bad jitter.
  3. Update your graphics drivers after verifying that they’re compatible with your Rift.
  4. If the frame rate doesn’t feel smooth, relaunch the app and select the lower quality setting. The two presets we’ve included should perform well on most computers.
  5. If your screen goes black, it may be because your head passed through an object. This is a feature we added to handle collisions with head tracking.
  6. If the screen shows up tiny in the corner, make sure the resolution is set to 1920 x 1080 on launch.
  7. Sometimes with the OS X build, the cursor won't hide. If this happens, you can just drag it to the top of the screen.

Since this is still a work in progress and far from perfect, if you have trouble please let us know about your experience. Your feedback is very helpful!

Posted in Uncategorized

Experimenting with Unity 5

Unity 5 has added some really cool lighting and shader features to help artists create more realistic-looking scenes. A lot of this is coupled to Unity's out-of-the-box setup, but it is pretty easy to write new shaders that take advantage of this new lighting model.

RedFrame has traditionally not made much use of specular lighting because it required using dynamic lights to add the spec highlights. This slows things down a bit since the scenes have hundreds of thousands of polygons. However, Unity's reflection probes seem to be pretty cheap and can help mimic all sorts of real surface types.

As an experiment, I wrote a shader that takes the light map as the diffuse contribution but also has specular and occlusion maps that can interact with box-projected reflection probes. The below video shows the library using this technique on some of the surfaces. There is one dynamic point light in the center of the room to add some more vivid spec highlights, but this is running at a few hundred frames per second with 8x anti-aliasing, so it is a good sign.
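For anyone curious how box projection is set up, here is a minimal sketch of configuring a box-projected reflection probe for a room like the library. The component values (room size, position) are illustrative assumptions, not our actual setup:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: a baked, box-projected reflection probe sized to a room.
// Dimensions are hypothetical placeholders for a library-sized space.
public class LibraryProbeSetup : MonoBehaviour
{
    void Start ()
    {
        var probe = gameObject.AddComponent<ReflectionProbe>();
        probe.mode = ReflectionProbeMode.Baked;
        probe.boxProjection = true;                 // parallax-correct reflections within the box
        probe.size = new Vector3(10f, 4f, 12f);     // approximate room bounds
        probe.center = Vector3.zero;                // box centered on this GameObject
    }
}
```

With box projection enabled, reflections stay anchored to the room's surfaces as the camera moves, which is what makes baked specular feel plausible on flat walls and floors.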

Posted in Uncategorized

Progress Update

For the past five months, Mike and I have been carving out a significant portion of our schedule to work on RedFrame. We've made great progress on several fronts. Mike has been working on the main code base, building a robust infrastructure that now allows us to set up puzzles and interactions that had previously been held together by ad-hoc prototype code. The types of interactive elements available in the game are very well known at this point, so we've been able to front-load this engineering work.

During this same time period, I have migrated the entire house to new, cleaner Maya files, and in the process have greatly improved much of the texturing and model quality. I've also finally gotten around to working on an area that I had put off for a long time: the yard. Happily, I feel that this is now one of the best areas in the game. I've also started work on the other environments outside of the house and am planning them out in broad strokes.

All of this work has been aimed toward building our first demo with interactive puzzles which will continue to grow out into the final game. As we begin winding down some of these time consuming programing and art tasks, I will return to puzzle design and Mike will be freed up to work more on environmental storytelling.

There will be a lot to share with you this year and we're very excited to show it to you. Thanks for the support and stay tuned!

Posted in Uncategorized

Enter VR Podcast

Recently, Mike and I had the pleasure of speaking with Cris Miranda, who hosts a podcast entitled Enter VR. We chatted about RedFrame as well as VR in general. It was a lot of fun, and we were able to verbalize a lot of things we had been thinking about with the game as well as other projects we'd like to do in the future.

Check it out here.

Posted in Uncategorized

RedFrame Oculus Rift Demo!

It’s been a while since we posted anything about RedFrame – we took a short break to avoid burnout, and have been creatively re-energized by focusing on other work for a while. We’re gearing up to do a lot of work on the game in 2014 and will have exciting new things to show you. To kick off the new year we’re releasing our first Oculus Rift demo in which you can experience one small piece of our environment, the master bedroom.

Both Mike and I have Oculus Rift developer kits and have been very excited to see the RedFrame environment in VR. In fact, VR is such a qualitatively different experience that we’re adjusting many of our design decisions to better support it. RedFrame feels like it always was meant to be a VR experience, and the technology has finally arrived to support it!

This demo is intended to provide the general flavor of the experience that we want to create, rather than demonstrating gameplay (there are no puzzles or interaction). We’ve also included a new track from our musician, Notious, who has been creating wonderful compositions for the game. You can check out his other work here.

We’d love to hear what you think!

Download

Windows
Mac

Instructions

The RedFrame Oculus demo is best experienced with a gamepad. We support Xbox, PS3, and PS4 controllers on every platform.

Before playing, be sure to specify your player height and IPD in the Oculus Config Util included in the Oculus Rift SDK.

We've also included a "sitting mode" that simulates your height while sitting in a desk chair. We've found that this greatly improves realism by matching the floor that you see to the floor that you feel with your feet.

Controls

  • Left stick or WASD keys to move
  • Right stick or mouse to look
  • Left trigger or shift key to fast walk
  • A (Xbox), X (PS3/4), or tab to toggle sitting mode
  • Return/Enter key to toggle VR display

Posted in Uncategorized

Amplify Texture Plug-In for Unity

We haven't really used or needed a lot of plug-ins for RedFrame thus far, but there is one we started using that is pretty incredible. Amplify Texture is a plug-in for Unity that allows textures to be streamed into your game dynamically. What's cool is that it streams only the visible parts of a potentially massive virtual texture, which contains all the textures you add to it. Consequently the scene loading time is negligible, and you can have one virtual texture for each scene, making texture capacity virtually unlimited.

A few weeks ago, I was working on puzzles and prototyping some tutorial-like gameplay while Mike was working on environmental story ideas. He asked me if I had gotten very far testing Amplify, as we had talked about using it in the past. I hadn't really tried it out seriously, but Mike had realized that a lot of his environmental story ideas would benefit from very detailed textures. For example, someone's name carved into a wall or scuff marks: the kinds of things that maybe only Sherlock Holmes would notice at first.

There were obviously other reasons to look for a good texture solution. We use tons of light maps and also have dozens of paintings, all of which suffer severely if displayed at a small size. There were also performance reasons: once all the textures, sounds, and the 2-million-triangle house are loaded into RAM, there is not a lot of room for high-resolution light maps. My previous solution was to load and unload light maps as they came into visibility while frustum and occlusion culling were running. This was an adequate solution, but messy and not necessarily scalable. We also weren't getting great performance on older hardware, which was expected but not ideal.

I knew we could get the game to run fine, but I didn't enjoy having to thread this needle; it would be much nicer to just work freely. It's hard to accept that there is now such a good solution, but early tests seem to indicate that it is so.

The current version of Amplify doesn't support light maps. However, because texture memory is no longer much of an issue, I did a test where I simply baked light and color into one huge map, with no need for repeating textures. I can also bake hi-res normal maps into this map, so the level of detail is pretty stunning so far. As you can see, the first tests are pretty promising, and I look forward to testing this plug-in further. You can learn more about Amplify Texture here.

Posted in Uncategorized

RedFrame Featured by Unity

For anyone who doesn't know, we're building RedFrame using the Unity game engine. Unity is a wonderful tool with many features that make it appealing for indie development, including the ability to deploy to multiple operating systems as well as an amazingly simple asset pipeline.

The folks at Unity were kind enough to feature our game on their site, and we recommend checking it out if you want to get a little more background on the project. We have been a bit tight-lipped so far and hopefully this is a good introduction to who we are and what we would like to accomplish with RedFrame. Check out the article!

Posted in Design, Pipeline, Programming

Creating Floor Plan Screenshots

As we craft the puzzle structure for RedFrame, it's very useful to have a bird's-eye view of the environment so that we can better see how puzzles physically relate to one another. I spent some time over the weekend creating a simple Unity editor script that allows me to export two very large screenshots, one for each floor of our house environment. The script creates a new downward-facing camera, sets it to orthographic mode, and adjusts its near and far clip planes to cut out only the vertical section of the house that I'm interested in. It then manipulates the camera's projection matrix to produce an oblique projection. This oblique perspective makes it much easier to see walls and determine height, and has the fun side effect of making it feel like a top-down RPG.

Rather than capturing an image with Unity’s built-in Application.CaptureScreenshot method, I instead chose to render to a much larger off-screen RenderTexture with a square aspect ratio. This way I can guarantee that the resulting images will always be the same dimensions, regardless of how the Unity editor windows are set up.
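The steps above can be sketched as a small editor script. This is a rough reconstruction of the approach, not the actual script: camera placement, shear amounts, clip distances, and output resolution are all illustrative guesses.

```csharp
using System.IO;
using UnityEditor;
using UnityEngine;

// Sketch: capture one floor of the house as a large, square, oblique top-down image.
public static class FloorPlanCapture
{
    [MenuItem("Tools/Capture Floor Plan")]
    static void Capture ()
    {
        const int size = 4096;  // fixed square output, independent of editor window layout

        var go = new GameObject("FloorPlanCamera");
        var cam = go.AddComponent<Camera>();
        cam.transform.position = new Vector3(0f, 20f, 0f);
        cam.transform.rotation = Quaternion.Euler(90f, 0f, 0f);  // look straight down
        cam.orthographic = true;
        cam.orthographicSize = 15f;
        cam.nearClipPlane = 0.1f;   // near/far clip out just the vertical slab we want
        cam.farClipPlane = 6f;

        // Shear the orthographic projection by depth so walls become visible.
        Matrix4x4 m = cam.projectionMatrix;
        m[0, 2] = -0.3f;
        m[1, 2] = -0.3f;
        cam.projectionMatrix = m;

        // Render off-screen so the result is always the same dimensions.
        var rt = new RenderTexture(size, size, 24);
        cam.targetTexture = rt;
        cam.Render();

        RenderTexture.active = rt;
        var tex = new Texture2D(size, size, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, size, size), 0, 0);
        tex.Apply();
        RenderTexture.active = null;

        File.WriteAllBytes("FloorPlan.png", tex.EncodeToPNG());
        Object.DestroyImmediate(go);
        Object.DestroyImmediate(rt);
    }
}
```

Running the menu item twice with different near/far clip planes produces the two per-floor images that get layered in Photoshop.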

I combined the two floor images in Photoshop as separate layers, and gave the top floor a slight drop shadow. I can easily toggle between the top and bottom floor by hiding the top layer. I’ve created additional layers in which I can create diagrams and notes. As the environment evolves, it’ll be very easy to re-run the script in Unity, producing a new pair of screenshots that can be dropped into the same Photoshop file.

You can download my floor plan screenshot script here. It was written very quickly, so if you see room for improvement please let me know!

Posted in Design, Pipeline, Programming

Environment Update

A few months ago I finished building and lighting the RedFrame house environment. Not including bathrooms, the house has 17 furnished rooms and a couple of outdoor areas. The general look has changed a lot since we last showed a demo. I've started to use higher contrast in many areas, and the general color scheme of each room has converged into a unified style while still making each room feel unique. Here's a quick tour of some of the areas that convey the main feel of the game.

-Andrew

Posted in Uncategorized

Repurposing Old Systems

It's always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame's core gameplay, a lot of our ideas reached a fairly playable state, only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN – that's for another post) we've retained a complete history of our creative and technical output. I'll often peruse old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as “portable”, then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It’s a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned – it just didn’t work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we’ll talk more about this in future posts) and we’d lost the feeling of actively participating in this dense world we’d created. To avoid spoilers I won’t reveal the precise nature of the solution we’re currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I had a known problem, and I had code that could potentially solve it… but how much of that code was actually usable? Luckily, the code came into our new project without any errors.

In general, it’s not uncommon for older code to have to be thrown away simply because it can’t easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

PortableObject FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)

This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can’t be guaranteed that our solution requires us to render a camera view. It’s generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.
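As a rough illustration of the cone search described above (not the actual implementation), the method might look something like this. The static `PortableObject.All` registry and the stub `PortableObject` class are assumptions made for the sketch:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal stand-in for the PortableObject described in the post (assumed shape).
public class PortableObject : MonoBehaviour
{
    public static readonly List<PortableObject> All = new List<PortableObject>();
    void OnEnable ()  { All.Add(this); }
    void OnDisable () { All.Remove(this); }
}

public static class PortableObjectQuery
{
    // Search the view cone implied by the Transform's position and forward direction.
    public static PortableObject FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)
    {
        PortableObject nearest = null;
        float nearestDist = maxDistance;

        foreach (PortableObject obj in All())
        {
            Vector3 toObj = obj.transform.position - trans.position;
            float dist = toObj.magnitude;

            if (dist > nearestDist)
                continue;  // too far, or farther than the current best candidate
            if (Vector3.Angle(trans.forward, toObj) > viewAngle * 0.5f)
                continue;  // outside the view cone

            nearest = obj;
            nearestDist = dist;
        }
        return nearest;
    }

    static List<PortableObject> All () { return PortableObject.All; }
}
```

Because only a Transform is required, the same query works whether the "viewer" is the player camera, an NPC, or a purely logical probe.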

DropNode FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)

On the surface, this method is effectively identical to FindNearestPortableObject. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should share a similar invocation. This serves two purposes:

  1. Technical – It’s possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
  2. Psychological – By using consistent parameters across multiple methods, the programmer’s cognitive load is significantly reduced. When it’s easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that handles picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene's GameObject hierarchy aside from the existence of a camera, and doesn't require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into. By making the fewest possible assumptions, it's able to be included in just about any project without having to be modified.
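A sketch of how such a controller might tie the pieces together follows. The `PortableObject`, `DropNode`, and query types are the ones described in the post, but their exact shapes here (including the `PlaceAt` helper and the `PortableObjectQuery`/`DropNodeQuery` class names) are hypothetical:

```csharp
using UnityEngine;

// Sketch of the manager described above: grab on one click, place on the next.
// Only external dependency is the scene's main camera.
public class PickupController : MonoBehaviour
{
    public float maxDistance = 2f;   // illustrative values
    public float viewAngle = 30f;

    PortableObject held;

    void Update ()
    {
        if (!Input.GetMouseButtonDown(0))
            return;

        Transform cam = Camera.main.transform;

        if (held == null)
        {
            // Grab the portable object nearest the center of view, if any.
            held = PortableObjectQuery.FindNearestPortableObject(cam, maxDistance, viewAngle);
        }
        else
        {
            // Place the held object into the nearest free drop node, if any.
            DropNode node = DropNodeQuery.FindNearestUnusedNode(cam, maxDistance, viewAngle);
            if (node != null)
            {
                held.PlaceAt(node);  // hypothetical placement helper
                held = null;
            }
        }
    }
}
```

Because the controller only calls into the two generic queries, swapping either search strategy requires no changes here at all.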

Writing re-usable code certainly isn't easy, but I've found that its long-term benefits tend to outweigh the cost of minimally increased development time. Once you're comfortable writing reusable code you'll find that your earlier work pays off again and again, making you more productive by obviating the need to solve the same problems repetitively.

-Mike

Posted in Design, Programming