RedFrame Library DK2 Demo

[Image: Screen Shot 2015-04-01 at 5.15.01 PM]

We have had some recent success with the Oculus DK2 drivers, and today we are releasing a new RedFrame environment demo, available for download from the links below. This demo features the library, a key location in RedFrame and a nice companion to the master bedroom we released for DK1 last year. This is a slightly smaller environment (we don’t want to give too much away), but it contains some hints about what’s to come. I’ve included some of the new rendering techniques featured in my last post, Experimenting with Unity 5, which look very good in VR.

Before playing, be sure to specify your height and IPD in the Oculus Config application included with the Oculus Rift Runtime. The RedFrame environment is precisely modeled to scale, which can magnify discrepancies in your virtual height. We’ve also included a “seated mode” (see controls below) that approximately matches your height when sitting in a desk chair, greatly increasing both immersion and sense of scale.
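
As an aside, a seated mode like this can be implemented by offsetting the camera rig so the profile’s standing eye height maps to a typical sitting eye height. The class, field names, and numbers below are a sketch of the idea, not the demo’s actual code:

```csharp
// Minimal sketch: shift the rig down so tracked head motion still works,
// but the resting eye height matches someone sitting in a desk chair.
using UnityEngine;

public class SeatedMode : MonoBehaviour
{
    public Transform cameraRig;             // parent of the tracked HMD camera
    public float standingEyeHeight = 1.70f; // from the player profile, in meters
    public float seatedEyeHeight = 1.20f;   // approximate eye height in a desk chair

    bool seated;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Tab))  // Tab toggles sitting, as in the controls below
            Toggle();
    }

    public void Toggle()
    {
        seated = !seated;
        // Lower the rig by the difference; raw head tracking is preserved on top.
        float offset = seated ? seatedEyeHeight - standingEyeHeight : 0f;
        cameraRig.localPosition = new Vector3(0f, offset, 0f);
    }
}
```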

Download

Windows
Mac

Controls for Keyboard

  • Move – W, A, S, D
  • Turn – Q, E, or mouse
  • Sit – Space or Tab
  • Recenter View – R

Controls for Gamepad

  • Move – Left stick
  • Turn – Right stick or bumpers
  • Sit – A (Xbox), X (PS3/4)
  • Recenter View – Y (Xbox), Triangle (PS3/4)

Troubleshooting

Compared to the Oculus Rift DK1, the setup for DK2 can be a bit more complex. It’s hard to say how well it will run on every system, but we have a few tips that got it working well for us:

  1. On Windows, change your Oculus settings to use “Direct to Rift” mode instead of “Extended” mode, and run the Direct to Rift app included with our demo.
  2. Don’t mirror your display; it causes bad jitter.
  3. Update your graphics drivers after verifying that they’re compatible with your Rift.
  4. If the frame rate doesn’t feel smooth, relaunch the app and select the lower quality setting. The two presets we’ve included should perform well on most computers.
  5. If your screen goes black, it may be because your head passed through an object; this is a feature we added to handle collisions with positional head tracking (a sketch of the idea follows this list).
  6. If the screen shows up tiny in the corner, make sure the resolution is set to 1920 x 1080 on launch.
  7. Sometimes with the OS X build the cursor won’t hide. If this happens, you can just drag it to the top of the screen.
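
On tip 5: blanking the view when the tracked head intersects geometry only takes a simple physics query each frame. Here is a minimal sketch under our own naming (not the demo’s actual code), assuming a black quad parented just in front of the camera:

```csharp
// Minimal sketch: show an opaque overlay whenever the head overlaps geometry.
using UnityEngine;

public class HeadCollisionFade : MonoBehaviour
{
    public float probeRadius = 0.1f;   // approximate head radius in meters
    public LayerMask solidLayers = ~0; // geometry that should occlude the view
    public Renderer fadeQuad;          // black quad parented to the camera

    void LateUpdate()
    {
        // True when the tracked head position overlaps any solid collider.
        bool insideWall = Physics.CheckSphere(transform.position, probeRadius, solidLayers);
        fadeQuad.enabled = insideWall;
    }
}
```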

Since this is still a work in progress and far from perfect, please let us know if you run into trouble. Your feedback is very helpful!

Posted in Uncategorized

Experimenting with Unity 5

Unity 5 has added some really cool lighting and shader features to help artists create more realistic-looking scenes. A lot of this is coupled to Unity’s out-of-the-box setup, but it is pretty easy to write new shaders that take advantage of the new lighting model.

RedFrame has traditionally not made much use of specular lighting, because it required dynamic lights to add the specular highlights, and that slows things down when scenes have hundreds of thousands of polygons. However, Unity’s reflection probes seem to be pretty cheap and can help mimic all sorts of real surface types.

As an experiment, I wrote a shader that takes the light map as the diffuse contribution but also has specular and occlusion maps that interact with box-projected reflection probes. The video below shows the library using this technique on some of its surfaces. There is one dynamic point light in the center of the room to add more vivid specular highlights, and the scene still runs at a few hundred frames per second with 8x anti-aliasing, which is a good sign.

<iframe src="https://player.vimeo.com/video/123518418" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
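
To make the approach concrete, here is a minimal sketch of what such a shader can look like as a Unity 5 surface shader. The shader and property names are our own illustration, not RedFrame’s actual code: the baked light-and-color map is emitted directly as the diffuse term, while the specular and occlusion maps feed the Standard (Specular) lighting path, which picks up box-projected reflection probes automatically.

```shaderlab
// Sketch only: lightmap-as-diffuse plus probe-driven specular.
Shader "Sketch/LightmapPlusProbes" {
    Properties {
        _BakedMap ("Baked Light & Color", 2D) = "white" {}
        _SpecGlossMap ("Specular (RGB) Smoothness (A)", 2D) = "black" {}
        _OcclusionMap ("Occlusion (R)", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf StandardSpecular
        #pragma target 3.0

        sampler2D _BakedMap;
        sampler2D _SpecGlossMap;
        sampler2D _OcclusionMap;

        struct Input { float2 uv_BakedMap; };

        void surf (Input IN, inout SurfaceOutputStandardSpecular o) {
            // Baked light and color provide the diffuse look unchanged.
            o.Albedo = 0;
            o.Emission = tex2D(_BakedMap, IN.uv_BakedMap).rgb;

            // Specular color and smoothness shape the probe reflections.
            fixed4 sg = tex2D(_SpecGlossMap, IN.uv_BakedMap);
            o.Specular = sg.rgb;
            o.Smoothness = sg.a;

            // Occlusion darkens reflections in crevices.
            o.Occlusion = tex2D(_OcclusionMap, IN.uv_BakedMap).r;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```

With a material using a shader like this, any reflection probe in the room (with box projection enabled) supplies the specular response, and a single dynamic light can still layer sharper highlights on top.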

Posted in Uncategorized

Progress Update

For the past five months, Mike and I have been carving out a significant portion of our schedule to work on RedFrame. We’ve made great progress on several fronts. Mike has been working on the main code base, building a robust infrastructure that now allows us to set up puzzles and interactions that had previously been held together by ad-hoc prototype code. The types of interactive elements available in the game are very well defined at this point, so we’ve been able to front-load this engineering work.

During this same period, I have migrated the entire house to new, cleaner Maya files, and in the process have greatly improved much of the texturing and the quality of the models. I’ve also finally been able to get around to working on an area that I had put off for a long time: the yard. Happily, I feel that this is now one of the best areas in the game. I’ve also started work on the other environments outside of the house and am planning them out in broad strokes.

All of this work has been aimed toward building our first demo with interactive puzzles, which will continue to grow into the final game. As we wind down some of these time-consuming programming and art tasks, I will return to puzzle design and Mike will be freed up to work more on environmental storytelling.

There will be a lot to share with you this year and we’re very excited to show it to you. Thanks for the support and stay tuned!

Hall

Posted in Uncategorized

Enter VR Podcast

Recently, Mike and I had the pleasure of speaking with Cris Miranda, who hosts a podcast entitled Enter VR. We chatted about RedFrame as well as VR in general. It was a lot of fun, and we were able to verbalize a lot of things we had been thinking about with the game, as well as other projects we’d like to do in the future.

Check it out here.

Posted in Uncategorized

RedFrame Oculus Rift Demo!


It’s been a while since we posted anything about RedFrame – we took a short break to avoid burnout, and have been creatively re-energized by focusing on other work for a while. We’re gearing up to do a lot of work on the game in 2014 and will have exciting new things to show you. To kick off the new year we’re releasing our first Oculus Rift demo, in which you can experience one small piece of our environment: the master bedroom.

Both Mike and I have Oculus Rift developer kits and have been very excited to see the RedFrame environment in VR. In fact, VR is such a qualitatively different experience that we’re adjusting many of our design decisions to better support it. RedFrame feels like it always was meant to be a VR experience, and the technology has finally arrived to support it!

This demo is intended to provide the general flavor of the experience that we want to create, rather than demonstrating gameplay (there are no puzzles or interaction). We’ve also included a new track from our musician, Notious, who has been creating wonderful compositions for the game. You can check out his other work here.

We’d love to hear what you think!

Download

Windows
Mac

Instructions

The RedFrame Oculus demo is best experienced with a gamepad. We support Xbox, PS3, and PS4 controllers on every platform.

Before playing, be sure to specify your player height and IPD in the Oculus Config Util included in the Oculus Rift SDK.

We’ve also included a “sitting mode” that simulates your height while sitting in a desk chair. We’ve found that this greatly improves realism by matching the floor that you see to the floor that you feel with your feet.

Controls

  • Left stick or WASD keys to move
  • Right stick or mouse to look
  • Left trigger or shift key to fast walk
  • A (Xbox), X (PS3/4), or tab to toggle sitting mode
  • Return/Enter key to toggle VR display
Posted in Uncategorized

Amplify Texture Plug-In for Unity

Screen Shot 2013-07-26 at 11.14.46 PM

We haven’t really used or needed a lot of plug-ins for RedFrame thus far, however there is one we started using that is pretty incredible. Amplify Texture is a plug-in for Unity that allows textures to be streamed into your game dynamically. What’s cool is that it only streams the visible parts of a potentially massive virtual texture, which contains all the textures you add to it. Consequently the scene loading time is negligible, and you can have one Virtual Texture for each scene, making it virtually unlimited.

A few weeks ago, I was working on puzzles and prototyping some tutorial-like gameplay while Mike was working on environmental story ideas. He asked me if I had gotten very far testing Amplify, as we had talked about using it in the past. I hadn’t really tried it out seriously, but Mike had realized that a lot of his environmental story ideas would benefit from very detailed textures: for example, someone’s name carved in a wall, or scuff marks – the kinds of things maybe only Sherlock Holmes would notice at first.

There were obviously other reasons to look for a good texture solution: we use tons of light maps and also have dozens of paintings, all of which are severely hurt if they are displayed at a small size. There were also performance reasons. Once all the textures, sounds, and the 2 million triangle house are put into RAM, there is not a lot of room for high resolution light maps. My previous solution was to load and unload light maps as they came into visibility while frustum and occlusion culling were running. This was an adequate solution, but messy and not necessarily scalable. We also weren’t getting great performance on older hardware, which was expected but not ideal.
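
As an aside, that load-on-visibility idea can be sketched in a few lines. This is a hypothetical reconstruction rather than our actual code: the Resources-based loading and asset name are stand-ins, and the LightmapData field is named lightmapColor in current Unity versions (older versions called it lightmapFar):

using UnityEngine;

// Hypothetical sketch: stream a room's lightmap in and out as its renderer
// becomes visible. OnBecameVisible/OnBecameInvisible are driven by the same
// frustum and occlusion culling the old system piggybacked on.
// Requires a Renderer on the same GameObject.
public class LightmapStreamer : MonoBehaviour
{
    public int lightmapIndex;         // slot in LightmapSettings.lightmaps
    public string lightmapAssetName;  // stand-in Resources path

    Texture2D loaded;

    void OnBecameVisible ()
    {
        if (loaded == null) {
            loaded = Resources.Load<Texture2D> (lightmapAssetName);
            LightmapData[] maps = LightmapSettings.lightmaps;
            maps[lightmapIndex].lightmapColor = loaded;
            LightmapSettings.lightmaps = maps;
        }
    }

    void OnBecameInvisible ()
    {
        if (loaded != null) {
            Resources.UnloadAsset (loaded);
            loaded = null;
        }
    }
}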

I knew we could get the game to run fine, but I didn’t enjoy having to thread this needle; it would be much nicer to just work freely. It’s hard to accept that there is now such a good solution, but early tests seem to indicate that it is so.

The current version of Amplify doesn’t support light maps; however, because texture memory is not much of an issue, I did a test where I simply baked light and color all into one huge map with no need for repeating textures. I can also bake high-res normal maps into this map, so the level of detail is pretty stunning so far. As you can see, the first tests are pretty promising, and I look forward to continuing to test this plug-in moving forward. You can learn more about Amplify Texture here.

Screen Shot 2013-07-26 at 11.11.48 PM

Posted in Uncategorized

RedFrame Featured by Unity

For anyone who doesn’t know, we’re building RedFrame using the Unity game engine. Unity is a wonderful tool with many features that make it appealing for indie development, including the ability to deploy to multiple operating systems as well as an amazingly simple asset pipeline.

The folks at Unity were kind enough to feature our game on their site, and we recommend checking it out if you want to get a little more background on the project. We have been a bit tight-lipped so far, and hopefully this is a good introduction to who we are and what we would like to accomplish with RedFrame. Check out the article!

The central environment in RedFrame is a large mansion. While developing the 3d model of the house, I didn’t pay much attention to its total resolution; I wanted to see how far I could push mid-range hardware, and I didn’t want the design of the environment to be influenced by technical considerations. To our delight, the house runs completely smoothly on an ATI Radeon HD 5770 with a gig of video memory. Although this video card is no slouch, it’s also not a high-end gaming GPU.

The resolution of the house model was originally 1,371,298 vertices. We’re going to expand the environment quite a bit and will need to keep the game playable on as many systems as possible, so I’ve started the process of reducing the resolution of the Maya model as much as possible without negatively affecting the way it’s perceived by the player. I realized that a lot of our detail was unnecessary; some of the tight detail even detracted from the image by causing flickering when anti-aliasing was disabled.

The scene is quite large, so prioritizing my time is a little difficult. My first thought was just to go through each room looking for objects that are more detailed than they need to be, but this is somewhat arbitrary. My second technique has been to print a list of all objects in the scene and then order them by how much resolution they have. It is still arbitrary in a sense, but it has been a nice weapon with which to attack the problem.

Because I’m more comfortable programming in Unity than in MEL, I wrote a C# script to sort models by resolution. It’s my first time using Linq, which I still need to wrap my head around. You can download the script here – just import your model into Unity, drop it into a new scene, and attach the script to the model’s root game object.
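
In case it’s useful, here’s a minimal sketch of that kind of script (not the exact one from the download link; the class name and log format are illustrative):

using UnityEngine;
using System.Linq;

// Hypothetical sketch: log every mesh in the scene ordered by vertex count,
// so the heaviest models can be hunted down first.
public class MeshResolutionReport : MonoBehaviour
{
    void Start ()
    {
        var byResolution = FindObjectsOfType<MeshFilter> ()
            .Where (mf => mf.sharedMesh != null)
            .OrderByDescending (mf => mf.sharedMesh.vertexCount);

        foreach (MeshFilter mf in byResolution) {
            Debug.Log (mf.sharedMesh.vertexCount + " vertices: " + mf.name);
        }
    }
}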


While investigating potential lightmapping solutions for RedFrame, we explored Unity’s own lightmapping system, which leverages Autodesk’s Beast. Beast is unfortunately lacking a few of the more obscure features useful for simulating realistic artificial indoor lighting, most notably photometric lights for reconstructing the unique banding patterns indicative of incandescent bulbs installed in housings. This prevents us from completely switching our workflow from Mental Ray to Beast, though we’ll likely still use Beast for specific locations in the game that are favorable to its feature set.

Beast is quite a full-featured lightmapping solution in itself; however, Unity’s specific implementation of the tool favors simplicity over customization. Some very useful features are hidden away, and it’s not immediately obvious how to enable them. To give Beast a fair evaluation, I needed to enable these features.

Unity fortunately is able to accept Beast XML configuration files, opening up nearly the full potential of the system. There are a plethora of additional options recognized by Beast, but only a limited number are documented by Unity. After a bit of digital archaeology I was able to unearth documents that revealed the missing parts of the API.
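
For reference, a Beast configuration file looks roughly like the following. This mirrors the shape of Unity’s documented examples (the file is conventionally named BeastSettings.xml and placed alongside the scene’s lightmap data); treat the elements and values here as illustrative rather than exhaustive:

<?xml version="1.0" encoding="ISO-8859-1"?>
<ILConfig>
  <AASettings>
    <samplingMode>Adaptive</samplingMode>
    <minSampleRate>0</minSampleRate>
    <maxSampleRate>2</maxSampleRate>
  </AASettings>
  <GISettings>
    <enableGI>true</enableGI>
    <primaryIntegrator>FinalGather</primaryIntegrator>
    <secondaryIntegrator>PathTracer</secondaryIntegrator>
  </GISettings>
</ILConfig>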

I’ve created a Unity editor tool called Lightmapping Extended that implements the full Beast XML specification and presents all available (and compatible) options in a user-friendly UI. I’ve released the code on GitHub, and will soon build a package for the Unity Asset Store:

Lightmapping Extended on GitHub

This tool unlocks a few key hidden features not available in Unity’s built-in Lightmapping settings window:

  • Image-Based Lighting – light a scene with an HDR skybox, mimicking realistic outdoor lighting
  • Path Tracer GI – a fast, multi-bounce lighting solution
  • Monte Carlo GI – a very slow but extremely accurate lighting solution

Keep an eye on the Lightmapping Extended thread on the Unity forum for future updates. If you run into any issues, please let me know either through the blog comments or the Unity forum thread. I’d like to make this the best and most complete solution for lightmapping inside of Unity.

– Mike
One of the very first things we programmed on RedFrame was a player controller: the code that governs the way the player looks and walks.

Testing other engines:

Portal
Dear Esther
Far Cry 2
Far Cry 3

Using a spring system: benefits and drawbacks vs. smooth damping. Ease-out when hitting a wall.

Responsiveness vs floatiness.

Normalizing small movements while making large movements feel exact.

No need for precise aiming, which makes designing our system more forgiving.

Need a nice acceleration curve, but shouldn’t be able to spin around infinitely. Issue with spring system finding the shortest path and snapping backward.
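
A tiny sketch of the smooth-damping variant we were weighing (illustrative only; parameter values are arbitrary):

using UnityEngine;

// Illustrative sketch of smooth-damped camera look, not our actual
// controller. smoothTime trades responsiveness against floatiness.
public class SmoothLook : MonoBehaviour
{
    public float sensitivity = 4f;
    public float smoothTime = 0.1f;

    float targetYaw;
    float yaw;
    float yawVelocity;

    void Update ()
    {
        targetYaw += Input.GetAxis ("Mouse X") * sensitivity;
        // SmoothDampAngle eases along the shortest arc, which is exactly
        // where a naive spring can snap backward after large turns.
        yaw = Mathf.SmoothDampAngle (yaw, targetYaw, ref yawVelocity, smoothTime);
        transform.rotation = Quaternion.Euler (0f, yaw, 0f);
    }
}
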
Global state and behavior can be a bit tricky to handle in Unity. RedFrame includes a few low-level systems that must always be accessible, so a robust solution is required. While there is no single solution to the problem, there is one particular approach that I’ve found most elegant.

There are many reasons one might need global state: controlling menu logic, building additional engine code on top of Unity, executing coroutines that control simulations across level loads, and so on. By design, all code executed in Unity at runtime must be attached to GameObjects as script components, and GameObjects must exist in the hierarchy of a scene. There is no concept of low-level application code outside of the core Unity engine – there are only objects and their individual behaviors.

The most common approach to implementing global managers in Unity is to create a prefab that has all manager scripts attached to it. You may have a music manager, an input manager, and dozens of other manager-like scripts stapled onto a single monolithic “GameManager” object. This prefab object would be included in the scene hierarchy in one of two ways:

  • Include the prefab in all scene files.
  • Include the prefab in the first scene, and call its DontDestroyOnLoad method during Awake, forcing it to survive future level loads.

Other scripts would then find references to these manager scripts during Start through one of a variety of built-in Unity methods, most notably FindWithTag and FindObjectOfType. You’d either find the game manager object in the scene and then drill down into its components to find individual manager scripts, or you’d scrape the entire scene to find manager scripts directly. A slightly more automated and potentially more performant option is to use singletons.

Singleton Pattern

The singleton design pattern facilitates global access to an object while ensuring that only one instance of the object ever exists at any one time. If an instance of the singleton doesn’t exist when it is referenced, it will be instantiated on demand. For most C# applications, this is fairly straightforward to implement. In the following code, the static Instance property may be used to access the global instance of the Singleton class:

C# Singleton

public class Singleton
{
    static Singleton instance;

    public static Singleton Instance {
        get {
            if (instance == null) {
                instance = new Singleton ();
            }
            return instance;
        }
    }
}

Unity unfortunately adds some complication to this approach. All executable code must be attached to GameObjects, so not only must an instance of a singleton object always exist, but it must also exist someplace in the scene. The following Unity singleton implementation will ensure that the script is instantiated in the scene:

Unity Singleton

public class UnitySingleton : MonoBehaviour
{
    static UnitySingleton instance;

    public static UnitySingleton Instance {
        get {
            if (instance == null) {
                instance = FindObjectOfType<UnitySingleton> ();
                if (instance == null) {
                    GameObject obj = new GameObject ();
                    obj.hideFlags = HideFlags.HideAndDontSave;
                    instance = obj.AddComponent<UnitySingleton> ();
                }
            }
            return instance;
        }
    }
}

The above implementation first searches for an instance of the UnitySingleton component in the scene if a reference doesn’t already exist. If it doesn’t find a UnitySingleton component, a hidden GameObject is created and a UnitySingleton component is attached to it. In the event that the UnitySingleton component or its parent GameObject is destroyed, the next call to UnitySingleton.Instance will instantiate a new GameObject and UnitySingleton component. For games that include many manager scripts, it can be a pain to copy and paste this boilerplate code into each new class. By leveraging C#’s support for generic classes, we can create a generic base class for all GameObject-based singletons to inherit from:

Generic Unity Singleton

public class UnitySingleton<T> : MonoBehaviour
    where T : Component
{
    private static T instance;

    public static T Instance {
        get {
            if (instance == null) {
                instance = FindObjectOfType<T> ();
                if (instance == null) {
                    GameObject obj = new GameObject ();
                    obj.hideFlags = HideFlags.HideAndDontSave;
                    instance = obj.AddComponent<T> ();
                }
            }
            return instance;
        }
    }
}

A base class is generally unable to know about any of its sub-classes. This is very problematic when inheriting from a singleton base class – for the sake of example, let’s call one such sub-class “Manager”. The value of Manager.Instance would be a UnitySingleton object instead of its own sub-type, effectively hiding all of Manager’s public members. By converting UnitySingleton to a generic class as seen above, we are able to change an inheriting class’s Instance from the base type to the inheriting type. When we declare our Manager class, we must pass its own type to UnitySingleton<T> as a generic parameter: public class Manager : UnitySingleton<Manager>. That’s it! Simply by inheriting from this special singleton class, we’ve turned Manager into a singleton.

There is one remaining issue: persistence. As soon as a new scene is loaded, all singleton objects are destroyed. If these objects are responsible for maintaining state, that state will be lost. While a non-persistent Unity singleton works just fine in many cases, we need to have one additional singleton class in our toolbox:

Persistent Generic Unity Singleton

public class UnitySingletonPersistent<T> : MonoBehaviour
    where T : Component
{
    private static T instance;

    public static T Instance {
        get {
            if (instance == null) {
                instance = FindObjectOfType<T> ();
                if (instance == null) {
                    GameObject obj = new GameObject ();
                    obj.hideFlags = HideFlags.HideAndDontSave;
                    instance = obj.AddComponent<T> ();
                }
            }
            return instance;
        }
    }

    public virtual void Awake ()
    {
        DontDestroyOnLoad (this.gameObject);
        if (instance == null) {
            instance = this as T;
        } else {
            Destroy (gameObject);
        }
    }
}

The preceding code will create an object that persists between levels. Duplicate copies may be instantiated if the singleton has been embedded in multiple scenes, so this code will also destroy any additional copies it finds.
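
To make the pattern concrete, here’s a hypothetical manager built on the persistent base class (MusicManager and its method are made up for illustration):

using UnityEngine;

// Hypothetical example: a persistent manager declared by passing its own
// type as the generic parameter.
public class MusicManager : UnitySingletonPersistent<MusicManager>
{
    public void PlayTrack (string trackName)
    {
        Debug.Log ("Playing " + trackName); // stand-in for real playback code
    }
}

// Any other script can then reach the global instance directly.
public class Jukebox : MonoBehaviour
{
    void Start ()
    {
        // The first access finds or creates the singleton on demand.
        MusicManager.Instance.PlayTrack ("Theme");
    }
}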

Caveats

There are a few important issues to be aware of with this approach to creating singletons in Unity:

Leaking Singleton Objects

If a MonoBehaviour references a singleton during its OnDestroy or OnDisable while running in the editor, the singleton object that was instantiated at runtime will leak into the scene when playback is stopped. OnDestroy and OnDisable are called by Unity when cleaning up the scene in an attempt to return it to its pre-playmode state. If a singleton object is destroyed before another script references it through its Instance property, the singleton object will be re-instantiated after Unity expected it to have been permanently destroyed. Unity will warn you of this in very clear language, so keep an eye out for it. One possible solution is to set a boolean flag during OnApplicationQuit that is used to conditionally bypass all singleton references included in OnDestroy and OnDisable.
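
One way to wire up that flag (a sketch; where you store and check the flag is a matter of taste):

using UnityEngine;

public class ManagerClient : MonoBehaviour
{
    // Set once the application begins quitting; teardown code checks it
    // to avoid re-instantiating singletons Unity has already destroyed.
    static bool applicationIsQuitting;

    void OnApplicationQuit ()
    {
        applicationIsQuitting = true;
    }

    void OnDisable ()
    {
        if (applicationIsQuitting) {
            return; // skip singleton access during teardown
        }
        // Otherwise it's safe to talk to managers here,
        // e.g. Manager.Instance.Unregister (this);
    }
}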

Execution Order

The order in which objects have their Awake and Start methods called is not predictable by default. Persistent singletons are especially susceptible to execution ordering issues. If multiple copies of a singleton exist in the scene, one may destroy the other copies after those copies have had their Awake methods called. If game state is changed during Awake, this may cause unexpected behavior. As a general rule, Awake should only ever be used to set up the internal state of an object. Any external object communication should occur during Start. Persistent singletons require strict use of this convention.

Conclusion

While singletons are inherently awkward to implement in Unity, they’re often a necessary component of a complex game. Some games may require many dozens of manager scripts, so it makes sense to reduce the amount of duplicated code and standardize on a method for setting up, referencing, and tearing down these managers. A generic singleton base class is one such solution that has served us well, but it is by no means perfect. It is a design pattern that we will continue to iterate on, hopefully discovering solutions that more cleanly integrate with Unity.

Posted in Design, Pipeline, Programming

Creating Floor Plan Screenshots

As we craft the puzzle structure for RedFrame, it’s very useful to have a birds-eye view of the environment so that we can better see how puzzles physically relate to one another. I spent some time over the weekend creating a simple Unity editor script that allows me to export two very large screenshots, one for each floor of our house environment. The script creates a new downward-facing camera, sets it to orthographic mode, and adjusts its near and far clip planes to cut out only the vertical section of the house that I’m interested in. It then manipulates the camera’s projection matrix to produce an oblique projection. This oblique perspective makes it much easier to see walls and determine height, and has the fun side effect of making it feel like a top-down RPG.

Rather than capturing an image with Unity’s built-in Application.CaptureScreenshot method, I instead chose to render to a much larger off-screen RenderTexture with a square aspect ratio. This way I can guarantee that the resulting images will always be the same dimensions, regardless of how the Unity editor windows are set up.
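
A rough sketch of the capture step, assuming a fixed square output size (the shear values are arbitrary, and the real script also creates and positions the camera):

using System.IO;
using UnityEngine;

public static class FloorPlanCapture
{
    // Renders the given camera into a square off-screen buffer and saves
    // a PNG. Sketch only; error handling and camera setup are omitted.
    public static void Capture (Camera cam, int size, string path)
    {
        // Shear x and y by eye-space depth to produce an oblique projection.
        Matrix4x4 proj = cam.projectionMatrix;
        proj[0, 2] = 0.3f; // arbitrary horizontal shear
        proj[1, 2] = 0.3f; // arbitrary vertical shear
        cam.projectionMatrix = proj;

        RenderTexture rt = new RenderTexture (size, size, 24);
        cam.targetTexture = rt;
        cam.Render ();

        RenderTexture.active = rt;
        Texture2D tex = new Texture2D (size, size, TextureFormat.RGB24, false);
        tex.ReadPixels (new Rect (0, 0, size, size), 0, 0);
        tex.Apply ();

        cam.targetTexture = null;
        RenderTexture.active = null;
        File.WriteAllBytes (path, tex.EncodeToPNG ());
    }
}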

I combined the two floor images in Photoshop as separate layers, and gave the top floor a slight drop shadow. I can easily toggle between the top and bottom floor by hiding the top layer. I’ve created additional layers in which I can create diagrams and notes. As the environment evolves, it’ll be very easy to re-run the script in Unity, producing a new pair of screenshots that can be dropped into the same Photoshop file.

You can download my floor plan screenshot script here. It was written very quickly, so if you see room for improvement please let me know!

RedFrame-House-Map

Posted in Design, Pipeline, Programming

Environment Update

It’s always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame’s core gameplay, a lot of our ideas reached a fairly playable state, only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN – that’s for another post) we’ve retained a complete history of our creative and technical output. I’ll often revisit old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as “portable”, then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, and they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It’s a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned – it just didn’t work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we’ll talk more about this in future posts) and we’d lost the feeling of actively participating in this dense world we’d created. To avoid spoilers I won’t reveal the precise nature of the solution we’re currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it… but how much of this code is actually usable? In general, it’s not uncommon for older code to be thrown away simply because it can’t easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by a constant churn of reprogramming that will severely bog down even a small project. To mitigate this, I structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I had instead included two generic entrypoints into the system:

  • FindNearestPortableObjectToTransform (Transform trans, float maxDistance, float angleOfView) – This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can’t be guaranteed that our solution requires us to render a camera view. I find that it’s generally best to require only the most generic method parameters necessary to perform the desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters we harm future code re-use.

Using events, encapsulated structure.
It’s always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame’s core gameplay, physician a lot of our ideas reached a fairly playable state, pregnancy only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN – that’s for another post) we’ve retained a complete history of our creative and technical output. I’ll often pursue old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as “portable”, medstore then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It’s a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned – it just didn’t work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we’ll talk more about this in future posts) and we’d lost the feeling of actively participating in this dense world we’d created. To avoid spoilers I won’t reveal the precise nature of the solution we’re currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it… but now how much of this code is actually usable? In general, it’s not uncommon for older code to have to be thrown away simply because it can’t easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn of programming that will severely bog down even a small project. To mitigate this, I structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I had instead included two generic entrypoints into the system:

  • FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle) – This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can’t be guaranteed that our solution requires us to render a camera view. It’s generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.
  • FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle) – On the surface, this method is effectively identical to FindNearestPortableObjectToTransform. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:
    1. Technical – It’s possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
    2. Psychological – By using consistent parameters across multiple methods, the programmer’s cognitive load is significantly reduced. When it’s easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the entire system includes a PickupController. This script may be attached to a player object and manages picking up and dropping one object at a time. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the GameObject hierarchy of the scene, or anything about the object that it is attached to. It simply scans for objects to pick up, and places to drop them, while smoothly translating them
It’s always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame’s core gameplay, a lot of our ideas reached a fairly playable state, only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN – that’s for another post) we’ve retained a complete history of our creative and technical output. I’ll often peruse old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as “portable”, then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It’s a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned – it just didn’t work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we’ll talk more about this in future posts) and we’d lost the feeling of actively participating in this dense world we’d created. To avoid spoilers I won’t reveal the precise nature of the solution we’re currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it… but how much of this code is actually usable? Luckily, the code came into our new project without any errors.

In general, it’s not uncommon for older code to have to be thrown away simply because it can’t easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

  • FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle) – This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform, with a given angle-of-view (a sketch of this search follows the list below). I chose to require a Transform rather than a Camera component since it can’t be guaranteed that our solution requires us to render a camera view. It’s generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method with unnecessarily specific parameters, we harm future code re-use without adding any value.
  • FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle) – On the surface, this method is effectively identical to FindNearestPortableObject. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation, which serves two purposes:
    1. Technical – It’s possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
    2. Psychological – By using consistent parameters across multiple methods, the programmer’s cognitive load is significantly reduced. When it’s easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.
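
To make the first entrypoint concrete, here’s a minimal sketch of how a view-cone search like this might be implemented. It isn’t our actual code – the PortableObject.All registry and the simple angle test are stand-ins for illustration:

    using UnityEngine;

    public static class PortableObjectSearch
    {
        public static PortableObject FindNearestPortableObject(Transform trans, float maxDistance, float viewAngle)
        {
            PortableObject nearest = null;
            float nearestDistance = maxDistance;

            // PortableObject.All is assumed to be a registry of active instances.
            foreach (PortableObject obj in PortableObject.All)
            {
                Vector3 toObject = obj.transform.position - trans.position;
                float distance = toObject.magnitude;

                // Reject anything farther away than the current best candidate.
                if (distance > nearestDistance)
                    continue;

                // Reject anything outside the view cone implied by the Transform's
                // forward vector and the given angle-of-view.
                if (Vector3.Angle(trans.forward, toObject) > viewAngle * 0.5f)
                    continue;

                nearest = obj;
                nearestDistance = distance;
            }

            return nearest;
        }
    }

Because FindNearestUnusedNode takes exactly the same parameters, a caller can swap one search for the other without touching any of its input data (NodeSearch and DropNode here are placeholder names):

    PortableObject obj = PortableObjectSearch.FindNearestPortableObject(transform, 2f, 60f);
    DropNode node = NodeSearch.FindNearestUnusedNode(transform, 2f, 60f);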

Lastly, the system includes a PickupController. This manager script handles picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene’s GameObject hierarchy aside from the existence of a camera, and doesn’t require any particular setup of the GameObject it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into, smoothly translating the carried object between them.
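
A skeleton of that controller might look something like this – again just a sketch under the same placeholder names, since the real script’s input handling and object translation are more involved:

    using UnityEngine;

    public class PickupController : MonoBehaviour
    {
        public float maxDistance = 2f;
        public float viewAngle = 60f;

        PortableObject held; // the single object currently carried, if any

        void Update()
        {
            // Camera.main is the only scene object the controller relies on.
            Transform view = Camera.main.transform;

            if (Input.GetMouseButtonDown(0))
            {
                if (held == null)
                {
                    // Empty-handed: try to pick up the nearest portable object in view.
                    held = PortableObjectSearch.FindNearestPortableObject(view, maxDistance, viewAngle);
                }
                else
                {
                    // Carrying something: try to drop it into the nearest unused node.
                    // (The real controller translates the object smoothly; this
                    // sketch simply snaps it into place for brevity.)
                    DropNode node = NodeSearch.FindNearestUnusedNode(view, maxDistance, viewAngle);
                    if (node != null)
                    {
                        held.transform.position = node.transform.position;
                        held = null;
                    }
                }
            }
        }
    }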

Writing re-usable code certainly isn’t easy, but I’ve found that its long-term benefits tend to outweigh the modest increase in development time. Once you’re comfortable with this approach you’ll find that your earlier work pays off again and again, making you more productive by obviating the need to solve the same problems over and over.

-Mike
It’s always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame’s core gameplay, side effects a lot of our ideas reached a fairly playable state, approved only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN – that’s for another post) we’ve retained a complete history of our creative and technical output. I’ll often pursue old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as “portable”, prostate then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It’s a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned – it just didn’t work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we’ll talk more about this in future posts) and we’d lost the feeling of actively participating in this dense world we’d created. To avoid spoilers I won’t reveal the precise nature of the solution we’re currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it… but now how much of this code is actually usable? In general, it’s not uncommon for older code to have to be thrown away simply because it can’t easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

  • FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle) – This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can’t be guaranteed that our solution requires us to render a camera view. It’s generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.
  • FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle) – On the surface, this method is effectively identical to FindNearestPortableObjectToTransform. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:
    1. Technical – It’s possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
    2. Psychological – By using consistent parameters across multiple methods, the programmer’s cognitive load is significantly reduced. When it’s easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that manages picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene’s GameObject hierarchy aside from the existence of  a camera, and doesn’t require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into.

Writing re-usable code can certainly not be easy, but I’ve found that its long-term benefits tend to outweigh the cost of minimally increased development time. Once you’re comfortable with writing reusable code you’ll find that your earlier work will pay off again and again, making you more productive by obviating the need to repetitively solve the same problems.

-Mike
It’s always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame’s core gameplay, purchase a lot of our ideas reached a fairly playable state, buy only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN – that’s for another post) we’ve retained a complete history of our creative and technical output. I’ll often pursue old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as “portable”, troche then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It’s a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned – it just didn’t work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we’ll talk more about this in future posts) and we’d lost the feeling of actively participating in this dense world we’d created. To avoid spoilers I won’t reveal the precise nature of the solution we’re currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it… but now how much of this code is actually usable? Luckily, the code came into our new project without any errors.

In general, it’s not uncommon for older code to have to be thrown away simply because it can’t easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)

This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can't be guaranteed that our solution requires us to render a camera view. It's generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.

FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)

On the surface, this method is effectively identical to FindNearestPortableObjectToTransform. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:

  1. Technical - It's possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
  2. Psychological - By using consistent parameters across multiple methods, the programmer's cognitive load is significantly reduced. When it's easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that manages picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene's GameObject hierarchy aside from the existence of  a camera, and doesn't require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into.

Writing re-usable code can certainly not be easy, but I've found that its long-term benefits tend to outweigh the cost of minimally increased development time. Once you're comfortable with writing reusable code you'll find that your earlier work will pay off again and again, making you more productive by obviating the need to repetitively solve the same problems.

-Mike
It's always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame's core gameplay, this a lot of our ideas reached a fairly playable state, look only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN - that's for another post) we've retained a complete history of our creative and technical output. I'll often pursue old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as "portable", information pills then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It's a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned - it just didn't work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we'll talk more about this in future posts) and we'd lost the feeling of actively participating in this dense world we'd created. To avoid spoilers I won't reveal the precise nature of the solution we're currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it... but now how much of this code is actually usable? In general, it's not uncommon for older code to have to be thrown away simply because it can't easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

  • FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle) – This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can't be guaranteed that our solution requires us to render a camera view. It's generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.
  • FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle) – On the surface, this method is effectively identical to FindNearestPortableObjectToTransform. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:
    1. Technical - It's possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
    2. Psychological - By using consistent parameters across multiple methods, the programmer's cognitive load is significantly reduced. When it's easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that manages picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene's GameObject hierarchy aside from the existence of  a camera, and doesn't require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into.

Writing re-usable code can certainly not be easy, but I've found that its long-term benefits tend to outweigh the cost of minimally increased development time. Once you're comfortable with writing reusable code you'll find that your earlier work will pay off again and again, making you more productive by obviating the need to repetitively solve the same problems.

-Mike
It's always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame's core gameplay, cialis 40mg a lot of our ideas reached a fairly playable state, infertility only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN - that's for another post) we've retained a complete history of our creative and technical output. I'll often pursue old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as "portable", patient then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It's a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned - it just didn't work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we'll talk more about this in future posts) and we'd lost the feeling of actively participating in this dense world we'd created. To avoid spoilers I won't reveal the precise nature of the solution we're currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it... but now how much of this code is actually usable? In general, it's not uncommon for older code to have to be thrown away simply because it can't easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)

This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can't be guaranteed that our solution requires us to render a camera view. It's generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.

FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)

On the surface, this method is effectively identical to FindNearestPortableObjectToTransform. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:

    1. Technical

- It's possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.

  1. Psychological - By using consistent parameters across multiple methods, the programmer's cognitive load is significantly reduced. When it's easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that manages picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene's GameObject hierarchy aside from the existence of  a camera, and doesn't require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into.

Writing re-usable code can certainly not be easy, but I've found that its long-term benefits tend to outweigh the cost of minimally increased development time. Once you're comfortable with writing reusable code you'll find that your earlier work will pay off again and again, making you more productive by obviating the need to repetitively solve the same problems.

-Mike
It's always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame's core gameplay, remedy a lot of our ideas reached a fairly playable state, only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN - that's for another post) we've retained a complete history of our creative and technical output. I'll often pursue old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as "portable", then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It's a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned - it just didn't work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we'll talk more about this in future posts) and we'd lost the feeling of actively participating in this dense world we'd created. To avoid spoilers I won't reveal the precise nature of the solution we're currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it... but now how much of this code is actually usable? In general, it's not uncommon for older code to have to be thrown away simply because it can't easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)

This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can't be guaranteed that our solution requires us to render a camera view. It's generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.

FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)

On the surface, this method is effectively identical to FindNearestPortableObjectToTransform. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:

  1. Technical - It's possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
  2. Psychological - By using consistent parameters across multiple methods, the programmer's cognitive load is significantly reduced. When it's easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that manages picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene's GameObject hierarchy aside from the existence of  a camera, and doesn't require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into.

Writing re-usable code can certainly not be easy, but I've found that its long-term benefits tend to outweigh the cost of minimally increased development time. Once you're comfortable with writing reusable code you'll find that your earlier work will pay off again and again, making you more productive by obviating the need to repetitively solve the same problems.

-Mike
It's always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame's core gameplay, hospital a lot of our ideas reached a fairly playable state, life only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN - that's for another post) we've retained a complete history of our creative and technical output. I'll often pursue old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as "portable", then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It's a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned - it just didn't work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we'll talk more about this in future posts) and we'd lost the feeling of actively participating in this dense world we'd created. To avoid spoilers I won't reveal the precise nature of the solution we're currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it... but now how much of this code is actually usable? Luckily, the code came into our new project without any errors.

In general, it's not uncommon for older code to have to be thrown away simply because it can't easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)

This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can't be guaranteed that our solution requires us to render a camera view. It's generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.

FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)

On the surface, this method is effectively identical to FindNearestPortableObjectToTransform. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:

  1. Technical - It's possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
  2. Psychological - By using consistent parameters across multiple methods, the programmer's cognitive load is significantly reduced. When it's easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that manages picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene's GameObject hierarchy aside from the existence of  a camera, and doesn't require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into.

Writing re-usable code can certainly not be easy, but I've found that its long-term benefits tend to outweigh the cost of minimally increased development time. Once you're comfortable with writing reusable code you'll find that your earlier work will pay off again and again, making you more productive by obviating the need to repetitively solve the same problems.

-Mike
It's always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame's core gameplay, psychiatrist a lot of our ideas reached a fairly playable state, buy only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN - that's for another post) we've retained a complete history of our creative and technical output. I'll often pursue old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as "portable", then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It's a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned - it just didn't work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we'll talk more about this in future posts) and we'd lost the feeling of actively participating in this dense world we'd created. To avoid spoilers I won't reveal the precise nature of the solution we're currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it... but now how much of this code is actually usable? Luckily, the code came into our new project without any errors.

In general, it's not uncommon for older code to have to be thrown away simply because it can't easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)

This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can't be guaranteed that our solution requires us to render a camera view. It's generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.

FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)

On the surface, this method is effectively identical to FindNearestPortableObjectToTransform. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:

  1. Technical - It's possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
  2. Psychological - By using consistent parameters across multiple methods, the programmer's cognitive load is significantly reduced. When it's easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that manages picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene's GameObject hierarchy aside from the existence of  a camera, and doesn't require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into.

Writing re-usable code can certainly not be easy, but I've found that its long-term benefits tend to outweigh the cost of minimally increased development time. Once you're comfortable with writing reusable code you'll find that your earlier work will pay off again and again, making you more productive by obviating the need to repetitively solve the same problems.

-Mike
It's always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame's core gameplay, what is ed a lot of our ideas reached a fairly playable state, only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN - that's for another post) we've retained a complete history of our creative and technical output. I'll often pursue old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as "portable", then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It's a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned - it just didn't work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we'll talk more about this in future posts) and we'd lost the feeling of actively participating in this dense world we'd created. To avoid spoilers I won't reveal the precise nature of the solution we're currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it... but now how much of this code is actually usable? Luckily, the code came into our new project without any errors.

In general, it's not uncommon for older code to have to be thrown away simply because it can't easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)

This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can't be guaranteed that our solution requires us to render a camera view. It's generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.

FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)

On the surface, this method is effectively identical to FindNearestPortableObjectToTransform. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:

  1. Technical - It's possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
  2. Psychological - By using consistent parameters across multiple methods, the programmer's cognitive load is significantly reduced. When it's easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general manager script that manages picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene's GameObject hierarchy aside from the existence of  a camera, and doesn't require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into.

Writing re-usable code can certainly not be easy, but I've found that its long-term benefits tend to outweigh the cost of minimally increased development time. Once you're comfortable with writing reusable code you'll find that your earlier work will pay off again and again, making you more productive by obviating the need to repetitively solve the same problems.

-Mike
A few months ago I finished building and lighting the RedFrame house environment. Not including bathrooms, the house has 17 furnished rooms and a couple of outdoor areas. The general look has changed a lot since we last showed a demo. I've started to use higher contrast in many areas, and the general color scheme of each room has converged into a unified style, making each room feel unique. Here's a quick tour of some of the areas that convey the main feel of the game.

-Andrew

Posted in Uncategorized

Repurposing Old Systems

In any large game project it’s generally a very good idea to keep major systems decoupled. By avoiding direct method calls, it’s possible to build systems that communicate with one another without necessarily having to know about each other. This has one major benefit: a large system can be built in complete isolation, designed to send calls to components that either may not yet exist or may be replaced with another system in the future. Such a decoupled approach makes it relatively easy to modify systems without destabilizing other dependent systems.

While RedFrame’s mechanics are fairly simple in comparison to many first-person games, there are still a few complex systems that must communicate effectively and may change significantly throughout development. Specifically, our puzzles consist of a handful of components that must communicate with each other, but to allow us to rapidly prototype our puzzle ideas, these components must remain largely autonomous and only weakly connected to each other.

Method Calls vs Messaging

Several messaging systems are available to Unity developers:
– SendMessage built-in
– Manual reflection-based calls
– David Koontz system
– C# Messenger Extended
– Flashbang’s system

C# Events

C# includes a language feature for defining events. Since events can be foreign to many Unity developers, it’s worth briefly describing what they’re all about.

An event is built on top of a delegate. A delegate type defines a method signature; an event exposes a subscription point for methods matching that signature, which outside code may add handlers to or remove handlers from; and raising the event invokes every current subscriber in turn. The publisher never needs a reference to its listeners, which is exactly the decoupling we’re after.
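
Here’s a minimal sketch in plain C# (Door and DoorSound are hypothetical classes, invented purely for illustration):

using System;

public class Door
{
    // A delegate type defines the signature subscribers must match.
    public delegate void OpenedHandler();

    // An event exposes a subscription point built on that delegate;
    // outside code may only add (+=) or remove (-=) handlers.
    public event OpenedHandler Opened;

    public void Open()
    {
        // Raising the event calls every subscribed handler in turn.
        // The null check covers the case where no one has subscribed.
        if (Opened != null)
            Opened();
    }
}

public class DoorSound
{
    public DoorSound(Door door)
    {
        // Subscribe; the Door never needs to know this class exists.
        door.Opened += PlayCreak;
    }

    void PlayCreak()
    {
        Console.WriteLine("Creak!");
    }
}

DoorSound depends on Door, but Door compiles and runs with no knowledge of DoorSound – exactly the one-way relationship we want between systems.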

Reflection

The most decoupled way to call a method is through Mono’s reflection facilities in the System.Reflection namespace. With reflection it is possible to discover all methods, both public and private, that a given object contains. These methods may be invoked by their string name without requiring a hard reference to the method from within your code. If the method whose name you’re trying to call does not exist in the object that you’ve targeted, you can write your code to simply ignore the invocation.

This is, in fact, what Unity does internally whenever you include one of MonoBehaviour’s built-in events in your scripts, such as Awake and Update. Unity will reflect over all of the MonoBehaviour instances in the current scene, find these special methods by name, and invoke them at the appropriate time.
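
To make that concrete, here’s a minimal sketch of invoking a method by its string name (the MessageUtil class and the "Activate" method name are hypothetical):

using System.Reflection;

public static class MessageUtil
{
    // Invoke a parameterless method by name on any object, silently
    // doing nothing if the target doesn't define such a method.
    public static void TryInvoke(object target, string methodName)
    {
        MethodInfo method = target.GetType().GetMethod(
            methodName,
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);

        if (method != null)
            method.Invoke(target, null);
    }
}

// Usage: MessageUtil.TryInvoke(someComponent, "Activate");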

Writing your own reflection code can be a bit hairy, so Unity kindly includes a few utility methods that do it for you.

SendMessage & BroadcastMessage

Each script that you create inherits two useful reflection-based methods from MonoBehaviour: SendMessage and BroadcastMessage. SendMessage may be used to invoke a method by name on any object, including itself. BroadcastMessage goes one step further and invokes the method on all of the target object’s children, too. If SendMessage is a scalpel, BroadcastMessage is a sledgehammer.

With both of these tools at your disposal, you can safely invoke methods on another object that may or may not contain the method you expect.
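
For example (a hypothetical Lever script; Unlock and Rattle are simply assumed to exist on scripts attached to the door):

using UnityEngine;

public class Lever : MonoBehaviour
{
    public GameObject door;

    void OnMouseDown()
    {
        // Calls Unlock() on every script attached to door that defines it;
        // DontRequireReceiver suppresses the error when none do.
        door.SendMessage("Unlock", SendMessageOptions.DontRequireReceiver);

        // Same idea, but also walks every child of door.
        door.BroadcastMessage("Rattle", SendMessageOptions.DontRequireReceiver);
    }
}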

There is one glaring issue with this approach in relation to messaging: you first must know the object on which to invoke the method!

Custom Messaging Systems

– Flashbang’s system
– C# Messenger Extended
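
Whatever the implementation details, these systems share a common core: a central registry that maps a message name to its listeners, so that sender and receiver never reference each other directly. Here’s a minimal sketch of that general pattern (my own illustration, not the API of either library above):

using System;
using System.Collections.Generic;

public static class Messenger
{
    // Maps a message name to the combined delegate of its listeners.
    static readonly Dictionary<string, Action> listeners = new Dictionary<string, Action>();

    public static void AddListener(string message, Action callback)
    {
        if (listeners.ContainsKey(message))
            listeners[message] += callback;
        else
            listeners[message] = callback;
    }

    public static void RemoveListener(string message, Action callback)
    {
        if (listeners.ContainsKey(message))
            listeners[message] -= callback;
    }

    public static void Broadcast(string message)
    {
        Action callback;
        if (listeners.TryGetValue(message, out callback) && callback != null)
            callback();
    }
}

// A puzzle script can call Messenger.Broadcast("DoorOpened") while an audio
// script calls Messenger.AddListener("DoorOpened", PlayCreak); neither script
// ever references the other.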
It’s always a little sad to see good code slip into obscurity as gameplay changes and mechanics drift from their original goals. During our lengthy exploration into RedFrame’s core gameplay, a lot of our ideas reached a fairly playable state, only to be discarded once we embarked on our next prototype. But all is not lost; by diligently using version control (SVN – that’s for another post) we’ve retained a complete history of our creative and technical output. I’ll often revisit old systems to remind myself of previous ideas that may become relevant again some day.

One such forgotten system was an object carrying mechanic that I developed about a year ago. The system offered some neat affordances for both the player and the game designer: the designer could mark an object as “portable”, then mark valid drop locations on surfaces. At runtime, when the player approached the portable object it would highlight to indicate interactivity, then they could click the mouse to pull the object into their hand. There could never be a case where the player could permanently lose the object, such as by dropping it behind a couch, because the designer would not have designated that area as a valid drop location.

It was a great system, but it became a solution looking for a problem. We quickly ran into an interaction problem common to most adventure games: pixel hunt. It’s a major failure of design when the player is compelled to click aimlessly throughout an environment in an attempt to discover interactive items. The issue is bad enough on static screens in point-and-click adventures, and a full real-time 3d environment only magnifies the problem. The system had to be abandoned – it just didn’t work in the game.

Fast forward a year. Just last week we realized we had a related problem: our core gameplay had been reduced to interaction with 2d planes (we’ll talk more about this in future posts) and we’d lost the feeling of actively participating in this dense world we’d created. To avoid spoilers I won’t reveal the precise nature of the solution we’re currently exploring, but it turns out that my object pickup system was perfectly suited for the job.

At this point I have a known problem, and I have code that can potentially solve it… but how much of this code is actually usable? Luckily, the code came into our new project without any errors.

In general, it’s not uncommon for older code to have to be thrown away simply because it can’t easily interoperate with new systems. When it becomes more work to fix old code than to write new code, you can become trapped by constant churn that will bog down even a small project. To mitigate this, I try to structure my code in a very decoupled way.

Rather than writing my pickup and drop code against an existing player controller, I instead included two generic entrypoints into the system:

PortableObject FindNearestPortableObject (Transform trans, float maxDistance, float viewAngle)

This method searches for PortableObjects within a view frustum implied by the position and rotation of a given Transform object with a given angle-of-view. I chose to require a Transform rather than a Camera component since it can’t be guaranteed that our solution requires us to render a camera view. It’s generally best to require only the most generic parameters necessary to perform a desired operation. By artificially restricting the use of a method by requiring unnecessarily specific parameters, we harm future code re-use without adding any value.
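
To give a sense of the shape of the system, here’s a rough sketch of how such a search might work (this is not the actual RedFrame source – in particular, the static registry of live instances is my assumption):

using System.Collections.Generic;
using UnityEngine;

public class PortableObject : MonoBehaviour
{
    // Hypothetical registry of live instances, maintained by Unity callbacks.
    public static readonly List<PortableObject> All = new List<PortableObject>();

    void OnEnable() { All.Add(this); }
    void OnDisable() { All.Remove(this); }
}

public static class PortableObjectFinder
{
    public static PortableObject FindNearestPortableObject(Transform trans, float maxDistance, float viewAngle)
    {
        PortableObject nearest = null;
        float nearestDistance = maxDistance;

        foreach (PortableObject obj in PortableObject.All)
        {
            Vector3 toObject = obj.transform.position - trans.position;

            // Reject anything beyond the search radius or the current best.
            if (toObject.magnitude > nearestDistance)
                continue;

            // Reject anything outside the view cone implied by trans and viewAngle.
            if (Vector3.Angle(trans.forward, toObject) > viewAngle * 0.5f)
                continue;

            nearest = obj;
            nearestDistance = toObject.magnitude;
        }

        return nearest;
    }
}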

DropNode FindNearestUnusedNode (Transform trans, float maxDistance, float viewAngle)

On the surface, this method is effectively identical to FindNearestPortableObject. Internally, it uses an entirely different search algorithm. This is a case where conceptually similar tasks should require a similar invocation. This serves two purposes:

  1. Technical – It’s possible to swap two methods without re-working existing parameters, changing resulting behavior without having to track down new input data. This increases productivity while reducing the occurrence of bugs.
  2. Psychological – By using consistent parameters across multiple methods, the programmer’s cognitive load is significantly reduced. When it’s easier to grasp how a system works, and it requires less brain power to implement additional pieces of that system, the code is much more likely to be used by those who discover it.

Lastly, the system includes a PickupController. This is a general-purpose manager script that handles picking up and dropping one object at a time, using the main camera as input. PickupController has no dependencies outside of the scripts belonging to its own system – it assumes nothing about the scene’s GameObject hierarchy aside from the existence of a camera, and doesn’t require any particular setup of the GameObject that it is attached to. It simply scans for PortableObjects to grab and DropNodes to place them into. By making the fewest possible assumptions, it’s able to be included in just about any project without having to be modified.
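
Sketched out, the controller might look something like this (again a rough reconstruction rather than our actual source, building on the finder sketched above; DropNode is assumed, and the brute-force FindNearestUnusedNode here merely stands in for the real version, which as noted uses a different search algorithm internally):

using UnityEngine;

public class DropNode : MonoBehaviour
{
    public bool occupied;
}

public class PickupController : MonoBehaviour
{
    public float maxDistance = 2f;
    public float viewAngle = 60f;

    PortableObject held;

    void Update()
    {
        if (!Input.GetMouseButtonDown(0))
            return;

        // The only scene assumption: a main camera exists.
        Transform cam = Camera.main.transform;

        if (held == null)
        {
            // Empty-handed: grab the nearest portable object in view.
            held = PortableObjectFinder.FindNearestPortableObject(cam, maxDistance, viewAngle);
            if (held != null)
                held.transform.parent = cam; // carry it along with the camera
        }
        else
        {
            // Carrying something: place it into the nearest free drop node.
            DropNode node = FindNearestUnusedNode(cam, maxDistance, viewAngle);
            if (node != null)
            {
                held.transform.parent = null;
                held.transform.position = node.transform.position;
                node.occupied = true;
                held = null;
            }
        }
    }

    // Stand-in implementation; the real system searches differently.
    static DropNode FindNearestUnusedNode(Transform trans, float maxDistance, float viewAngle)
    {
        DropNode nearest = null;
        float nearestDistance = maxDistance;

        foreach (DropNode node in Object.FindObjectsOfType<DropNode>())
        {
            if (node.occupied)
                continue;

            Vector3 toNode = node.transform.position - trans.position;
            if (toNode.magnitude > nearestDistance)
                continue;
            if (Vector3.Angle(trans.forward, toNode) > viewAngle * 0.5f)
                continue;

            nearest = node;
            nearestDistance = toNode.magnitude;
        }

        return nearest;
    }
}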

Writing re-usable code certainly isn’t easy, but I’ve found that its long-term benefits tend to outweigh the cost of slightly increased development time. Once you’re comfortable with writing reusable code you’ll find that your earlier work will pay off again and again, making you more productive by obviating the need to solve the same problems over and over.

-Mike

Posted in Design, Programming