This past weekend was the 2015 Global Game Jam, a global event where people gather to create a game in 48 hours based on a provided theme. Due to general life responsibilities, including but not limited to taking care of our new Shiba Inu puppy Levi, I was not able to travel to one of the designated GGJ sites.
At the end of last year I made the decision to switch from my own custom tech to Unity3D for all future game projects. So while I couldn't take part in GGJ15 officially, I figured it would be a great opportunity to have some fun and increase my knowledge of the Unity3D toolset.
The theme for 2015's Global Game Jam was announced on Friday as "What do we do now?".
A Fitting Theme
I read about the announced theme on Saturday morning and it immediately struck a chord. My wife and I have had our aforementioned puppy for a little over three weeks now, and juggling life while taking care of a furry infant with razor-sharp teeth has given me a new appreciation for all the working parents out there. Many times we have asked ourselves, "What do we do now?".
With the theme in mind and puppy by my side, I started to note down ideas for my prototype. It would revolve around new parents having to care for a baby. Controlling the parents, the player would be required to juggle their daily life tasks while making sure the baby doesn't burn the house down. One of the first things that came to mind was a top-down, RTS-style game. Fog of war could be used in rooms the parents were not currently in, with security cameras allowing the player to check in on the baby's welfare while the parents were elsewhere carrying out their assigned tasks.
With families coming in all shapes and sizes, and wanting physics-based comedy (think Goat Simulator) to be a big part of the game, I decided against a human family for my protagonists. The answer was, as always, robots. The prototype would be tentatively titled "A Beautiful Bundle of Bolts".
Building a Prototype
My first steps in building the prototype were to construct a few modular level pieces in Blender that I could then import into Unity, turn into prefabs and assemble into a basic home for my robot family. My comfort level with Blender is pretty high at this point and I used this opportunity to refine my UV workflow. Each level piece is a separate model, exported as an .obj, but all pieces share UVs on a single atlas. Blender's texture paint mode provided a great way to mark out the various UV sets in each of the pieces.
Once I had the basic level pieces imported, colliders set up and everything saved as prefabs, my next step was to implement automatic doors for the robots to travel through. Doors were fairly simple to implement, requiring only a trigger collider, a basic script and a door open/close animation.
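For anyone curious, a door along these lines doesn't need much code. The sketch below shows the idea; the Animator parameter names ("Open"/"Close") are illustrative rather than the exact ones from my project, and it assumes the robots carry a Rigidbody so they register against the trigger:

```csharp
using UnityEngine;

// Minimal automatic door: opens when something enters its trigger volume,
// closes again when the last thing leaves. Assumes an Animator on the same
// GameObject with "Open" and "Close" trigger parameters (illustrative names).
public class AutomaticDoor : MonoBehaviour
{
    private Animator animator;
    private int occupants; // colliders currently inside the trigger volume

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    void OnTriggerEnter(Collider other)
    {
        occupants++;
        if (occupants == 1)
            animator.SetTrigger("Open");
    }

    void OnTriggerExit(Collider other)
    {
        occupants--;
        if (occupants == 0)
            animator.SetTrigger("Close");
    }
}
```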
Initially I had the robots following the mouse position using manual transforms. This was fine for testing, but I wanted the robots to work semi-autonomously once they had been given a location to move to. Up until this point I hadn't played around with Unity's navigation components, and my experience with them was by far the most pain-free part of this project. Telling Unity to bake a navigation mesh was a one-click process that automatically calculated accessible paths based on the level geometry.
Adding a NavMeshAgent component to the robot prefab and replacing my manual movement code with a call to SetDestination() resulted in the robot happily making its way to the target. When a robot reached a door, however, it would pass right through it. A quick Google search revealed the NavMeshObstacle component, which I added to my door prefab, and lo and behold the robot would now patiently wait at a door until it had opened and then continue through. Simple as that!
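In code it really is that short. A minimal version of the movement script looks roughly like this; the click-to-move raycast is a simplification of the prototype's robot selection logic, not the exact implementation:

```csharp
using UnityEngine;

// Sends the robot to the point clicked in the world using Unity's navigation
// system. Assumes a NavMeshAgent on the robot and a baked NavMesh in the scene.
[RequireComponent(typeof(NavMeshAgent))]
public class RobotMover : MonoBehaviour
{
    private NavMeshAgent agent;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;
            // Raycast against the level geometry to find where the player clicked,
            // then let the agent path there on its own.
            if (Physics.Raycast(ray, out hit, 100f))
                agent.SetDestination(hit.point);
        }
    }
}
```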
Setting up a basic UI and the security camera system was also a relatively simple process. Using Unity's new UI system was an enjoyable experience and I quickly had an interface in place for switching robots and cameras, along with some basic button animations. The security cameras themselves were manually placed in each of the rooms and are switchable from my GameController class via a SelectCamera() method. Wiring this up to the buttons was a matter of dragging the GameController object onto the button's OnClick event, selecting the method from the auto-populated drop-down and then dragging the appropriate camera into the argument field.
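The SelectCamera() method itself is only a few lines. This is a simplified sketch of the idea rather than the full GameController from the prototype; the securityCameras array name is illustrative:

```csharp
using UnityEngine;

// Switches between the manually placed security cameras. Each UI button calls
// SelectCamera with its own camera, assigned via the Inspector.
public class GameController : MonoBehaviour
{
    public Camera[] securityCameras; // assigned in the Inspector

    public void SelectCamera(Camera target)
    {
        // Enable only the requested camera so a single view renders at a time.
        foreach (Camera cam in securityCameras)
            cam.enabled = (cam == target);
    }
}
```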
With all the pieces in place I began working on the robot baby's AI. For now the AI is incredibly simple, yet I am pleasantly surprised with the outcome. Each update the robot baby does a sphere collision check for any game objects on the "Baby Detectable" layer. It then sorts the results of this check based on the speed of each detected object and its distance, favoring faster-moving objects first and closer objects second. Once the results have been sorted on these criteria, they are sorted again using a basic occlusion check, firing a ray from the baby to the game object in question. If the first collision is on the "Baby Detection Occluder" layer (walls and other level geometry), the game object is considered occluded. Unoccluded game objects are given preference over occluded ones. With the results sorted, the first game object in the list becomes the robot baby's new target and it begins a path towards that object.
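Put together, the targeting logic looks roughly like the sketch below. The layer names match the ones above, but the detection radius and the exact sort ordering are illustrative assumptions rather than the prototype's tuned values:

```csharp
using System.Linq;
using UnityEngine;

// Very simple "baby" targeting logic, sketched from the description above.
[RequireComponent(typeof(NavMeshAgent))]
public class RobotBabyAI : MonoBehaviour
{
    public float detectionRadius = 10f;  // illustrative value
    public LayerMask detectableLayer;    // set to "Baby Detectable" in the Inspector
    public LayerMask occluderLayer;      // set to "Baby Detection Occluder"

    private NavMeshAgent agent;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        // Find everything the baby could be interested in.
        Collider[] nearby = Physics.OverlapSphere(transform.position, detectionRadius, detectableLayer);
        if (nearby.Length == 0)
            return;

        // Unoccluded objects first, then faster-moving ones, then closer ones.
        Collider target = nearby
            .OrderBy(c => IsOccluded(c) ? 1 : 0)
            .ThenByDescending(c => c.attachedRigidbody != null ? c.attachedRigidbody.velocity.magnitude : 0f)
            .ThenBy(c => Vector3.Distance(transform.position, c.transform.position))
            .First();

        agent.SetDestination(target.transform.position);
    }

    // An object is occluded if level geometry sits between the baby and it.
    bool IsOccluded(Collider c)
    {
        Vector3 toTarget = c.transform.position - transform.position;
        return Physics.Raycast(transform.position, toTarget.normalized, toTarget.magnitude, occluderLayer);
    }
}
```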
Using just this simple logic the robot baby can and will travel all over the level. It will follow the parents. It will move towards opening and closing doors. It will follow moving physics objects. To test this in the prototype, a large number of balls drop into the first room of the house. It is interesting to watch the emergent paths the robot baby takes: following one ball, bumping into another ball on its path, that ball rolling into a door's sensor, and the robot baby then targeting the movement of the opening door. These are exactly the results I had in mind and I look forward to implementing more interactive/autonomous set pieces and seeing the chaos that evolves!
The End Result
Spread over four days, I spent roughly 30 hours on the prototype. With all design, art and coding done solely by myself, and with many of Unity's components still new to me, there were a lot of things I did not have time to implement. For all intents and purposes this is an AI/physics simulation with the core controls and security camera system in place. As it stands, however, I think the concept has legs and I will continue to update it over the coming weeks. As always with this type of thing it was difficult to stop working on it, but as it was prompted by GGJ15 I wanted to post what I achieved within the time constraints.
If you'd like to play around with the initial prototype, click the download link below. Currently it's OS X only and completely untested on any machine other than my own 2013 MacBook Pro.