Skate Space – Q Learning and Value Iteration

This video is a quick demo of the project I made for my AI class in Fall 2018. The project explored value iteration and Q-learning in a domain where the agent, once moving, does not stop until it hits a wall or the edge of the grid. I worked with four other classmates on this project, serving as the lead programmer: I implemented both value iteration and Q-learning, as well as a level creator that builds grids from input JSON files.
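
Conceptually, the sliding movement only changes the transition function; the Bellman update itself stays standard. Below is a minimal value-iteration sketch for such a grid, written in C# with an illustrative layout and reward scheme (not the project’s actual code):

```csharp
using System;

// Value iteration for a "skate" gridworld: an action slides the agent
// until it hits a wall or the edge of the grid. Layout and rewards are
// illustrative placeholders.
class SkateValueIteration
{
    const int W = 4, H = 4;
    static readonly bool[,] wall = new bool[W, H];       // true = wall cell
    static readonly double[,] reward = new double[W, H]; // reward for landing on a cell
    static readonly (int dx, int dy)[] actions = { (1, 0), (-1, 0), (0, 1), (0, -1) };

    static bool InBounds(int x, int y) => x >= 0 && x < W && y >= 0 && y < H;

    // Slide from (x, y) in direction (dx, dy) until blocked.
    static (int, int) Slide(int x, int y, int dx, int dy)
    {
        while (InBounds(x + dx, y + dy) && !wall[x + dx, y + dy]) { x += dx; y += dy; }
        return (x, y);
    }

    static void Main()
    {
        reward[3, 3] = 1.0; // illustrative goal cell
        var V = new double[W, H];
        const double gamma = 0.9;

        // Bellman backup: transitions are deterministic, so the expectation
        // over next states collapses to a single successor per action.
        for (int iter = 0; iter < 100; iter++)
            for (int x = 0; x < W; x++)
                for (int y = 0; y < H; y++)
                {
                    if (wall[x, y]) continue;
                    double best = double.NegativeInfinity;
                    foreach (var (dx, dy) in actions)
                    {
                        var (nx, ny) = Slide(x, y, dx, dy);
                        best = Math.Max(best, reward[nx, ny] + gamma * V[nx, ny]);
                    }
                    V[x, y] = best;
                }

        Console.WriteLine($"V(0,0) = {V[0, 0]:F3}");
    }
}
```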

The GitHub repo can be found here.

PolyRules

During my time at the UCR Brain Game Center, I have been charged with numerous projects. One such project is PolyRules, a pattern-matching game used in several studies involving inhibitory control. In the game, the player is presented with shapes, colors, textures, sizes, and sub-shapes and has to match on those properties. I did not build this app from the ground up; I inherited it from a previous developer and have been tasked with maintaining it and adding new features as needed.

UI Changes

One large change I implemented was how users access different levels. The original version of the game had no level selection: users were simply presented with new levels one after another, with nothing in between. We wanted to give players the freedom to go back and play other levels, and a place to see their progression. I implemented the layout above, which shows which levels exist and whether you have unlocked them, and lets you navigate between them by clicking a bubble at the bottom or by swiping left/right on the screen.

This view also shows each level’s layout in the background, along with the “Paulie” character, which tells the player what kind of pattern matching to expect in a level. Paulie was my idea; the art and animations were created by Yvette Chen, who was an intern for us at the time.

Procedural Level Generation

When I took over the project, we were gearing up for a study that needed several thousand semi-procedurally generated levels. By semi-procedural, I mean that the properties you match on and the level layouts can be randomly generated within pre-defined constraints. At the time, there was no easy way to make levels in bulk; the approach back then would have been to copy-paste a LOT of JSON data and modify each file by hand. Of course, being a programmer, I wanted to make my life easier so I didn’t have to spend the better part of several days manually editing JSON files.

So, I developed a Unity editor extension that lets me specify constraints and generate levels in bulk. The constraints dictate what kinds of levels will be generated; upon running the generator, the level JSON files are created en masse and put where they need to go in the file system.
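
For a sense of the shape of such a tool, here is a stripped-down sketch of a constraint-driven bulk generator as a Unity EditorWindow. The constraint fields and the JSON schema are hypothetical stand-ins, not PolyRules’ real ones:

```csharp
using System.IO;
using UnityEditor;
using UnityEngine;

// Sketch of a bulk level generator as a Unity EditorWindow. The constraint
// fields and JSON shape are hypothetical, not PolyRules' actual schema.
public class LevelGeneratorWindow : EditorWindow
{
    int levelCount = 100;
    int minShapes = 2, maxShapes = 5;
    string outputFolder = "Assets/Levels";

    [MenuItem("Tools/Bulk Level Generator")]
    static void Open() => GetWindow<LevelGeneratorWindow>("Level Generator");

    void OnGUI()
    {
        levelCount = EditorGUILayout.IntField("Level Count", levelCount);
        minShapes = EditorGUILayout.IntField("Min Shapes", minShapes);
        maxShapes = EditorGUILayout.IntField("Max Shapes", maxShapes);
        outputFolder = EditorGUILayout.TextField("Output Folder", outputFolder);

        if (GUILayout.Button("Generate"))
            Generate();
    }

    void Generate()
    {
        Directory.CreateDirectory(outputFolder);
        for (int i = 0; i < levelCount; i++)
        {
            // Randomize each level within the configured constraints.
            var level = new LevelData
            {
                id = i,
                shapeCount = Random.Range(minShapes, maxShapes + 1)
            };
            File.WriteAllText(Path.Combine(outputFolder, $"level_{i}.json"),
                              JsonUtility.ToJson(level, prettyPrint: true));
        }
        AssetDatabase.Refresh(); // make the new files visible in the editor
    }

    [System.Serializable]
    class LevelData { public int id; public int shapeCount; }
}
```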

Needless to say, this interface saved me many hours of manual, annoying work. It also gives me a great deal of control and precision over how our levels are generated, and since all the generation happens in one place, it’s easy for me to take collaborator requirements and implement them.

Versioned File Downloads From the Cloud

Another big feature I implemented was a versioned file system that allows the app to download level templates, food images, and level data from the cloud. This system has been a great benefit: before it existed, fixing a bug in a level template meant making the change and pushing an entirely new build to the app store. Pushing a build typically takes more than a day, and even then users have to update the app to get the change.

My new system instead checks in with our AWS infrastructure to see whether newer versions of the files exist on S3. If so, those files are returned to the client and the new data is written to disk. This lets us make quick changes, push them to our S3 bucket, and have those changes propagate down to clients the next time they log in.
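
A minimal sketch of the idea, assuming a recent Unity version and a hypothetical manifest format (the real system’s endpoints and file layout differ):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of the version check: compare a remote manifest version against
// a locally cached one and re-download only when it is newer. The URL,
// manifest shape, and file names are hypothetical.
public class VersionedDownloader : MonoBehaviour
{
    const string ManifestUrl = "https://example-bucket.s3.amazonaws.com/manifest.json";

    [System.Serializable]
    class Manifest { public int version; public string fileUrl; }

    public IEnumerator SyncFiles()
    {
        using (var req = UnityWebRequest.Get(ManifestUrl))
        {
            yield return req.SendWebRequest();
            if (req.result != UnityWebRequest.Result.Success) yield break;

            var remote = JsonUtility.FromJson<Manifest>(req.downloadHandler.text);
            int localVersion = PlayerPrefs.GetInt("fileVersion", 0);
            if (remote.version <= localVersion) yield break; // already up to date

            using (var fileReq = UnityWebRequest.Get(remote.fileUrl))
            {
                yield return fileReq.SendWebRequest();
                if (fileReq.result != UnityWebRequest.Result.Success) yield break;

                // Cache the new data and remember which version we have.
                File.WriteAllBytes(Path.Combine(Application.persistentDataPath, "levels.json"),
                                   fileReq.downloadHandler.data);
                PlayerPrefs.SetInt("fileVersion", remote.version);
            }
        }
    }
}
```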

Food Rules

Users match the color because carrots are a healthy food

A big feature I implemented was an additional game mode where users still match patterns, shapes, and colors, but only when they see healthy food. When they see unhealthy food, they abstain from matching for three seconds, after which the next stimulus appears. This feature is used by the Salvy Lab at Cedars-Sinai hospital in Los Angeles for a current study and was used in a previous study that has since been published.

This feature required significant refactoring of the existing codebase, as well as several hundred MB of food images stored on the device for rendering. It is what necessitated the versioned file checker discussed above: we didn’t want to ship 700+ MB of data with the app, especially when we also run studies that don’t use the Food Rules feature at all. It would be wasteful to make users who don’t need the food images download a 1+ GB app from the app store onto their device. Instead, the images are downloaded per device when the user signs into a study that uses them.

Food Rules also required new level layouts, which I implemented myself: chef tables and restaurant tables with a subway-tile-themed background. The background was the most interesting part because of the way PolyRules renders backgrounds: they are drawn directly from vertices. So, I had to develop a background layout that properly generates rectangular graphics from individual vertices.
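
To illustrate the vertex-level approach, here is a minimal sketch that builds one rectangular background tile from four vertices and two triangles (coordinates and winding are illustrative):

```csharp
using UnityEngine;

// Sketch of building a rectangular background tile directly from vertices,
// in the spirit of how PolyRules draws its backgrounds. Sizes are illustrative.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class QuadBackground : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();
        // Four corners of one rectangle, starting at the bottom-left.
        mesh.vertices = new[]
        {
            new Vector3(0, 0), new Vector3(1, 0),
            new Vector3(1, 1), new Vector3(0, 1)
        };
        // Two triangles, wound clockwise so the quad faces a standard 2D camera.
        mesh.triangles = new[] { 0, 2, 1, 0, 3, 2 };
        mesh.uv = new[]
        {
            new Vector2(0, 0), new Vector2(1, 0),
            new Vector2(1, 1), new Vector2(0, 1)
        };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```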

Spanish Localization

I helped implement Spanish localization in the app. To do so, I worked with our intern, Calvin Yoh, to integrate the localization system he developed for the BGC into PolyRules. This also required working with several interns and collaborators with native fluency so they could translate the text into Spanish. The localization system lets the user switch between Spanish and English; the preference is saved per user and remembered whenever they sign into the app. We plan to support more languages in the future.
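
A minimal sketch of the per-user preference idea, assuming a user ID is available at sign-in; the key scheme and Language enum are hypothetical, not the actual BGC system:

```csharp
using UnityEngine;

// Hypothetical per-user language preference, keyed by user ID so each
// account on a shared device keeps its own setting.
public enum Language { English, Spanish }

public static class LocalePreference
{
    public static void Save(string userId, Language lang) =>
        PlayerPrefs.SetString($"lang_{userId}", lang.ToString());

    public static Language Load(string userId) =>
        System.Enum.TryParse(PlayerPrefs.GetString($"lang_{userId}", "English"),
                             out Language lang) ? lang : Language.English;
}
```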

What I’ve Learned

I always like to think about what I’ve learned through these projects. PolyRules has taught me the following:

  • Rendering 2D/3D graphics with vertices
  • Building complex editor extensions
  • Working with Unity web requests to pull data from the cloud
  • Rapidly adjusting priorities and features based on user requirements and feedback
  • Implementing animated UI elements
  • Diving into a large, existing, undocumented codebase and extending and refactoring it as needed

BGC Study Portal

An ongoing project of mine at work is a website used by internal BGC staff, BGC collaborators, and study participants to manage studies, visualize study data, and view study performance. The site is built with Microsoft’s new Blazor framework on ASP.NET Core, running on an AWS EC2 instance.

Authentication and Permissions

Users are required to log in to the site. Authentication and authorization work through ASP.NET Core’s authorization tooling, which communicates with AWS Cognito to validate authorization tokens. Every user has a role, and each role has a set of permissions that restrict or allow certain features and views on the website. For example, researchers can view raw log files and data graphs, but participant accounts cannot see any of that information.
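
As a rough illustration of role-gated endpoints using ASP.NET Core’s authorization attributes (the role names and routes here are made up, not the portal’s):

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

// Sketch of role-based gating: researchers can pull raw logs, while any
// authenticated user only sees their own summary. Roles and routes are illustrative.
[ApiController]
[Route("api/logs")]
public class LogsController : ControllerBase
{
    // Restricted to research staff roles.
    [HttpGet("{fileName}")]
    [Authorize(Roles = "Researcher,Coordinator")]
    public IActionResult GetLog(string fileName) => Ok($"contents of {fileName}");

    // Available to any signed-in user, including participants.
    [HttpGet("summary")]
    [Authorize]
    public IActionResult GetSummary() => Ok("per-user summary");
}
```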

Participant Management

Researchers and research coordinators can view anonymized participant information within their study. Participant management happens here, such as modifying a participant’s status in a study or sending the participant required surveys. Participants can also be enrolled directly into a study from this page.

Analytics

The site is capable of rendering complex charts. At the moment, none of the charts are especially complex, but I am working with a data analyst to render charts created with the R programming language on the page.

Architecture

The site runs Blazor WebAssembly on the client side and ASP.NET Core on the web server, which runs on an AWS EC2 instance with Amazon Linux 2. The UCR BGC has numerous Unity applications that run on iOS, Android, Windows, and macOS. These applications use a REST API hosted on AWS API Gateway, which handles uploading log and error files to AWS S3; the apps also request files from S3 through this API.

The study portal works with the log and error files the applications send to AWS. It also interfaces with DynamoDB, AWS RDS running MySQL, and AWS S3 for a variety of features, such as pulling survey data, participant information, and log files. We are slowly migrating away from the serverless API Gateway architecture and back to hosting all of our APIs on EC2 through our ASP.NET Core application.
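
For flavor, here is a minimal sketch of the server-side S3 access described above, using the AWS SDK for .NET; the bucket name and key scheme are placeholders:

```csharp
using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

// Sketch of reading a log file from S3 on the server. Bucket and key
// names are placeholders; credentials come from the environment.
public class LogFileStore
{
    readonly IAmazonS3 s3 = new AmazonS3Client();

    public async Task<string> ReadLogAsync(string key)
    {
        using GetObjectResponse response = await s3.GetObjectAsync("bgc-logs-bucket", key);
        using var reader = new StreamReader(response.ResponseStream);
        return await reader.ReadToEndAsync();
    }
}
```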

Overall, this project has taught me a lot about system design, cloud computing, ASP.NET Core, and Blazor.

Listen Auditory Training

My first project for the UCR Brain Game Center was to add some new features to their existing auditory training game, Listen. Over the course of several months, I implemented a graphics overhaul, custom player skins, an N-Back memory training task, scoring, tutorials, dialogue, and a UI overhaul. Listen can be downloaded on the App Store and Google Play.

Graphics

Before my involvement, the game’s environment was just a dark cave. This was not very engaging for the user and was, honestly, a bit depressing. Considering that users would likely be playing for 30+ minutes at a time, over multiple days (per our study requirements), we needed a more inviting visual aesthetic. I decided to implement a nature theme where the vegetation around the player blooms and decays depending on player performance. To implement this, I developed a dynamic mesh creation system for the ground, which spawns vertices and builds quads out of them, using Perlin noise to randomize heights. Additionally, I used the mTree asset for the vegetation in the scene, which saved me the effort of implementing a tree-generation algorithm.
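
Here is a condensed sketch of the technique: a vertex grid with Perlin-noise heights, stitched into triangles. The dimensions and noise scale are illustrative, not the game’s actual values:

```csharp
using UnityEngine;

// Dynamic ground mesh: a (width+1) x (depth+1) grid of vertices whose
// heights come from Perlin noise, connected into two triangles per cell.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class PerlinGround : MonoBehaviour
{
    public int width = 20, depth = 20;
    public float noiseScale = 0.3f, heightScale = 2f;

    void Start()
    {
        var vertices = new Vector3[(width + 1) * (depth + 1)];
        for (int z = 0, i = 0; z <= depth; z++)
            for (int x = 0; x <= width; x++, i++)
            {
                float y = Mathf.PerlinNoise(x * noiseScale, z * noiseScale) * heightScale;
                vertices[i] = new Vector3(x, y, z);
            }

        var triangles = new int[width * depth * 6];
        for (int z = 0, t = 0; z < depth; z++)
            for (int x = 0; x < width; x++)
            {
                int i = z * (width + 1) + x;
                // Two triangles per grid cell, wound to face upward.
                triangles[t++] = i; triangles[t++] = i + width + 1; triangles[t++] = i + 1;
                triangles[t++] = i + 1; triangles[t++] = i + width + 1; triangles[t++] = i + width + 2;
            }

        var mesh = new Mesh { vertices = vertices, triangles = triangles };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```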

Skins

I also added new skins for the player. In this game, the player is just a set of particle systems combined together; the skins work the same way, but the particle systems are modified to look distinct. Skins are unlocked by completing achievements within the game, such as earning a certain number of gold medals.

N-Back

The N-Back memory task already existed in the game, but it was not functional enough to be used in a study. I updated the task so that it can accommodate up to 2-back: the user starts at 1-back, and once they have sufficiently mastered it, the 2-back tasks unlock.
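
The core check is small: a response counts as a match when the current stimulus equals the one shown N steps earlier. A minimal sketch, with an illustrative stimulus type and class name:

```csharp
using System.Collections.Generic;

// N-back matching: keep the last n stimuli in a queue; the front of the
// queue is the stimulus from exactly n steps ago.
public class NBackTask
{
    readonly int n;
    readonly Queue<int> history = new Queue<int>();

    public NBackTask(int n) => this.n = n;

    // Returns true if "match" is the correct response for this stimulus.
    public bool IsMatch(int stimulus)
    {
        bool match = history.Count == n && history.Peek() == stimulus;
        history.Enqueue(stimulus);
        if (history.Count > n) history.Dequeue();
        return match;
    }
}
```

With this shape, unlocking 2-back is just constructing the task with n = 2 instead of 1.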

Scoring

I added a scoring system so the user has some measure of how well they are doing. The scoring is relative to the user’s own performance: many of our participants have hearing deficits and might struggle with tasks that somebody with normal hearing would consider easy, and we don’t want them to feel discouraged. So the scoring adapts to their own baseline performance using a running weighted average. When users complete obstacles in a level, they earn a Gold, Silver, Bronze, or no medal, depending on how well they did. Medals are tracked in the UI, and an overall average is shown at the end of a task.
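
A minimal sketch of the idea, using an exponentially weighted running average as the personal baseline; the weight and medal cutoffs are illustrative, not the game’s tuned values:

```csharp
// Performance-relative scoring: medals are judged against the user's own
// running baseline, not an absolute bar. Alpha and cutoffs are illustrative.
public class RelativeScorer
{
    double baseline = 0.5;    // the user's typical performance so far
    const double Alpha = 0.1; // how quickly the baseline adapts

    public string ScoreObstacle(double performance) // performance in [0, 1]
    {
        double ratio = performance / System.Math.Max(baseline, 1e-6);

        // Update the running weighted average after judging this obstacle.
        baseline = Alpha * performance + (1 - Alpha) * baseline;

        if (ratio >= 1.2) return "Gold";
        if (ratio >= 1.0) return "Silver";
        if (ratio >= 0.8) return "Bronze";
        return "None";
    }
}
```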

Dialogue

I also implemented dialogue in the game, though there are currently no NPCs with dialogue in the release version of the app. The original intent was to have a story in the game and NPCs you sometimes speak with. My testing shows the dialogue system works, but it is not currently used in the release build.

Future Features

Ultimately, I envision Listen becoming a simple strategy game with the endless runner mixed into it. I’ve been managing an intern for the last year on the development of a prototype. My goal is to make the game enjoyable outside of a study context: an app that random people around the world want to download and play because it is fun. To this end, I see a way for the game to incorporate resource management, city building, and simple zero-sum strategy elements. I won’t go into the details, but I hope to see this vision become a reality in the coming year.

Wendy’s Colors – Hack Fresno 2018

I’m way late writing this, but here it goes. Last year, I participated in Fresno’s local hackathon, HackFresno 2018. It was my first time doing anything like this: we had two days to come up with an idea, develop it, and present it to judges. I worked with my classmates, Zaira Rivera and Agustin Rivera, and we won Best App in the Education category. You can read about it on our poorly constructed Devpost.

The program was called Wendy’s Colors. The idea was based on the color overlays some dyslexic people use to make reading easier (here’s some research on the topic). We wanted to develop a Chrome extension that could recolor the letters on websites, and an app that could do the same for text in a book. The plan was to take the text, intelligently generate a color palette, and apply that palette to the letters of whatever text we were working with.

Agustin created the Chrome extension that recolored the words on websites. I worked on a Unity3D program intended to run on a smartphone, though I never got it that far. The Unity program used Vuforia’s OCR libraries to read text from a page; once the text was digitized, a color palette was created, applied to the text, and the result was output on the screen.
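
The recoloring step itself is simple. Here is a sketch that cycles a small palette across the letters of digitized text using Unity’s rich-text color tags; the palette and names are made up, not our hackathon code:

```csharp
using System.Text;

// Cycle a palette over the letters of a string using Unity rich-text
// <color> tags, leaving spaces and punctuation untouched.
public static class TextRecolorer
{
    static readonly string[] palette = { "#d1495b", "#00798c", "#edae49" };

    public static string Recolor(string text)
    {
        var sb = new StringBuilder();
        int letterIndex = 0;
        foreach (char c in text)
        {
            if (char.IsLetter(c))
                sb.Append($"<color={palette[letterIndex++ % palette.Length]}>{c}</color>");
            else
                sb.Append(c);
        }
        return sb.ToString();
    }
}
```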

There are obviously a lot of bugs in the program as-is, but we only had 48 hours, and we started from a place of inexperience with Chrome extensions and OCR technology. Overall, I’m proud of what we did.

Circles OST

My first non-programming post! A few years ago, I worked with my very talented friend, Arun Naina, to create the music for his kinetic novel, Circles. At the time, I hadn’t yet gone back to school and was still making music regularly. I wrote many different songs and worked with Arun to determine what he liked, what he didn’t, and what needed to be changed. We ended up with six songs in the final game (the game was short).

This was the first time I’d made music and SFX for a game, and it was really fun and creative. I had to approach the music differently than I normally did: in Circles, we wanted the music to not be too distracting, mostly providing ambiance and reinforcing the mood of a specific scene. In hindsight, a few of the songs were busier than I’d like, but I’m still proud of the work and learned a lot from the process.

I wrote and recorded all the music, using Cubase 5 as my DAW along with several of its built-in VSTs and the sounds on my Motif XS8. The soundtrack can be found on my SoundCloud.

Node Based Quest System (NBQS)

Alongside my Node Based Dialogue System (NBDS), I created a node-based quest system. This is a Unity editor extension that provides a relatively easy-to-use interface for creating quest systems. Here are the core features (a sketch of the implied data model follows the list):

  • Create quests with unique names
  • Create objective paths within a single quest, which let the user complete objectives concurrently
  • Create linear objectives within each objective path
  • Specify which NPC gives the quest
  • Specify prerequisite quests that must be completed to unlock the current quest
  • Specify game states that must be true before the quest unlocks
  • Create multiple dialogue interactions for quest description/acceptance, in-progress, and turn-in
  • Easily create and remove game states using the State Editor tool
  • In-depth written documentation and an introductory video tutorial
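
Here is the rough data model those features imply: a quest holds parallel objective paths, each a linear list of objectives, plus its unlock conditions. The field names are hypothetical, not NBQS’s actual types:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the quest data model: parallel objective paths
// run concurrently, while objectives within a path complete in order.
[System.Serializable]
public class Quest
{
    public string questName;                   // unique name
    public string questGiverNpc;               // NPC who gives the quest
    public List<string> prerequisiteQuests;    // must be completed first
    public List<string> requiredGameStates;    // must be true to unlock
    public List<ObjectivePath> objectivePaths; // completed concurrently
}

[System.Serializable]
public class ObjectivePath
{
    public List<Objective> objectives;         // completed in order
}

[System.Serializable]
public class Objective
{
    public string description;
    public bool isComplete;
}
```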

I made this asset mostly as a personal project over the summer, but I hope to update it and fix bugs after I graduate this fall.

You can view the GitHub repo here. The Unity package, specifically, can be found here.

Node Based Dialogue System (NBDS)

One of my projects this summer has been creating a dialogue system in Unity. I wanted a system that lets you create customizable, branching dialogue between characters in a game using a custom Unity editor UI. I’ve hosted the project on GitHub, and the Unity asset package is downloadable from the root directory. I hope to continue refining the system and fixing bugs as time permits.

The system is implemented using finite state machines. I felt this was the best data structure for such a system. I played with the idea of using tree structures, such as a BST or B-tree, but FSMs made the most sense when I wrote everything out on paper: branching dialogue can loop back to earlier nodes, which a tree cannot represent cleanly.
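
To make the FSM framing concrete, here is a minimal sketch where each dialogue node is a state and the player’s choice selects the transition. The names are illustrative, not NBDS’s actual API:

```csharp
using System.Collections.Generic;

// Dialogue as a finite state machine: each node is a state, and a player
// choice is the transition label. Cycles back to earlier nodes are allowed.
public class DialogueNode
{
    public string speaker;
    public string text;
    // Maps a player choice to the next state; no entry means the dialogue ends.
    public Dictionary<string, DialogueNode> transitions = new Dictionary<string, DialogueNode>();
}

public class DialogueStateMachine
{
    public DialogueNode Current { get; private set; }

    public DialogueStateMachine(DialogueNode start) => Current = start;

    // Advance on a choice; returns false when the conversation is over.
    public bool Choose(string choice)
    {
        if (!Current.transitions.TryGetValue(choice, out var next))
            return false; // no such transition: conversation ends
        Current = next;
        return true;
    }
}
```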

Here’s the how-to video, which shows the features of the asset.

Rock Bot

The Spring 2018 semester is over, and I wanted to share the project I worked on for my game development class. The game is called “Rock Bot”. You play as a formerly deceased rockstar whose preserved brain has been transplanted into a robot body in the distant future. Humanity has transhumanized itself into machines in an attempt to survive Earth’s increasingly harsh environmental conditions, and in doing so chose to eliminate emotion from its programming in order to focus on maximizing productivity toward a common goal. Your job is to bring emotion back into the world by, basically, rocking out on everyone.

My role in this project was group leader, game designer, and lead programmer. I learned a lot about game engine development, delegating tasks to team members, and being a better leader. I think I still have a lot to work on as a team leader, but I learned a lot from my failures and hope to apply those lessons to my next group project or job.

The video above details the development process and features.

GitHub.

Wrong Dimension

Wrong Dimension is a 2D platformer that uses depth as its second dimension instead of height. I came into this project halfway through as the lead programmer. I assisted in refactoring the designer’s (Michael Troup) code and implemented simple patrolling AI, UI elements, graphics scaling, and custom button mapping.

We used the Unity3D game engine with C#.

Click the link above to see the game’s Steam Store page.