CanHaptics Lab 2: Intro to Haply
To start off, as I was in COVID-19 quarantine when the Haplys were delivered, I received a pre-assembled Haply so that I wouldn't face assembly difficulties due to a lack of hardware. I did spend some time exploring the device and noticed a few fun things, like the magnetic end effector that can be swapped out for other versions for various applications.
I then took a look at the Processing and Fisica examples provided on the site. One of the problems I ran into while trying out these examples was that I didn't realize the switch on the Haply needed to be on to receive force feedback. I spent a good 30 minutes trying to figure out what the issue could be (I thought I might have messed up while flashing the device), until Kattie pointed out that the switch might be the root cause (thanks for helping out at 3 AM Kattie! *-*)
I also kept forgetting to return the end effector to its initial position before each run of the sketch, which kept messing up my calibrations. But eventually, it kind of became second nature. I also noticed that I had to manually select the board in Arduino while flashing the device; I wasn't sure if it was the Arduino M0 or Zero, but the port listing showed the detected board, so I went with Arduino Zero and things worked out fine.
Designing the maze:
Once I was able to sense the feedback and had the environment set up, I got down to designing the maze. I wanted to gamify this experience, as I've always wondered how people create those simple arcade games but never really tried it out. This seemed like a good opportunity for that.
The “Wall Physics” example already provided the key code snippet to create a wall in the virtual environment, and the “Maze Physics” and “Shapes Physics” examples further showed how I could create fun elements and add some dynamic control.
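To give a sense of what the wall example does under the hood, here is a minimal sketch in plain Java (my own illustration, not the Haply or Fisica API): a virtual wall renders force by detecting how far the end effector has penetrated it and pushing back with a spring-like penalty force. The names and constants are hypothetical.

```java
// Illustration of virtual-wall force feedback: a spring-like penalty force
// pushes the end effector back out once it penetrates the wall.
public class VirtualWall {
    // Wall occupies everything below y = WALL_Y (hypothetical workspace coordinate).
    static final double WALL_Y = 0.10;  // metres
    static final double K_WALL = 450.0; // N/m, hypothetical wall stiffness

    // y-component of the force applied to the end effector
    static double wallForceY(double effectorY) {
        double penetration = effectorY - WALL_Y;
        if (penetration <= 0) return 0.0; // free space: no force
        return -K_WALL * penetration;     // inside the wall: push back proportionally
    }

    public static void main(String[] args) {
        System.out.println(wallForceY(0.08)); // free space
        System.out.println(wallForceY(0.12)); // 2 cm inside the wall
    }
}
```

The stiffer the spring constant, the harder the wall feels, up to the point where the device starts to vibrate unstably.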
For the game, I thought of a simple puzzle where the user could guide a dog from one end to another. My design was partially inspired by the layout at UBC, and then I randomly added elements like dogs, trees, etc. to create a more convincing environment. This was my initial sketch of the design:
Coding the maze:
To convert this into the digital environment, I used various shapes and colors to create objects like cars, dogs, trees, and buildings. I used the HTML color picker from W3Schools to select various shades that I felt would make the objects more recognizable. Instead of simply having walls as boundaries, I placed the objects such that they too became a part of the maze.
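For anyone curious how a hex code from a color picker turns into the values Processing's fill() and stroke() calls expect, here is a small helper (my own illustration, not part of the lab code) that splits a hex string into its red, green, and blue components.

```java
// Convert a "#RRGGBB" hex string (e.g. from the W3Schools color picker)
// into the separate r, g, b values used by Processing's fill()/stroke().
public class HexColor {
    static int[] hexToRgb(String hex) {
        int v = Integer.parseInt(hex.replace("#", ""), 16);
        return new int[] { (v >> 16) & 0xFF, (v >> 8) & 0xFF, v & 0xFF };
    }

    public static void main(String[] args) {
        int[] rgb = hexToRgb("#8B4513"); // a brown shade, e.g. for a tree trunk
        System.out.println(rgb[0] + ", " + rgb[1] + ", " + rgb[2]); // 139, 69, 19
    }
}
```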
I then wanted to label the elements, to add more meaning and some guiding steps for the user, so I referred to the "Maze Physics" example, which uses text, to integrate labels into my code.
I also wanted the user to know when they had solved the puzzle, so I added some code to switch to a congratulatory screen when the dog avatar reaches the owner.
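The win check itself boils down to a distance test. Here is a minimal sketch of the idea in plain Java (hypothetical names and radius; the actual sketch reads body positions from Fisica): when the dog avatar comes within a small radius of the owner, the game flips to the congratulatory screen.

```java
// Win condition: has the dog avatar reached the owner?
public class WinCheck {
    static final double WIN_RADIUS = 1.5; // hypothetical threshold, in world units

    static boolean reachedOwner(double dogX, double dogY, double ownerX, double ownerY) {
        double dx = dogX - ownerX;
        double dy = dogY - ownerY;
        return Math.sqrt(dx * dx + dy * dy) < WIN_RADIUS; // close enough to win
    }

    public static void main(String[] args) {
        boolean gameWon = reachedOwner(24.0, 7.2, 24.5, 7.0); // avatar next to owner
        System.out.println(gameWon ? "Congratulations!" : "Keep going");
    }
}
```

In the draw loop, a flag like this would select which screen to render on each frame.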
I also referred to the Fisica library for additional functions, like removing the boundary stroke from objects, and added my own paw-print avatar instead of the default Haply avatar.
The code for Lab 2 is in the folder sketch_7_Maze_Lab. The img folder in the same directory contains the avatar PNG used. Icons…
Fun update: After uploading the code to GitHub, I realized during a final check that I had uploaded old code. The reason: Processing does not autosave code when uploading to the device, and my laptop had unexpectedly crashed an hour earlier, which meant all my latest code was gone :) Thankfully this thread, and the last answer in it, helped me get it back, so I'm putting it here in case this happens again and I come close to flinging my laptop out of the window.
It took me around 4–6 hours to design and code the maze; a majority of that time went into figuring out the coordinates for the various text labels and objects to place them on the screen. I also spent some time trying to stop the avatar from rotating, but I couldn't find the function or code for it, so I left it as it was.
In the future, I could definitely see myself expanding this with more elements and modalities: sound effects of cars and dogs when the avatar gets too close, and a fun sound when the avatar reaches the owner. Such effects would make the game more engaging. I would also want to increase the complexity by creating a more intricate layout, but for the scope of this lab, this feels good enough.
This was a fun lab where I got to try out gamification, albeit on a minuscule level, and experience force-feedback environments. Learning how to create virtual haptic environments and how to troubleshoot the hardware will also prove useful going forward.