Team Anxious, Project Iteration 1

Unma Desai
10 min read · Mar 8, 2021


To give a brief overview of our course project: we aim to create an affective emotion journal for people with anxiety, providing guided reflection in the short term while also serving as documentation for therapeutic purposes, if needed, in the longer term. We discussed multiple modalities for this experience (an art journal, music generation, or gamification), all centered around haptic feedback. We decided to roughly structure our work by dividing the project into iterations; in this first iteration we focused mainly on analyzing what base haptic sensations we could derive from the Haply that could potentially be calming.

Different potential modalities for our project

Initially, we brainstormed potential sensations, like popping, squishing, and spinning (like a fidget spinner), and were inclined to work independently on particular sensations and then report back to discuss our findings. However, after Sri’s Haply broke, we moved to a more collaborative process (with its own pros and cons) of evaluating the haptic feedback together over Zoom calls (as my computer hates Discord :))

We met roughly once every three days for about two hours; we didn’t really stick to a schedule and would meet up whenever we were free or wanted to try something out.

Base Sensations:

Fluid properties:

To start with, we used the existing lab code to see what different sensations we could feel. We found the Maze example intriguing and particularly wanted to test whether the water layer had a higher density or friction that would result in a different haptic experience. However, even after changing the configuration in the code, the layer difference was visible but haptically it still felt the same, something Raquel and I both confirmed.
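
We didn’t dig into exactly how the lab code implements its water layer, but one plausible way to get a genuinely different feel would be to treat the water as a Fisica sensor region and apply a viscous drag force to the avatar while it is inside. The sketch below is our own minimal reconstruction, not the lab code; the names (water, avatar, DRAG) and all values are illustrative. Notably, a sensor region produces no collision forces of its own, which would also be consistent with why changing its density or friction didn’t change the feel.

```java
// A minimal reconstruction, not the lab code: the "water" is a Fisica
// sensor region that applies viscous drag to the avatar while inside.
import fisica.*;

FWorld world;
FBox water;
FCircle avatar;
final float DRAG = 5.0;   // drag coefficient; tune by feel

void setup() {
  size(600, 400);
  Fisica.init(this);
  world = new FWorld();
  world.setEdges();

  water = new FBox(600, 150);   // "water" fills the lower part of the scene
  water.setPosition(300, 325);
  water.setStatic(true);
  water.setSensor(true);        // contact info only, no collision response
  water.setFill(100, 150, 255, 80);
  world.add(water);

  avatar = new FCircle(20);     // stands in for the Haply end-effector avatar
  avatar.setPosition(300, 100);
  world.add(avatar);
}

void draw() {
  background(255);
  // Drag opposes velocity only while the avatar overlaps the water region.
  if (avatar.isTouchingBody(water)) {
    avatar.addForce(-DRAG * avatar.getVelocityX(),
                    -DRAG * avatar.getVelocityY());
  }
  world.step();
  world.draw();
}
```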

We also noticed that we could mostly feel haptic feedback only when the end-effector was pushing an object against something else. For instance, if I just pushed the ball through the water, I could not feel any force until the ball was pressed against a wall; there was no haptic feedback when I was interacting with the ball alone.

The maze example we explored for fluid properties

Running along a wall:

One of the sensations we wanted to explore was running the end-effector across a gradated surface, somewhat inspired by stress-relieving games where you cut grass and the like, so we laid out a series of walls in a square pattern, spaced apart.

We initially tried spacing them all equally, which gave a bumpy sensation when running the end-effector across one row.

Equally spaced gradation

However, we felt this haptic experience was not particularly ‘therapeutic’ or interesting. So we then tried unequal spacing to see if the transition from slow to fast movement would make a difference, since it feels like you are moving slowly when the walls are spaced further apart, as opposed to when they are closer together.

Unequal gradation

For this particular experience, I had to map out the position of each wall. Initially, since we just wanted to test the experience, I wrote very basic code that set the coordinates for each wall manually and ended up with 300 lines of redundant code. Once we had finished this interaction and moved towards a more final form, I added functions and loops to reduce it to a block of around 30 lines; this was a fun part, as it refreshed my basic coding skills.
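
A minimal sketch of what that refactor looks like, assuming a helper along these lines (the names, sizes, and spacing values are illustrative, not our exact code): each row of walls becomes one loop, and a growth term on the gap produces the unequal gradation.

```java
import fisica.*;

FWorld world;

// One row of static walls starting at (x0, y); the gap after each wall
// grows by `growth`. Setting growth to 0 recovers the equal spacing.
void addWallRow(float x0, float y, int count, float gap0, float growth) {
  float x = x0;
  float gap = gap0;
  for (int i = 0; i < count; i++) {
    FBox wall = new FBox(6, 40);  // thin vertical wall segment
    wall.setPosition(x, y);
    wall.setStatic(true);
    world.add(wall);
    x += gap;
    gap += growth;
  }
}

void setup() {
  size(600, 400);
  Fisica.init(this);
  world = new FWorld();
  addWallRow(60, 300, 12, 15, 4);  // bottom row, increasingly spaced
}

void draw() {
  background(255);
  world.step();
  world.draw();
}
```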

I really liked this interaction: I personally found it calming to run the end-effector across the walls, and the added sensation of slowing down and speeding up makes it feel like there is a start and a finish to the sensation, as opposed to the equal gradation, which kind of just keeps going.

One thing we did notice, though, was that the feedback was much more prominent on the bottom row of walls than on the vertical or top rows. I believe it could partly have to do with the Haply’s slope itself: the surface slopes downwards, leading to a naturally stronger push along the bottom wall and thus stronger feedback. It also felt more natural to push along the bottom wall than the side or top ones, something worth exploring in our next iteration.

We also explored the ball’s motion, to see if interacting with the ball could be an interesting experience. We played with the gravity to make the ball more or less bouncy, but then realised the ball by itself provided no feedback; the experience was more visual than haptic, so we didn’t explore it further.

Another interesting finding here was that we could actually move the walls with the mouse, even if they were set to a static position, promptly breaking the program and generating a bunch of errors. We realised this was not the expected behavior and so didn’t do that again :’) but it was a good thing to know.

Also interestingly, the walls were initially spaced a bit closer together, which gave me quite good haptic feedback of the changing gradation. Raquel, however, couldn’t really distinguish the gradation difference until we increased the spacing. This raises the question of how much variation due to the device we can tolerate before it starts affecting the haptic experience. This will be something we keep in mind going ahead.

Squishing a blob:

We came up with the idea of a squishing sensation, like a stress ball or squishing tomatoes, which could be calming in its own way. We were also partly inspired by Kattie’s maze code, where she had implemented the FBlob object. We tweaked the code so the blob became movable, but then realised the blob by itself did not provide any haptic feedback on interaction (similar to the ball, which was an FCircle object).

We could also push the end-effector completely inside the blob, although this slowed down the program; even otherwise, the blob was quite computationally expensive compared to the other Fisica objects. And despite our adding several properties like friction, density, and restitution, the blob simply did not provide any haptic feedback by itself, so we decided to place walls behind it. While the user does not see these walls, they do the job of providing the force feedback, whereas the blob provides the (pretty accurate) visual feedback of a squishy surface being pressed. We also added a squishy sound to further augment the haptic feedback produced when the end-effector touches the walls.
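
A rough sketch of that setup, using Fisica’s standard calls (the sizes and positions here are illustrative, not our exact code): the FBlob supplies the squishy visual, while an invisible static FBox placed behind it supplies the actual collision force. The squishy sound would be triggered from a contact callback, like the pop sound shown later.

```java
import fisica.*;

FWorld world;

void setup() {
  size(600, 400);
  Fisica.init(this);
  world = new FWorld();
  world.setEdges();

  // Visual layer: the blob deforms nicely but gave us no force feedback.
  FBlob blob = new FBlob();
  blob.setAsCircle(300, 250, 120);
  blob.setFill(120, 200, 120);
  world.add(blob);

  // Force layer: the hidden wall the end-effector actually presses against.
  FBox backing = new FBox(140, 20);
  backing.setPosition(300, 300);
  backing.setStatic(true);
  backing.setNoFill();
  backing.setNoStroke();
  world.add(backing);
}

void draw() {
  background(255);
  world.step();
  world.draw();
}
```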

The squishy experience feels like pressing a ball of slime: you see the surface deform like a squishy substance, hear the sound of an object being squished, and feel force feedback as you would from a slightly bouncy surface.

Popping:

The popping sensation was another one driven by existing stress-relieving games, where users can pop bubbles and the like. We were curious to see how popping bubbles would feel with haptic feedback. Initially, we implemented popping by encasing a movable ball object between walls; pressing the ball against a wall caused the ‘pop’ effect when the ball got pushed out of the way. We then made the walls invisible and coded the ball to disappear when the end-effector touched the wall after pushing the ball out of the way, creating more realistic visual feedback for the popping.

Ball popping with visible walls

We also added sound effects for the popping, and that was an area where we spent quite some time, since we kept running into latency issues: the visual of the bubble popping was way faster than the sound. We tried introducing delays for the visual, using the delay function as well as just adding a for loop, but neither worked. Ultimately we found that a WAV version reduced the latency by a lot compared to our original MP3 version, so we switched to that. We also noticed that the latency was very much machine-dependent; based on how many other processes were running at the time, and the general processing speed of the machine, the delay was larger or smaller. The sound effects did add to the haptics though, so it was a big win.

There was one issue, however: since we had added our audio in the runnable (continuously executing) part of the code, it kept playing repeatedly, whereas we only needed it to play when the ball was popped. To sort this out, we simply moved the audio into a function and called it on contact.
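
A minimal reconstruction of that fix, assuming the Processing Sound library (the library choice, file name, and body names are illustrative): the sound lives in its own function and is called from Fisica’s contactStarted() callback, so it plays once per contact instead of looping from the run loop.

```java
import fisica.*;
import processing.sound.*;

FWorld world;
FCircle ball;
SoundFile popSound;

void setup() {
  size(600, 400);
  Fisica.init(this);
  world = new FWorld();
  world.setEdges();

  popSound = new SoundFile(this, "pop.wav");  // WAV gave us far lower latency than MP3

  ball = new FCircle(40);
  ball.setPosition(300, 100);
  ball.setName("ball");
  world.add(ball);
}

void draw() {
  background(255);
  world.step();    // note: no audio calls anywhere in the loop
  world.draw();
}

// Fisica calls this once when two bodies first touch.
void contactStarted(FContact contact) {
  if ("ball".equals(contact.getBody1().getName()) ||
      "ball".equals(contact.getBody2().getName())) {
    playPop();
  }
}

void playPop() {
  popSound.play();  // one-shot playback
}
```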

Sri then found the setSensor function, which meant we could get rid of the walls and simply use a static object that disappears after a short interval, producing the force feedback of an object popping. This greatly enhanced the sensation as well as the visual feedback, and we were then also able to add an image of a bubble instead of just shapes and colours, which further enhanced the haptic experience. However, we still need to make the image disappear in sync with the audio, so we just used a shape for the demo.
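
Roughly, the sensor-based version looks like the sketch below (our reconstruction; the delay value and all names are illustrative). setSensor(true) makes the bubble report contacts without producing a collision response, and a short timer delays the removal so the disappearance reads as a pop; the sound would be triggered from the same callback as above. The invisible reset circle we mention below can be added the same way, as another sensor whose contact callback re-creates the bubble.

```java
import fisica.*;

FWorld world;
FCircle bubble, avatar;
int popStarted = -1;          // millis() when contact began; -1 = not popping
final int POP_DELAY = 150;    // ms before the bubble disappears

void setup() {
  size(600, 400);
  Fisica.init(this);
  world = new FWorld();
  world.setEdges();

  bubble = new FCircle(60);
  bubble.setPosition(300, 200);
  bubble.setStatic(true);
  bubble.setSensor(true);     // reports contacts but no collision response
  bubble.setName("bubble");
  world.add(bubble);

  avatar = new FCircle(20);   // stands in for the Haply end-effector
  avatar.setPosition(100, 200);
  world.add(avatar);
}

void draw() {
  background(255);
  // Delayed removal makes the disappearance read as a "pop".
  if (popStarted >= 0 && millis() - popStarted > POP_DELAY) {
    world.remove(bubble);
    popStarted = -1;
  }
  world.step();
  world.draw();
}

void contactStarted(FContact contact) {
  if ("bubble".equals(contact.getBody1().getName()) ||
      "bubble".equals(contact.getBody2().getName())) {
    popStarted = millis();    // this is also where the pop sound would play
  }
}
```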

Bubble pop with sound

I also made a balloon version :’) just because it looked like more fun, and changed the end-effector icon to a thumbtack so the ‘popping’ looks more realistic. We also added a reset option for testing, so we wouldn’t need to restart the sketch each time: an extra circle that resets the environment on touch, which we made invisible.

However (this being one of the cons of working on code simultaneously), we couldn’t integrate that version and just went with our initial setup. The haptic experience feels like a bubble popping: there is a quick, instantaneous burst of force feedback, as if something was ‘popped’ or suddenly disappeared, and the audio and visual components augment the experience.

Balloon version (no sound)

Vibrotactile motion:

Raquel had used the non-Fisica example for the maze lab, and she mentioned how it felt quite different from the Fisica one in terms of rigidity. So we explored that sensation, and found that the non-Fisica walls were actually more flexible and had many more configurable properties.

One of those, kwall, is what caught our attention. By increasing kwall by a large amount, we could actually generate vibrotactile feedback along the wall length; the motion seemed akin to the kind of feedback a massager would give, which can have a calming effect. By running the end-effector across the wall lightly, we could feel a kind of vibratory, bouncy motion, also echoed on the GUI. We also noticed that if we pressed against the wall hard enough, the resulting feedback would be just as strong, flinging the end-effector back as if a compressed spring had been released.
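
The non-Fisica wall boils down to a spring law, which is why the stiffness is directly tunable. The fragment below follows the naming style of the Haply lab sketches (posEE, fEE, kWall), but the values, sign convention, and the mouse stand-in for the device are our own assumptions: the wall pushes back proportionally to how far the end-effector has penetrated it. Our guess is that at very high stiffness, the discrete simulation steps overshoot the rest position each frame, and that oscillation is what we feel as vibration.

```java
PVector posEE   = new PVector(0, 0);   // end-effector position (normally read from the Haply)
PVector fEE     = new PVector(0, 0);   // force to send back to the device
PVector penWall = new PVector(0, 0);
float posWallY = 0.10;                 // wall position in device coordinates (m)
float rEE      = 0.006;                // end-effector radius (m)
float kWall    = 450;                  // N/m; cranking this way up gave the vibrotactile buzz

void setup() {
  size(400, 400);
}

void draw() {
  // Stand-in for the device read: drive the "end-effector" with the mouse.
  posEE.set(0, map(mouseY, 0, height, 0, 0.15));

  // F = -kWall * penetration, applied only while inside the wall.
  fEE.set(0, 0);
  penWall.set(0, posWallY - (posEE.y + rEE));  // negative = penetrating
  if (penWall.y < 0) {
    fEE.set(0, kWall * penWall.y);             // proportional push back out
  }
  // On the real setup, fEE would be converted to motor torques via the hAPI.

  background(255);
  float wallPx = map(posWallY, 0, 0.15, 0, height);
  line(0, wallPx, width, wallPx);
}
```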

Showing all three motions for the wall

Combining the modes:

Ultimately, we created a single application where the user can switch modes to experience each of these sensations. Since this is just an initial iteration of the haptic sensations and not the GUI we will ultimately end up with, we did not give much thought to the design or layout, focusing instead on making the sensations as distinct and clear as possible.

We had a great time integrating the code. Each of us had been working on almost all the sensations, although we each chose one to focus on in more detail, which meant we all had different versions of the code at any given time. We made it a point to check in on who was working on what, and to declare if we were working on something someone else might also be working on. We also shared whenever we pushed code, to make sure we all had the same updated version at all times. Even so, we ended up with some conflicts and a messy repository, but we mostly kept the multiple versions now seen there, as they contain our previous not-so-great-but-might-be-useful-later sensations. We do need to rename them in a better way though :’)

The final version:

Reflections:

I’ve interspersed most of my reflections for each experience within those sections themselves instead of writing them all here, but here are a few general reflections:

  1. Even with the same device and the same code, one cannot necessarily reproduce the same haptic experience.
  2. We probably need to find a better way of organizing our code in the repo to make collaboration easier.
  3. While we have kept the haptic feedback as the core component of all our interactions, the audio and visual feedback did augment them significantly — incorporating them in the end project will be pretty useful.
  4. Multiple modalities need to be precisely synchronized; we initially had the audio, visual, and haptic feedback for the popping interaction each lagging independently, which ruined the experience. Audio and visual modalities can also be machine-dependent; for instance, the audio on my machine was delayed by far more than on Raquel’s, which is also something to keep in mind.

Are the Haplys different?

Sri recently got a replacement Haply, which ended up being the purple one. He reported that it felt much smoother and even had some minute structural differences compared to the black one (for instance, the end-effector actually touches the plate on the purple one). This may partially explain why Raquel and I felt quite differently about some of the sensations: Raquel often had to increase the forces or spacing by quite a bit to feel haptic feedback similar to what I felt, although we only communicated that verbally, so who knows, it might just have been in our heads. But whether the base devices themselves provide different experiences is certainly something we’d like to look into.

Contribution:

  • Brainstorming for different potential interactions
  • Coding and testing, mainly the popping and gradation examples, both independently and collaboratively

What’s next?

For our next iteration, we have a couple of milestones:

  1. Refine the current experiences, make them more high-fidelity, and iron out the technical glitches
  2. Test these refined experiences with the people around us, and see if the sensations make sense to them
  3. Start iterating on a complete mode (art/game/music). Explore these three modes by brainstorming together (which we have found works much better than thinking of ideas independently and reporting back) and try implementing those ideas.
  4. Test out these modes wizard-of-oz style with people around us. Use the feedback for iteration 3, where we will move ahead with any one of these concepts towards a final, refined product.


Written by Unma Desai

CS graduate student at the University of British Columbia, with a focus on Human-Computer Interaction
