Team Anxious, Project Iteration 2
Code here: https://github.com/sriGanna/teamAnxious/tree/main/Iteration2
As a quick recap, for our first iteration, we decided to explore the interaction space with the Haply, figuring out all the different haptic interactions that users could have with the device and interface. We kept the visual elements to a minimum and focused on deriving as many unique haptic sensations as we could that might be interesting from a reducing-anxiety standpoint.
For our second iteration, we decided to refine the sensations we had derived so far, adding more visual and audio elements and creating environments for the user to interact with.
We decided to collaborate through this iteration as well, since we had discovered this style of working gets us the best results. We brainstormed across stages together, then each worked partly independently on particular sensations while testing the environments together within the team.
For this iteration we went through a couple of stages:
1. Fixing interactions from Iteration 1:
We wanted to fix certain technical issues with the interactions before refining them. Sri and I worked on the Popping sensation, synchronizing the audio and visual feedback, which was still lagging. Instead of making the popped bubble invisible, we simply removed it, and found that this reduced the lag considerably.
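The fix above can be sketched roughly as follows. This is a minimal illustration in plain Java, not our actual sketch code, and the class and method names (Bubble, BubbleField, pop) are hypothetical: the point is that a popped bubble is removed from the list entirely, so later frames no longer draw or simulate it, instead of it being flagged invisible and still costing work every frame.

```java
import java.util.ArrayList;

// Hypothetical sketch of the fix: a popped bubble is dropped from the
// list, so the draw/physics loop simply never sees it again.
class Bubble {
    float x, y, r;
    Bubble(float x, float y, float r) { this.x = x; this.y = y; this.r = r; }
}

class BubbleField {
    ArrayList<Bubble> bubbles = new ArrayList<>();

    // Called when the end effector pops a bubble: remove it outright
    // rather than toggling a 'visible' flag on it.
    void pop(Bubble b) {
        bubbles.remove(b);
    }

    // Only the remaining bubbles are drawn and simulated each frame.
    int activeCount() { return bubbles.size(); }
}
```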
2. Testing out interactions from Iteration 1:
We decided to perform user testing for the interactions developed during Iteration 1, using free word association. We wanted to find out what emotions people associated with the haptic interactions and whether they found any of them particularly calming.
We created a test environment encapsulating all the interactions and sent it out over the CanHap course Discord channel to recruit participants from the course. We also wanted a mix of haptic/non-haptic participants, so Raquel and Sri asked their housemates/family members, who had no prior haptic knowledge, to participate.
We brainstormed over the questions we would like to ask participants and then created a Google Forms survey.
We asked users to freely explore the environment to see if they could find any new haptic interactions, as well as a guided version where we instructed them to interact with the environment in a certain way. There were pre-test questions about anxiety and post-test questions asking whether they found any of the interactions calming and what methods they currently use to cope with their anxiety.
Summary of Results:
We had a total of 5 participants who responded, two familiar with haptics and from the course, and three with no familiarity. All participants indicated they had experienced anxiety/stress often or very often in the past six months, and used breathing/meditation/music or other distractions or positive affirmations to cope with it.
Results were quite varied for certain interactions, like running across a gradated wall (interaction 1), breaking the wall (interaction 4) or pushing through a wall and letting go (interaction 5). Participants gave mixed one-word answers, like ‘uncertain’, ‘entrapped’, ‘amusing’.
For higher-fidelity interactions like popping (interaction 6) or squishing the ball (interaction 7), which had both visual and audio effects, participants had more of a unified reaction, with some saying they found the popping one ‘surprising’ and the squishing one ‘gross’.
At the end of the survey, we asked participants how calming each interaction felt, using a 7-point Likert scale from strongly disagree to strongly agree. Responses were generally split across the board, with interaction 4 (breaking through the wall) feeling the most calming to people.
The other interactions did not feel particularly calming, which might be because we were only testing haptic feedback, as opposed to integrated, higher-fidelity audio and visual feedback as well. Interactions 2 and 7 (pushing a ball against a gradated wall, and squishing) felt the most disturbing, which might be attributed to a poor sense of user control for the former (‘Stressed. I can’t establish a push as I want, as the ball keeps “moving out” from the gaps’) and an annoying squelching sound for the latter (‘The haptic sensation alone makes me feel a bit intrigued, but with the squelching noise I get the gut feeling that whatever it is I probably don’t want it on me (or my end effector)’).
Moving ahead, we decided to add visual and audio elements and refine the interactions for iteration 2, providing a unified context and environment to see if that might change users’ perceptions.
3. Narrowing down on a ‘medium’:
When we started out with the project, we had multiple potential mediums we wanted to explore, ranging from art interventions to gaming environments, to musical scenes users could interact with.
We decided to converge on a particular medium to which we could then tie the haptic interactions and proceed from there.
We brainstormed on a mindmap to list out all the possibilities and decided a combination of an art-based and exploratory intervention seemed to support our idea and discovered interactions the best. We wanted users to be able to draw art free-hand as well as explore pre-built spaces.
4. Exploring the design space:
The design space for haptic interactions is extremely large, and so we decided to brainstorm a bit and try to categorize various parameters so we could narrow down which types of interactions we might want to include. We used Miro to create a collaborative board where we put up all the concepts we could think of and then created an affinity diagram:
By creating this diagram, we were able to find parameters we could integrate into our application, and ensure we were covering as many parameters as we wanted, avoiding repetition or skipping ones that were important.
5. Sketching:
Our final application aims to let the user create art through the interactions, in the belief that the act of creating art would be therapeutic, and that the resulting art would serve as an artifact the user could take away and return to for reflection in the long term.
We brainstormed together to try and come up with interactive environments to create art by integrating the sensations we had explored earlier.
We came up with concepts like slingshotting paintballs, popping paint-filled bubbles, trampolining across the canvas, and guided movement along a path, as ways of creating art through different interactions.
There were a couple of ways to categorize the ideas we thought of:
1. Active vs Passive user mode: The Active mode would be where the user would be given a blank canvas, and they would be free to interact with it and draw on the canvas free-hand, as they pleased.
We thought of introducing colour palettes, creating different brush strokes for different emotions, and varying the haptic feedback with the intensity of emotion, giving the user the agency to choose which palette/brush they’d want to use. However, attaching emotions to colours from the start would be presumptuous, and this mode would require a lot of effort, so we postponed it to the next iteration.
The Passive mode would have the user interact with a set of elements already on the canvas, creating art through different actions like popping bubbles or slingshotting paint.
2. Momentary vs Guided interactions: We wanted users to be able to use the intervention however they liked, but also wanted to support users who might prefer a more guided version with defined steps. Hence, we decided to incorporate two modes: a momentary mode, where users draw free-hand or interact with the elements however they’d like, and a guided mode, where users follow a step-by-step process through the environments (e.g. pop 10 bubbles, then slingshot 4 red balls).
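The guided mode above could be modelled as a simple ordered queue of step descriptions that the user works through. This is only an illustrative sketch in plain Java, with hypothetical names (GuidedSession, addStep, completeStep), not the code we plan to write:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical model of the guided mode: an ordered queue of step
// descriptions ("pop 10 bubbles", "slingshot 4 red balls", ...) that
// the user completes one at a time.
class GuidedSession {
    private final Deque<String> steps = new ArrayDeque<>();

    void addStep(String description) { steps.addLast(description); }

    // The instruction the user should follow now, or null when done.
    String currentStep() { return steps.peekFirst(); }

    // Called when the user has finished the current step.
    void completeStep() { steps.pollFirst(); }

    boolean finished() { return steps.isEmpty(); }
}
```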
I sketched out the environments we had discussed using Procreate just so we’d all have a clear idea of what we were trying to make:
For this iteration, we decided to focus on the momentary passive mode — creating environments where users can freely interact with already-existing elements.
We built three environments:
1. Slingshot
2. Popping bubbles
3. Bouncing blobs
6. Developing the environments:
As each member had a functional Haply this time, we were able to divide the work more easily, and each of us took up a particular environment to build. There was some code overlap between the environments, so we troubleshot together and reused code where possible.
Raquel worked on the bouncing blobs environment, Sri worked on the slingshot environment, and I worked on the popping bubbles one.
The bubble environment had been in good working condition since our last iteration; I simply needed to add the visual artistic elements.
Sri found a Splat class that would help us create paint splatters:
The bubble example from the last iteration had only a single bubble, so I had to create multiple bubbles, as well as figure out their locations:
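One way to figure out the locations is rejection sampling: pick random positions on the canvas and discard any that would overlap an already-placed bubble. This is a hedged sketch in plain Java, not our actual placement code, and the names (BubblePlacer, place) and the equal-radius assumption are illustrative:

```java
import java.util.ArrayList;
import java.util.Random;

// Hypothetical placement helper: scatter n equal-radius bubbles across
// a w-by-h canvas, rejecting candidate positions that overlap an
// already-placed bubble.
class BubblePlacer {
    static ArrayList<float[]> place(int n, float radius, float w, float h, long seed) {
        Random rng = new Random(seed);
        ArrayList<float[]> centers = new ArrayList<>();
        while (centers.size() < n) {
            // Keep the whole bubble on-canvas.
            float x = radius + rng.nextFloat() * (w - 2 * radius);
            float y = radius + rng.nextFloat() * (h - 2 * radius);
            boolean clear = true;
            for (float[] c : centers) {
                float dx = c[0] - x, dy = c[1] - y;
                // Overlap if centre distance < 2 * radius.
                if (dx * dx + dy * dy < 4 * radius * radius) { clear = false; break; }
            }
            if (clear) centers.add(new float[]{x, y});
        }
        return centers;
    }
}
```

Rejection sampling is fine for a handful of bubbles; for a dense canvas a grid-jitter layout would avoid the risk of many rejected samples.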
Sri had already created checkSplat and animateSplat functions to detect when the end effector touches the canvas and to create a paint splatter. We troubleshot a bit collaboratively: the splatter kept reappearing every time the program refreshed, so we would end up with a pile of splatters at one point, which we solved with a simple boolean check. I re-used the same code, but added a delay for the bubble so there is a short burst of haptic feedback the user can feel before the bubble pops (to simulate the real-world feeling of popping bubble wrap). I also passed the bubble’s radius when creating the splat, so differently-sized bubbles leave differently-sized splatters when popped:
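The two fixes described above can be sketched together: a boolean flag so the splatter is created only once, and a contact timer so the pop only fires after a short delay of haptic resistance. This is an illustrative plain-Java sketch, not the project code; PoppableBubble, update, and the 150 ms delay are all assumptions:

```java
// Hypothetical sketch of the boolean check plus pop delay. The caller
// feeds in whether the end effector is touching the bubble each frame.
class PoppableBubble {
    float radius;               // passed to the splat so splatter size matches
    boolean touched = false;    // end effector currently inside the bubble
    boolean splatted = false;   // boolean check: splat already created once
    long touchStart = -1;       // time (ms) when contact began
    static final long POP_DELAY_MS = 150; // assumed delay before the pop

    PoppableBubble(float radius) { this.radius = radius; }

    // Called once per frame; returns true exactly once, when the bubble
    // pops, so the caller can create a single splat sized by 'radius'.
    boolean update(boolean inContact, long nowMs) {
        if (inContact && !touched) { touched = true; touchStart = nowMs; }
        if (!inContact) { touched = false; touchStart = -1; }
        if (touched && !splatted && nowMs - touchStart >= POP_DELAY_MS) {
            splatted = true;  // never re-created on later frames
            return true;
        }
        return false;
    }
}
```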
I also added a Reset button that allows users to refresh the environment once all the bubbles have been popped, or to simply reset and get a new set of bubbles in different colours:
The randomize function is something I created to generate new colour hex codes each time, so the user keeps getting new shades of bubbles. It can also be used later when incorporating palettes or the guided mode:
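A function like that can be sketched by drawing one random value per colour channel and formatting them as a six-digit hex code. This is a hedged plain-Java illustration of the idea, not the actual randomize implementation; the class name and seeding are assumptions:

```java
import java.util.Random;

// Hypothetical version of the randomize() idea: produce a fresh
// "#RRGGBB" hex colour code on every call, so each reset can hand out
// new bubble shades.
class ColourRandomizer {
    private final Random rng;
    ColourRandomizer(long seed) { rng = new Random(seed); }

    String randomize() {
        // One 0-255 value per channel covers the full 24-bit range;
        // %02X zero-pads each channel to two uppercase hex digits.
        return String.format("#%02X%02X%02X",
                rng.nextInt(256), rng.nextInt(256), rng.nextInt(256));
    }
}
```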
This is what the environment ended up looking like:
Video:
The other environments:
Reflections:
- Debugging takes a longggg time. We took quite a bit of time to fix the minor synchronization issue for the popping interaction, proving that it’s always the small details that take the longest.
- User testing is NECESSARY. We were not expecting most of the user feedback we received, and the results, while not unsatisfactory, were certainly surprising. The user testing definitely helped improve this iteration, since we could keep the feedback in mind while developing, and we aim to test this iteration and the final one with users as well.
- The haptic design space is huge. Organizing the parameters to create the affinity diagram was a long, and at times, arduous task. We were able to list parameters comparatively easily, but organizing them was quite complicated.
- Scoping down is necessary sometimes. Starting out with multiple mediums, we narrowed it down to one medium of art+exploration. From wanting to integrate multiple modes like free-hand and guided, we decided to focus first on a momentary passive user mode. Scoping down our design goals helped us work more efficiently since we had a much more achievable and concrete milestone to reach for this iteration. We discarded some ideas like a gaming/musical medium, and other modes will be developed in the next iteration.
- With three working Haplys this time, it became easier to develop independently. Testing collaboratively was still fun, however: by asking team members to test iterations of our work, we were able to give each other feedback and ideas on how to improve the interactions/environments.
- Audio/visual feedback makes a world of difference. When comparing the end sensations we have now as opposed to what we had at the end of iteration 1, there is a considerable difference in how pleasant/interactive the experience is. The haptic feedback is still primary for the interaction, but the audio and visual aspects make it much more pleasant, giving it some much-needed context.
Contribution:
Collaboratively:
1. Fixing interaction from iteration 1
2. Developing user test questionnaire
3. Brainstorming to scope down on medium
4. Exploring and parameterizing the design space
5. Brainstorming on various environments/modes we could create
6. Testing environment iterations
Independently:
1. Sketching potential environments
2. Developing Bubble Pop environment
What’s next?
- Test the second iteration output with users.
- Develop a guided mode where users walk through a set of interactions in a pre-determined format.
- Develop a free-hand mode where users can draw freely on the canvas using various palettes/brush strokes.
- Allow the art to be stored, possibly also storing the user’s movements and feedback to allow replay of the haptic experience.
- Test the ‘final’ application with users.
- Work on the final report.