CanHaptics Lab 3: Communicating something with Haply
Create a small vocabulary of communicative movements with three “words”
The communication should be physical-only, no audio or graphical feedback.
Use one sketch (i.e. physical model / environment / piece of code) to communicate all three terms
For this last lab, I paired up with Kattie, and we came up with 5 words (the brief asked for 3, but the additional implementations added to the lab, so we went ahead with them) to describe the travel experiences of Dina.
Source code here: https://github.com/unmadesai/CanHaptics/tree/main/lab3/sketch_4_Wall_Physics
Dina is an astronaut and will be traveling through space portals in her quest to find a planet with dinosaurs still alive.
We decided to create a set of ‘portals’ each of which would embody one of the ‘words’.
The words we decided upon were:
1) Danger
2) Stuck
3) Fast
4) Slow
5) Turbulence
First iteration: 3 emotions, some visual feedback
We were initially going to proceed with emotions, in the context of the valence-arousal scale, but couldn't find movements that really represented the words. Angry, excited, and nervous all seemed best conveyed through vibrotactile feedback, yet the differences between them were very subtle (as we have also learnt through the course readings), and the constraint of having no visual/audio feedback made these words even harder to tell apart.
We still decided to give it a try, by selecting this set of emotions:
1. Angry/excited
2. Nervous/Anxious
3. Calm/Relaxed
We created an environment with three regions, one for each emotion:
The red portal 1 was meant to convey angry/excited, the sea-green portal 2 nervous/anxious, and the neon-green portal 3 calm/relaxed.
To implement this, we created high-frequency, high-amplitude vibrations in the region of portal 1, lower-amplitude but still high-frequency vibrations in the region of portal 2, and increased the damping of the end-effector in the region of portal 3. The Haply would thus vibrate strongly in portal 1, as if angry or excited, vibrate with lower amplitude but still high frequency in portal 2, as if nervous, and simply move slower and smoother than usual in portal 3, as if calm and relaxed.
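As a rough illustration, the region logic could look something like the snippet below. This is a simplified sketch rather than the exact code from our repo: it assumes the standard Haply lab scaffolding (s being the HVirtualCoupling, with portal1, portal2 and portal3 as invisible sensor boxes marking the regions), it uses random jitter as one simple way to approximate the vibrations (our actual portal 1 uses the two-wall bounce described later), and all the numbers are illustrative.

```
// Simplified sketch of the first-iteration region logic (illustrative values).
// Assumes the Haply lab scaffolding: s is the HVirtualCoupling and portal1/2/3
// are invisible sensor FBoxes. Called from the 1 kHz simulation thread,
// before world.step().
void applyPortalEffects() {
  if (s.h_avatar.isTouchingBody(portal1)) {
    // angry/excited: large random jitter -> high-amplitude, high-frequency shaking
    s.h_avatar.setVelocity(random(-40, 40), random(-40, 40));
  } else if (s.h_avatar.isTouchingBody(portal2)) {
    // nervous/anxious: jitter at the same rate, but with a much smaller amplitude
    s.h_avatar.setVelocity(random(-8, 8), random(-8, 8));
  } else if (s.h_avatar.isTouchingBody(portal3)) {
    // calm/relaxed: heavy damping slows every motion down
    s.h_avatar.setDamping(700);
  } else {
    s.h_avatar.setDamping(4);  // default damping outside the portals
  }
}
```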
We hid the portals so all the user could see was:
This way, we hoped the user could focus purely on the haptic experience for each portal. However, we did not factor in that users would still be able to see the Haply end-effector avatar (in this case, the dinosaur) move. This visual feedback proved incredibly helpful to our test users in choosing the words they gave.
Yet, when we tested this out with people, we got the following responses:
As expected, the distinction between angry and nervous was not easy to make, and we saw users getting confused between the two.
Users also mentioned how they found it difficult to come up with words for the emotions.
Second iteration: 3 emotions, no visual feedback
We then decided to take away the visual aspect. By creating additional walls and hiding the avatar on contact, we were able to make the avatar disappear inside any of the portal regions.
This made the experience much more 'haptics-centered', with only physical feedback, compared to our previous iteration. The GUI still looked the same, only now the avatar would disappear within the portals. Since we kept this method for the final iteration as well, we only took one final video, attached later.
We tested this out as well, and unsurprisingly, the response was more varied:
Since the avatar now disappeared, the user could not tell where/when they were inside the portal, and only felt the haptic experience, leading to words like ‘terror’ and ‘suspense’. We were also surprised to see the low-frequency movement be captured as ‘relief/safe’.
Third iteration: 5 movement-based words, no visual feedback
Given the uncertainty and limited vocabulary around emotions, we switched to words that describe the 'travel', or the movement, within each 'portal', each of which is actually a wall or set of walls with certain properties set on them.
This is what the environment actually looks like, if we make the walls visible:
We kept the initial three movements, but now associated movement-based words instead of emotions.
Thus,
1. Angry/excited → Danger (Portal 1)
2. Nervous/Anxious → Turbulence (Portal 5)
3. Calm/Relaxed → Slow (Portal 4)
We also added two more words:
4. Fast (by adding a velocity component, Portal 3)
5. Stuck (by adding a static timer component, Portal 2)
Portal 1: Danger
For portal 1, we created two walls a small distance apart; on contact with either wall, we shift the end-effector to the opposite side. By making the shift distance long enough to reach the other wall, we create an infinite loop in which the end-effector keeps moving from one end to the other at the 1 kHz simulation rate, producing high-frequency, high-amplitude horizontal movements.
We hoped this would simulate the feeling of danger or trouble. This is also what we had used to show anger/excitement in our previous iteration. The below videos were captured with visual feedback, to show the movements and aid in a better understanding of the haptic experience for the reader, but the avatar was hidden within the portal regions when testing — the final video shows the hidden version.
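A minimal sketch of that bounce logic, assuming wall1 and wall2 are the two static boxes and with an illustrative shift distance, would be:

```
// Sketch of the "Danger" bounce; wall1/wall2 and the offset are illustrative.
// Runs inside the 1 kHz simulation thread, so the hand-off repeats every millisecond.
float bounceOffset = 2.5;  // shift just far enough to reach the opposite wall

void portalDanger() {
  if (s.h_avatar.isTouchingBody(wall1)) {
    // throw the end-effector towards wall2...
    s.h_avatar.setPosition(s.h_avatar.getX() + bounceOffset, s.h_avatar.getY());
  } else if (s.h_avatar.isTouchingBody(wall2)) {
    // ...which immediately throws it back towards wall1
    s.h_avatar.setPosition(s.h_avatar.getX() - bounceOffset, s.h_avatar.getY());
  }
}
```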
Portal 2: Stuck
For this region, we wanted to fix the end-effector in one place, making it look like Dina was stuck in the portal. We tried setting the damping very high, but that still allowed some movement rather than stopping it completely. We then considered creating an enclosed space where the avatar could be trapped, but that would mean the user would have to move into that very specific spot or be directed there, and we wanted the 'sticking' to happen more naturally. So we decided to make the end-effector static on touching the wall that marks the portal 2 region.
However, this raised the question of how the user frees themselves once stuck. That's when we decided to add a timer: the user now stays stuck for 8 seconds, during which they can try to break free, and then the program itself redirects the end-effector to another spot in space, freeing them; they can come back and get stuck again if they touch the portal wall again. This also somewhat resembled traveling between portals, entering through one and exiting through another, so we loved it :) (even though that visual feedback wasn't really supposed to be there).
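A sketch of that stuck-and-release logic, using Processing's millis() for the 8-second timer (stuckWall, freeX and freeY are illustrative names, and pinning via setStatic is one possible way to hold the avatar in place), looks like:

```
// Sketch of the "Stuck" portal: pin the avatar on contact, release after 8 seconds.
boolean stuck = false;
int stuckSince = 0;

void portalStuck() {
  if (!stuck && s.h_avatar.isTouchingBody(stuckWall)) {
    stuck = true;
    stuckSince = millis();
    s.h_avatar.setStatic(true);                 // Dina is pinned in place
  }
  if (stuck && millis() - stuckSince > 8000) {  // the 8-second trap is over
    s.h_avatar.setStatic(false);
    s.h_avatar.setPosition(freeX, freeY);       // "exit" through a different spot
    stuck = false;
  }
}
```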
Portal 3: Fast
For the third portal, we added a velocity component that displaces the avatar along the Y-axis, so it feels like you're moving very fast from one point to another. We chose the Y-axis because the speed wasn't as distinguishable along the X-axis.
So when the user touches the wall, the end-effector moves vertically downwards at 70 pixels per second to a point where it is no longer touching the wall, and then it stops.
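In sketch form (fastWall standing in for the portal 3 wall), the push is only applied while the avatar overlaps the wall:

```
// Sketch of the "Fast" portal: a constant downward velocity while inside the wall.
void portalFast() {
  if (s.h_avatar.isTouchingBody(fastWall)) {
    s.h_avatar.setVelocity(0, 70);  // +Y is downwards on screen; 70 units/s as above
  }
  // once the avatar clears the wall, the push is no longer applied and it comes to rest
}
```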
Portal 4: Slow
For portal 4, we wanted to slow down movement, and so added damping to the end-effector, which reduces the translational velocity of the body. This is also what we used to show calm/relaxed in our earlier iteration. Thus, for the entirety of the time that the end-effector is in the wall-space of portal 4, its speed is reduced drastically, leading to a slower, smoother, calmer motion. On exiting the boundaries of this wall, the avatar can again move normally.
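A minimal sketch of this, with illustrative damping values and slowWall standing in for the portal 4 region:

```
// Sketch of the "Slow" portal: heavy damping inside the region, reset outside.
void portalSlow() {
  if (s.h_avatar.isTouchingBody(slowWall)) {
    s.h_avatar.setDamping(800);  // drag Dina through the thick "portal soup"
  } else {
    s.h_avatar.setDamping(4);    // back to normal movement outside portal 4
  }
}
```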
Portal 5: Turbulence
For portal 5, we wanted to simulate turbulence, taking inspiration from the lab 2 Hello Wall experiment. This is also what we used to show nervousness/anxiety in our previous iteration. We experimented with damping and density to get a low-amplitude, high-frequency motion, but couldn't achieve it. We therefore turned to the Hello Wall code, whose kWall constant can be tuned to produce vibrations. Setting kWall to 200 created the vibrations we needed to simulate a turbulent, bouncy feel when traversing that portal region.
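In sketch form, adapted from the Hello Wall force-rendering loop (posEE, posWall, rEE and fEE come from the standard lab scaffolding, and this assumes the same wall-below-the-end-effector setup as that example):

```
// Sketch of the "Turbulence" force, in the style of the lab 2 Hello Wall example.
float kWall = 200;                    // the stiffness we settled on for portal 5
PVector fWall = new PVector(0, 0);
PVector penWall = new PVector(0, 0);

void portalTurbulence() {
  fWall.set(0, 0);
  penWall.set(0, posWall.y - (posEE.y + rEE));  // penetration depth into the wall
  if (penWall.y < 0) {
    fWall.add(penWall.mult(-kWall));            // spring force pushing back out
  }
  fEE.set(fWall.copy().mult(-1));               // reaction force rendered on the Haply
}
```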
Finally, to hide the visual element, we created a set of ‘super-walls’ that enclosed each region and hid the avatar whenever it was within the region.
For the walls which the avatar could enter, like for portal 3 and 4, we simply hid the avatar on contact with those walls.
To create the portal regions, we simply created walls of varying dimensions and set their fill to black so they wouldn't be visible against the background. Depending on whether we wanted the avatar to enter a wall, we set its Sensor property so the avatar would either collide with or pass into the wall region:
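In sketch form, a wall-building helper along these lines would do it (makePortalWall is a hypothetical helper, and since our sketch attaches the dinosaur image to the avatar, the exact hide/show calls may differ):

```
// Build an invisible portal wall; passThrough decides whether the avatar can enter it.
FBox makePortalWall(float w, float h, float x, float y, boolean passThrough) {
  FBox wall = new FBox(w, h);
  wall.setPosition(x, y);
  wall.setStatic(true);
  wall.setSensor(passThrough);  // true: avatar passes into the region; false: it collides
  wall.setFill(0, 0, 0);        // black on black, so nothing shows on screen
  wall.setNoStroke();
  world.add(wall);
  return wall;
}

// One possible way to hide the avatar while it is inside a portal region:
void hideAvatarIn(FBox region) {
  if (s.h_avatar.isTouchingBody(region)) {
    s.h_avatar.setNoFill();
    s.h_avatar.setNoStroke();
  }
}
```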
Final Testing Environment:
Final Testing:
We tested our final version, with no visual feedback, with 2 users: one a peer in the course and one a non-haptics individual.
We found that for 3 of our words, users were quite close to our actual words:
Danger, Slow, and Turbulence. For Fast, however, users weren't very accurate, which could be because the distance covered was short, making it a small, rapid motion that didn't really let them get a sense of the speed. Both users guessed Stuck exactly; it was also the easiest to guess, with almost no room for misinterpretation.
Reflections:
- Without visual feedback, things instantly turned confusing, allowing much more room for interpretation of the same experience, showcasing the value of other modalities in addition to haptics.
- The haptic interaction needs to be more distinct when there is no additional form of feedback. Without visual feedback to separate 'angry' from 'nervous', even I could not easily tell the two apart, so it was understandable that users got confused. Both did produce distinct audio feedback from the vibrations, which could not be removed and which could well have affected users' perception.
- Since we had performed a similar activity for the first iteration of our project (Raquel, Sri, and I), where we tried out various haptic interactions, this lab felt like an extension of it. However, there we focused on adding visual/audio feedback to the haptic interaction and analyzing the result, whereas here we were keen on removing all additional feedback.
- While the assignment was to use only physical feedback, there is still some visual feedback, since the avatar can still be seen outside of the portals. We kept this on purpose, as we wanted to retain the game-like experience and allow users to associate travel/movement-related words with the haptic experience for each portal.
- While exploring various ways to create haptic interactions, we ended up relying mainly on Fisica and FBox, since they provide many built-in functions we could use. We did notice, though, that documentation for the Fisica library is still lacking compared to most software libraries we have used so far. The course readings point to the same issue, a general lack of documentation in the haptics space, and we felt it here: many times we kept wondering how best to demonstrate a particular motion but ended up with quite rudimentary ways of showing it.
- Emotions are not as easy to convey as movement-related words, as demonstrated by both sets of user tests. This could be because movements are usually more standardized, whereas emotions are much more individual, which made it harder to get accurate answers in our first iteration.
- We used separate users across all iterations, since repeating users would mean they’d already be biased due to their previous experience; most of them were peers in the course, so thank you to Hannah, Preeti, Raquel and Rubia for testing this out!
Conclusion:
These labs were an incredibly fun way to implement what we learnt. I liked the freedom with each lab to come up with our own ideas, from people going in STEM directions to gamification to more technical stuff. I would have liked to explore more libraries outside of Fisica and the PID implementation; perhaps the project will give an opportunity for that. I also think I wouldn’t have been able to get this output if this lab had been at the earlier deadline — working on our project iteration really gave me considerable insight into thinking of new haptic interactions with the Haply. I have also enjoyed just incorporating small elements of gamification (most of our testers, us included, found Dina very cute :’)). And finally, it’s been great fun working with different partners and individually on these labs.
Dina icon credits: Freepik from www.flaticon.com