The eerie internet
'Tis the season for creepy apps, freaky robot interactions, and haunting anachronisms
Creepiness creep
What makes a type of technology creepy? If you draw a 2x2 matrix, with creepy and cool at opposite ends of one axis and humble and presumptuous on the other (as I've done many times in conference workshops), you'll see all the usual suspects. Wikipedia is usually one of the few in the humble-and-cool quadrant. Facebook and facial recognition land squarely in the creepy-and-presumptuous quadrant. You'd think this would be the worst place to be, with people abandoning those products in droves.
But that isn't always the case.
We know that apps collect all sorts of data about us, and that makes us feel uncomfortable. But just how uncomfortable? And what are the effects? In a new study, "Still Creepy After All These Years: The Normalization of Affective Discomfort in App Use", researchers from the University of Copenhagen proposed three criteria. To be creepy, an app needs to a) violate the boundaries of the user; b) do so unexpectedly; and c) possess ambiguity of threat. High scores in all three categories would amount to one very creepy app.
The researchers presented scenarios about a fictional app, Remember Music, to assess how people would feel. In one, your music would post to Facebook automatically. (Maybe this isn't so fictional? I still see all my "friends'" music on Spotify to this day, even though I'm pretty sure they made that choice long ago and forgot about it.) In another, participants could control whether the app posted or not. Turns out people felt it was creepy either way, and yet it didn't change how likely they were to use the app.
The researchers didn't theorize about why this might be, but I've got some theories:
You are likely to accept whatever terms and conditions there may be, because the alternative is not using it.
You agree to creepy data collection because those terms and conditions are ubiquitous.
You want to try out a new cool feature, app, or platform, so you agree to it with the intent of changing it later.
You forget about the data the app collects and displays even if you were once aware of it.
And just as humans become habituated to what once gave them joy, the same is true of what once creeped us out. Rather than hedonic adaptation, something like creepiness creep.
So people accept this uneasy feeling as part of the user experience. You would think that feeling chronically uneasy about products would spur a movement away from them. Not so: among the 751 people who participated in this study, the more digitally literate they were *laughs self-consciously*, the more likely they were to accept the invasiveness.
Researcher Irina Shklovski explains:
"Industry and public bodies will argue that this is a question of personal data hygiene. In other words, that as users become more digitally aware, they will favor less intrusive apps over the more intrusive. Based on the data from our study, we can say that trying to shift responsibility to the user in this way will not work."
In conclusion, apps should be designed to be less invasive! Oh, and there should be consumer protections (keep reading).
Feels on the Interwebz
It's not just about the smile, it's about who (or what) is smiling! When you see someone smile, it's likely to make you smile too, and not just out of polite social custom. According to the somatic marker hypothesis, seeing physical changes that indicate an emotion activates the hypothalamus, a brainy bit linked to social emotions. But does it work the same way when we see a robot smile? fMRI says nope, in research published in Frontiers in Psychology. A beaming bot doesn't hype up the hypothalamus, so a robot smile is not contagious for humans. Try harder, robots!
But wait: another study, published in Computers in Human Behavior, suggests that people felt supported after speaking to a virtual human. Just to be clear, virtual humans are computer-generated characters that respond in human-like ways, using emotional expressions and body language. In the experiment, participants spoke with Julie (a version of Ellie) about an event that worried them. After the conversation, about 40% of the 115 participants felt better for having shared and felt closer to Julie. I guess we can conclude that the more human it looks, the more we connect. And maybe there is less of an uncanny valley effect with a digital avatar than with a physical robot?
I promise, just one more robots-in-our-feelings story this week. In new research published in The Journal of Sex Research, about half of the 217 doll users surveyed considered their realistic sex doll to be their "ideal partner" and felt emotionally attached to it. The dolls seem to fill emotional needs for a subset of their users, not just sexual ones. Okay. But the study also found a link between problematic attitudes toward women and anthropomorphizing the dolls, so also not okay.
It could get stranger though. I mean, think about what we might have to do to get our feelings across in the metaverse.
Have you ever had the uncanny feeling of seeing the tech of this century in the artwork of a previous one? These anachronistic illusions are fairly common, so common that Motherboard has the beginnings of a series called Double Takes on the phenomenon. Call it the tech-time continuum that reinterprets old art through the lens of modern digital anxieties (as one does), or "the future is here, it's just not evenly distributed" (as William Gibson does). Either way, it seems we project our present worries onto the past in weird ways. ¯\_(ツ)_/¯
The EU has been working on regulations to prevent AI harms and hold companies accountable for them. Now the US is catching up! Just this week, the Biden administration released its Blueprint for an AI Bill of Rights, which includes five citizen protections:
Americans should be protected from unsafe or ineffective systems.
Algorithms should not be discriminatory and systems should be designed in an equitable way.
Citizens should have agency over their data and should be protected from abusive data practices through built-in safeguards.
Citizens should also know whenever an automated system is being used on them and understand how it contributes to outcomes.
People should always be able to opt out of AI systems and have access to remedies when there are problems.
Off to a good start. But what about creepiness?
Friday Feeling
This week, why not head over to Feels Guide on Medium, where I'm collecting all the feels that usually go right here.
🫣🫣🫣🫣🫣
That's all the feels for this week!
xoxo
Pamela