Brought to me -- and now to you -- by Vinh Giang, proxemics is the study of personal space: the distances we keep from one another. Right up in your face? That’s “intimate” distance. Close enough to touch? That’s “personal.” The appropriate distance for a colleague? That’s “social.”
Per Vinh, we’re Zooming wrong. Most of us let our face fill the screen -- but that’s way too close! That’s intimate! Why are we trying to be intimate with our co-workers? It doesn’t sit right! We don’t have that kind of relationship!
And it struck me: Right now, AI is trying to be intimate with us, in ways that don’t sit right. We don’t have that kind of relationship.
Take, for example, YouTube’s new generative comment features. Historically, YouTube offered content creators a few canned reply options for comments on videos, like “Hey, thanks!” or “Glad you like them!” Gmail does the same.
But now YT is on the gen AI enshittification train, and the comment section was the next stop. YouTuber Clint Basinger, who runs LazyGameReviews and LGR Blerbs, shared his back-end experience with AI-generated replies, and let’s just say they… aren’t good.
In one video, he prepares an energy drink that comes as a powder but can’t find the scoop he’s meant to use. A viewer commented, “I wouldn’t be surprised if the scoop was buried in the powder.”
The AI-suggested responses:
Say what now?
Here’s the thing: the canned replies were fine, because they weren’t trying to be anything other than canned replies. They were at social distance, the appropriate distance.
But these new replies are trying to be more intimate than is appropriate. In ways that don’t sit right. We don’t have that kind of relationship.
Basinger told Emanuel Maiberg, “I would not consider using this feature myself, now or in the future,” adding, “My creativity and craft stems completely from my own brain, and handing that off to some machine learning thing that mimics my style not only takes away from the enjoyment of it all for me, but it feels supremely disingenuous. The automated comments in particular come across as tone deaf, since a huge reason YouTube makes sense at all is the communication and relationship between audience and creator.”
Here’s the irony: Basinger and other creators have to reply to the comments because the algorithm rewards them when they do and punishes them when they don’t. So the autoreply feature is basically YouTube solving a problem of its own creation.
Maiberg also reported on a new piece of ad tech called Shop This Image. Made by a company called Trendii, it allows you to… well, shop the image, obviously.
Innocuous if the image is in an article called “Red Carpet Looks” or “Celebrities Are Just Like Us”. But Maiberg found it being used in darker ways, like on a Buzzfeed article called “17 Creepy, Disturbing, And Terrifying Things I Learned About This Month That I Really, Really, Really, Really Cannot Keep To Myself.”
A commenter on the Buzzfeed article said, “This man is killing his neighbors by slowly poisoning them with opioids, this is where you can get his outfit. This woman had both her arms and legs amputated from a dog lick, find out how you too can rock this look.”
Hey Trendii: this is not appropriate. It doesn’t sit right. We don’t have that kind of relationship.
Basically, what I’m describing is the uncanny valley: where something is close enough to the real thing that it isn’t obviously fake, yet still perceptibly not real. Where all our alarm bells start ringing because of the dissonance between what something purports to be and what it actually is.
But the bottom line is this: Gen AI has not yet earned the right of intimacy. So for now, I’d rather it keep its distance.