
The digital rights group Access Now is asking Spotify to promise not to use a “dangerous” new voice-recognition technology that Spotify claims can discern people's emotions.
“This technology is dangerous, a violation of
privacy and other human rights, and should be abandoned,” Access Now writes in a letter sent to Spotify late last week.
The letter comes several months after Spotify obtained a patent for a technology that purports to analyze users' voices and then suggest music based on their “emotional state, gender, age, or accent.”
In its letter, Access Now expresses skepticism about the accuracy of emotion-recognition technology, but says that if the technology works, Spotify would be “in a dangerous position of power” over its users.
“Spotify has an
incentive to manipulate a person’s emotions in a way that encourages them to continue listening to content on its platform -- which could look like playing on a person’s depression to keep
them depressed,” the group writes. “A private company should not wield this kind of responsibility over a person’s well-being.”
Access Now also argues the technology could facilitate gender discrimination and violate people's privacy.
“You cannot infer gender without discriminating against trans and non-binary people,” the organization
writes. “In addition, if you are categorizing people by their gender, you will create gender filter bubbles based on simplistic, outdated ideas of gender determinism. This means that men will
likely be nudged towards an exaggerated stereotype of ‘masculinity,’ and women will likely be prodded toward an extreme stereotype of ‘femininity.’”
The digital rights organization adds that if the technology is always listening, it's likely to pick up sensitive information.
“No one wants a machine listening in on their most intimate conversations,” Access Now says.
“This is a serious intrusion into your customers’ personal lives.”