TikTok, Snap, Twitter, and Instagram, along with lesser-known platforms such as Omegle, Monkey, and Kik, are among dozens of companies accused of “systematically endangering children online and breaching the UK’s new Children’s Code.”
The research, conducted by 5Rights Foundation, a children’s digital rights charity, was submitted to the Information Commissioner’s Office Friday as part of a complaint written by Baroness Beeban Kidron, the charity’s chair and the member of the House of Lords who initially proposed the Code, according to The Financial Times.
The document alleges that the companies use design tricks to encourage children to give up their privacy, such as sharing their location or accepting personalized advertising, and that data-driven features serve them harmful material. For example, the investigation found that Monkey, a video-chat social media app, uses pop-up memes to encourage users to give the app access to their location.
The harmful material includes content on topics such as eating disorders, self-harm, and suicide. The complaint also alleges that the apps offer insufficient assurance of a child’s age before allowing inappropriate actions such as video-chatting with strangers.
The UK began enforcing the Age Appropriate Design Code in August, after giving companies a year to prepare. Breaches carry the same potential penalties as the EU’s General Data Protection Regulation (GDPR), including fines of up to 4% of global turnover for companies that do not comply.
To conduct the study, researchers registered Android and iPhone devices as belonging to children aged eight, 13, and 15. Using an iCloud account registered to a 15-year-old, they downloaded 16 dating apps with a minimum age rating of 18 years, including Tinder, Happn, Find Me A Freak, and Bumble, from the Apple App Store. They did this simply by tapping “OK” to confirm being of the required age.
Dozens of apps did ask for a birth date, using a process similar to that of online liquor stores in the US: a ticked-box declaration to confirm age and accept the terms. But there is no way to verify the date entered or to prove that a parent has reviewed it.
The research also found that algorithmic recommendation systems served up harmful material and endangered children’s safety by connecting them with adult strangers.
In one example, the complaint said that searching “self-harm” through a child-aged avatar account on Twitter showed profiles of Twitter users sharing images and, in some cases, videos of them cutting themselves.
Although such content violates Twitter’s content policy, searches for #ouchietwt and #sliceytweet surfaced examples of Twitter users telling others what type of razors to use for self-harming and where to buy them, as well as users who shared extreme diet tips for achieving clinically underweight goal weights.
The FT reports that Twitter now blocks hashtags like #shtwt and #ouchietwt. However, the findings suggest that Twitter’s claims about enforcing its policy are not completely accurate.