CHEQ, a cybersecurity company, discovered that about 12% of website traffic from Twitter to its approximately 15,000 clients is non-human.
CHEQ's analysis of Twitter bot traffic -- looking at 5.2 million site visits from May 2021 through May 2022 -- found that nearly one in three site visits from Twitter in China was likely fake, and 29% of organic Twitter traffic from China was invalid. Similarly high rates were found in Taiwan (25%) and Hong Kong (32%).
“We don’t have any access to Twitter data, but through our technology we can deterministically say that about 12% of the web traffic from Twitter to our customers’ sites is not human,” said Guy Tytunovich, CEO at Cheq Technologies.
“We see traffic come into our customers’ websites and can tell where they originate, such as Google search, CNN or Twitter,” he said.
Twitter bots are not new, but Tesla CEO Elon Musk brought the problem to center stage when he planned to acquire the company.
Tytunovich describes the company as “an innocent bystander that sits on many website pages.”
The technology helps to identify many interesting trends, such as the bot traffic on Twitter.
Peiter Zatko, the former head of security at Twitter, alleged in a whistleblower complaint made public last month that he uncovered "extreme, egregious deficiencies" surrounding user privacy, security and content moderation.
He recently testified during a Senate Judiciary Committee hearing that Twitter’s cybersecurity “failures make it vulnerable to exploitation, causing real harm to real people."
Twitter hired Zatko, also known by his hacker name Mudge, in 2020 to lead security after hackers took over high-profile verified accounts. The company fired him in January.
Similarweb research suggests bots represent less than 5% of users, but also indicates that between 21% and 29% of Twitter's monetizable content comes from bots.
Twitter management says that only 5% of monetizable user accounts on its social networking service are controlled by bots. Musk claims that up to 20% of the accounts are bots.
CHEQ frequently studies bot activity originating from organic and paid traffic sources on the web to determine the validity of each user.
In a recent study, the company’s analysts looked at how much critical business metrics are skewed. They began with a general cross-industry study of activity on more than 50,000 websites, deploying more than 2,000 cybersecurity challenges on each website visitor to determine whether the traffic came from a bot, a bad actor, or a legitimate human user.
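The challenge-based approach can be illustrated with a toy sketch. Everything here is hypothetical -- CHEQ's actual challenges, weighting, and thresholds are not public -- but it shows the basic idea of aggregating many pass/fail signals per visitor into a verdict:

```python
# Toy sketch of challenge-based traffic classification.
# The challenge names and the 0.9 threshold are illustrative assumptions,
# not CHEQ's real methodology.
def classify_visitor(challenge_results: dict[str, bool]) -> str:
    """Map per-visitor challenge outcomes to a traffic verdict.

    challenge_results: challenge name -> True if the visitor passed it.
    """
    passed = sum(challenge_results.values())
    pass_ratio = passed / len(challenge_results)
    # A visitor failing a meaningful share of challenges is flagged.
    return "human" if pass_ratio >= 0.9 else "bot_or_bad_actor"


# Example: a visitor that executes JavaScript and moves the mouse
# looks human; one failing most checks is flagged.
print(classify_visitor({"js_executed": True, "mouse_moved": True}))
print(classify_visitor({"js_executed": False, "mouse_moved": False,
                        "headless_check": True}))
```

In a real system each challenge would carry its own weight and the threshold would be tuned against labeled traffic; the uniform pass ratio here is only for readability.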
Data was measured against several key metrics for each site, along with how those metrics would change when the fake traffic was removed. The report infers the business impact of those numbers.
The report delves into unique site visits, page views, bounce rates, and more, and explains that when marketers look at the data from these metrics, the outcome is skewed based on bot and fraudulent activity.
For example, if fake traffic were removed, page views would increase by 7.4%. Bots and fake users were found to have a lower page-view-per-session count overall than legitimate humans, showing that real human users typically browse more site pages than bots. If site page views per session are unusually low, it could be an indicator of increased fake-traffic presence, according to the data.
CHEQ also looked at the data by industry to see how much average page views per site visit increased when fake traffic was removed. The average number of page views per site visit overall was initially 2.74. When the fake traffic was removed, this increased to 2.94 page views -- for a total 7.4% increase in page views. The hospitality, medical, advertising and marketing and gaming industries saw the sharpest differences between their initial page views and how they increased when bots and fake users were removed.
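The percentage figure follows directly from the two averages. As a quick sanity check on the arithmetic (the rounded published averages give roughly 7.3%, in line with the report's 7.4% once rounding in the underlying numbers is accounted for):

```python
def pct_increase(before: float, after: float) -> float:
    """Percentage increase going from `before` to `after`."""
    return (after - before) / before * 100


# Report figures: 2.74 pages per visit including fake traffic,
# 2.94 pages per visit with fake traffic removed.
print(f"{pct_increase(2.74, 2.94):.1f}%")
```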
All regions, with the exception of Africa, saw significant increases in session duration with the removal of bots and fake users. This suggests that bots in most regions move around websites much more quickly than legitimate human users.
“Everything about my job is mind-boggling in terms of the data,” Tytunovich says. “Everything we see is counterintuitive.”
For instance, he says, small businesses see more general fraudulent activity -- not necessarily from bots -- but at roughly the same rate as large enterprises. Governmental fraud involving non-human behavior -- for example, when Chinese bots attempt to change public opinion in the U.S. -- is simpler and easier to detect than instances of click fraud.
The company attempts to predict challenges before they occur, rather than chase them afterward. "I got the idea from working in the Israeli military," he said. "We have a Red Team -- the attackers, CHEQ engineers trying to attack our own defenses, the Blue Team -- to find flaws and holes in our technology."