While there has been much discussion recently about Web advertising fraud, and several solutions have emerged to detect the most basic instances of cheating, it's clear that the online ad industry
still dramatically underestimates the problem.
We can and must do more to win this arms race, as the perpetrators of fraud use ever-changing and sophisticated methods to steal
advertising dollars. Much as spam made email an almost unusable tool in the late '90s, online advertising fraud is a massive issue.
comScore recently estimated that
"50% of Web traffic is non-human and mostly malicious," meaning that over $25 billion in digital ad spend was wasted globally in 2012. As users and advertising migrate rapidly to mobile
platforms, the problem only grows more severe: mobile is a far more complex environment, with much less standardization than the desktop Web, and it is more vulnerable than ever to ad fraud.
While it is
a step in the right direction that the ad industry has embraced 'viewability' as a key metric to ensure ad budgets are not wasted, this is just a small step that needs to be taken much further and
continually improved upon. Why? Because viewability is only one metric of hundreds that need to be measured, and the perpetrators of fraud have already developed ways to engineer around
solutions that check for viewability.
Take, for example, “bots” — computer software designed specifically to mimic human online behavior, programmed to browse, click,
register, and sometimes even make purchases (with stolen credit card numbers) just as a real human being would. These bots are now so sophisticated that they take the time to
‘view’ Web sites and ads, fully rendering them in a browser window before clicking or moving on. This type of behavior easily overcomes viewability measurement tools and has
allowed fraudsters to rack up billions in earnings.
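Because a rendering bot passes a viewability check outright, detection has to weigh many behavioral signals at once. The sketch below is a hypothetical illustration of that idea; the signal names and thresholds are invented for the example, not taken from any real fraud-detection product.

```python
# Hypothetical sketch: a single metric such as viewability is easy for a
# sophisticated bot to satisfy, so this toy scorer combines several
# behavioral signals. All names and thresholds here are invented.

def score_session(signals):
    """Return a bot-likelihood score (0.0-1.0) from behavioral signals.

    signals: dict with keys
      'ad_viewable'       - bool, did the ad render in-view (bots can pass this)
      'mouse_path_points' - int, sampled cursor positions during the visit
      'dwell_seconds'     - float, time spent on the page
      'clicks_per_minute' - float, observed click rate
    """
    score = 0.0
    # A viewable ad alone proves little: rendering bots pass this check.
    if not signals.get('ad_viewable', False):
        score += 0.2
    if signals.get('mouse_path_points', 0) < 5:
        score += 0.4   # little or no cursor movement
    if signals.get('dwell_seconds', 0.0) < 1.0:
        score += 0.2   # left almost immediately
    if signals.get('clicks_per_minute', 0.0) > 30:
        score += 0.4   # inhumanly fast clicking
    return min(score, 1.0)

human = {'ad_viewable': True, 'mouse_path_points': 120,
         'dwell_seconds': 14.0, 'clicks_per_minute': 2.0}
render_bot = {'ad_viewable': True, 'mouse_path_points': 0,
              'dwell_seconds': 0.4, 'clicks_per_minute': 90.0}

print(score_session(human))       # low score despite identical viewability
print(score_session(render_bot))  # high score: viewability passed, behavior failed
```

Both sessions pass viewability; only the combination of behavioral signals separates them, which is the point of measuring "one metric of hundreds."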
The increase in social media activities like registration, voting, commenting, and sharing has only made matters worse, as these are all
typical human behaviors that are easy for bots to mimic. In fact, the ease with which botnets can mimic real Twitter and other social media accounts unfortunately allows them to easily establish
credibility for their ‘users’. (Do an experiment as you read this article and type in “buy Twitter followers” into Google … how many of those followers do
you think are real and how many are bot accounts?)
Bots typically begin as some form of malware that users "catch" from a site they visit or that comes bundled with a free application or video
they download. Millions of users have infected browsers and never know it, unwittingly taking part in botnets. In other cases, bots could in theory run on banks of computers set up
for the sole purpose of mimicking human browsing behavior; in practice, however, operating a computer is far more expensive than secretly taking over one that someone else is already
operating.
Botnet or fraudulent behavior could also come from ‘click farms’ — quite literally human sweatshops overseas where employees are paid or forced to browse Web
sites, click on ads, and register for product offerings. This type of activity easily defeats viewability standards (since humans are actually looking at sites) and even defeats tools that check
for US-based IP addresses, since the fraudsters use VPNs (virtual private networks) to act from within US-based addresses.
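A geography filter on its own therefore passes click-farm traffic that exits through a US VPN; one common mitigation is to also match the address against known datacenter or VPN ranges. The sketch below illustrates this with Python's standard `ipaddress` module; the example IP range is a documentation placeholder, not a real threat feed.

```python
# Hypothetical sketch: a naive geography check passes click-farm traffic
# routed through a US VPN exit. Matching the IP against known VPN/datacenter
# ranges adds a second signal. The range below is an invented placeholder.
import ipaddress

# Toy "known VPN/datacenter" ranges; a real system would pull these
# from a continuously updated threat-intelligence feed.
VPN_RANGES = [ipaddress.ip_network('198.51.100.0/24')]

def passes_geo_check(ip, reported_country):
    # All a naive geo filter tests: does the traffic appear US-based?
    return reported_country == 'US'

def looks_like_vpn(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in VPN_RANGES)

# An overseas click farm exiting through a US VPN address:
farm_ip, farm_country = '198.51.100.7', 'US'
print(passes_geo_check(farm_ip, farm_country))  # geo check is defeated
print(looks_like_vpn(farm_ip))                  # the extra signal catches it
```

As with viewability, the lesson is that no single check is sufficient; each one is just another signal to combine.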
A comScore vCE study, which measured validated ad campaign
delivery against human audiences, showed that just 2.8% of ads co-occurring with malware processes running on a user's machine were viewable to an actual Web user. Moreover, a study last
year found that for very small sites (those with fewer than 2,500 monthly visitors), 83% of traffic comes from non-human sources, both bad bots and good bots such as search-engine indexers, with bad bots
accounting for 49% of traffic.
What can we do to prevent ad fraud if basic tools like viewability metrics and geographic/IP restrictions don’t work? We have to stop
treating the issue like an advertising problem and realize it’s a cyber-security issue.
Cyber-security companies have been fighting issues with spam, malware, viruses,
and even botnets that target e-commerce sites for years. The key difference here is that, unlike most ad agencies, security companies realize that this is an arms race, a competition that will
never end but always evolve. The digital ad industry has to continually invest in new solutions, both through internal engineering and by using third-party vendors, and realize that as soon as a
new solution is launched there will be a “blackhat” attempting to reverse-engineer and defeat the solution.
Advertising agencies need network security engineers and vendors
just as much as their clients (brand advertisers like Nike, Ford, and American Express) do, but many do not have them.
While this may sound like a grim prognostication, it is also realistic and
positive, in that we can see a road map to a solution. The sooner we stop viewing ad fraud as a problem that will disappear once the right single solution is found, the sooner we can get on with winning the ad
security arms race.