Something should be on everyone's mind this morning in the London ad scene, but I suspect it isn't, because it's buried in some figures from comScore. The worrisome trend is that bots appear to be targeted at video sites more than at display sites.
It may not be readily apparent, but when comScore looked at its top 100 sites, 89% of those focused on display had less than 5% of their traffic made up of non-human visitors. On sites majoring on video, however, that proportion dropped to 64%, meaning that nearly a third had medium to high levels of non-human traffic (ranging from above 5% to 20%).
It could be a coincidence: there are now more pages out there with video content, so bots are simply stumbling across it. It could also be that video sites are being scoured more heavily by spiders, though I'm not sure why spiders would crawl video sites more than display portals. Which makes me wonder. There could be an altogether more cynical
explanation. I am not a fraud expert, and would never claim to be, but I know enough to ask the question -- are fraudsters trying to train their bots to look like people who watch videos, so that those bots are more valuable when they are redirected to a criminal gang's fake sites?
Bots visit legitimate sites to pick up cookies so that they look like real people; when one is then redirected back to a fake site, an unwitting brand sees exactly the type of person it wants to attract. The bot appears to be interested in all the associated topics, and so looks worth bidding for when that visit is offered through an ad exchange. Of course, the "person" is actually a bot with a pile of cookies, and the ad the brand bids for will never be seen because the site is fake -- and even if someone were on it, the ads would either be one pixel wide or stacked one behind the other in a window hosting multiple pieces of creative at the same time.
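To make the mechanism above concrete, here is a minimal, purely hypothetical sketch (not real fraud tooling, and not based on any actual exchange's API): a "visitor" accumulates interest cookies by browsing legitimate sites, and a brand's bidding logic pays more when those cookies match its target audience. All names, topics, and CPM figures here are invented for illustration.

```python
# Hypothetical illustration of cookie-based audience targeting,
# and why a cookie-laden bot can look like premium inventory.
from dataclasses import dataclass, field


@dataclass
class Visitor:
    """A browser session, real or bot; only its cookies are visible."""
    cookies: set = field(default_factory=set)

    def browse(self, site_topics):
        # Visiting legitimate sites tags the visitor with interest cookies.
        self.cookies.update(site_topics)


def bid_value(visitor, target_topics, base_cpm=2.0, uplift_per_match=3.0):
    # A brand on an exchange bids more for visitors whose cookie
    # profile overlaps its target audience (illustrative numbers).
    matches = len(visitor.cookies & target_topics)
    return base_cpm + uplift_per_match * matches


bot = Visitor()
bot.browse({"sport", "video"})   # cookies picked up on real publisher sites
bot.browse({"cars", "video"})

# Redirected to a fake site, the bot is auctioned as seemingly premium traffic.
print(bid_value(bot, {"video", "cars"}))       # → 8.0
print(bid_value(Visitor(), {"video", "cars"}))  # cookieless visitor → 2.0
```

The point of the sketch is that the bidding logic never sees the visitor itself, only the cookie trail -- which is exactly what a bot can manufacture.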
So it's easy to imagine that someone with a track record of watching videos on a related subject would be highly prized by a brand -- in fact, I recently heard of some CPMs (cost per thousand views) that were, to my mind, astronomical. It underscored just how high publishers' expectations run when they have video display ad units on offer or, even better, native video slots available.
Training a bot to look like an interesting, normal person is one thing; making it appear to be someone who goes around watching videos is quite another. It potentially opens up the industry's most valuable assets to fraud.
Put it this way: if you were a fraudster, you'd want your bots to do exactly what's required to appear worth targeting by the brands with the deepest pockets, through the internet's most expensive advertising positions.
I'm sure this is already happening and will not be news to those heavily involved in the battle against fraud -- but for me, today's figures sound a warning bell that cyber criminals are not lazily sitting back and counting the money they can con out of the industry; they're taking their "craft" to the next level. The drop in the proportion of sites with low non-human traffic, from nine in ten display sites to just under two in three video sites, is significant enough to warrant the question.
After making fortunes from display, are fraudsters gearing up to take their slice of the huge growth in demand for premium video inventory and viewers? I can't imagine a scenario in which this wouldn't be a likely consequence of video advertising taking off so aggressively.