Commentary

Walking The (Thin) Line Between SEO And Web Spam

An article that appeared on SEOBook.com last week has me a little rattled. “Rank Modifying Spammers” is an interesting exploration of Google’s search algorithm and the possible direction the company may take in its fight against Web spam. The author’s hypotheses stem from a Google search patent that was recently discovered by Bill Slawski of SEOByTheSea. The potential implications of that patent, as articulated by SEO Book, are why I’m a little anxious.

The patent illustrates a method for Google to identify “rank-modifying spammers.” Distilling the lengthier discussions in the two posts referenced above: Google will place search results into a temporary state of limbo while rank modifications are occurring. So as content is being reassessed (presumably following SEO attention), the page(s) in question will fluctuate across the results pages before settling into new positions. If SEOs observe those fluctuations and make further content refinements before the pages have wholly settled, then Google knows that a “spammer” is behind the changes, and the page and site may be subject to spam-related penalties.
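To make that mechanism concrete, below is a minimal sketch of the behavior the two posts describe: an artificial transition period during which the displayed rank fluctuates, plus a flag for anyone who keeps tweaking while the rank is still unsettled. Every name, threshold, and formula here is a hypothetical illustration of the concept -- none of it is taken from the patent itself.

```python
# Hypothetical sketch of the "rank transition" idea described above.
# The class, the 30-day window, the jitter range, and the blending
# formula are all assumptions for illustration, not the patent's method.

import random

TRANSITION_PERIOD = 30  # assumed length (in days) of the rank "limbo"

class PageState:
    def __init__(self, old_rank, target_rank):
        self.old_rank = old_rank        # position before the content change
        self.target_rank = target_rank  # position the new signals would earn
        self.days_in_transition = 0
        self.modified_during_transition = False

    def displayed_rank(self):
        """During the transition, show a noisy position somewhere between
        the old and target ranks instead of jumping straight to the target."""
        if self.days_in_transition >= TRANSITION_PERIOD:
            return self.target_rank
        progress = (self.target_rank - self.old_rank) \
            * self.days_in_transition // TRANSITION_PERIOD
        jitter = random.randint(-3, 3)  # the deliberate fluctuation
        return max(1, self.old_rank + progress + jitter)

    def register_modification(self):
        """Record a further content/link change made by the site owner."""
        if self.days_in_transition < TRANSITION_PERIOD:
            self.modified_during_transition = True

    def looks_like_rank_modifying_spam(self):
        # Reacting to the unsettled, fluctuating rank is the tell:
        # a publisher not watching the SERPs has no reason to chase the noise.
        return self.modified_during_transition

# A page whose recent changes would move it from position 40 to 12.
page = PageState(old_rank=40, target_rank=12)
for day in range(5):
    page.days_in_transition = day
    print(f"day {day}: shown at position {page.displayed_rank()}")

page.register_modification()  # the SEO tweaks again mid-transition
print("flag as spammer?", page.looks_like_rank_modifying_spam())
```

The deliberate noise is the whole trick: only someone actively trying to engineer a ranking has a motive to respond to it before it settles.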

The patent broadly describes webmasters who attempt to modify their search engine rankings as “spammers.” To me, that seems to implicate all SEOs.

Are SEOs spammers?

In May I wrote a column, “Doesn’t Google Owe SEOs Something?” for which I was practically vilified. My contention then -- as it remains today -- is that SEOs have delivered value to Google, Bing, and others. We have studied the methods that search engines employ to discover and score content, and in response have deliberately packaged our sites for easy consumption. From a white hat SEO’s perspective, we have done our best to make Google’s job easier. Google should be thanking us, right?

The counterargument (one of dozens that I received) that gave me the most pause came from my friend and colleague Abdul Khimani. Abdul has a theory I hadn’t considered before: that Google’s evolution as a company has been set back several years because it has had to funnel its energies toward fighting Web spam. He believes that if SEO of any sort (black hat or white) had never existed, Google would be better off today.

It’s a counterpoint that can never be tested, of course -- although I think Abdul may be onto something. But isn’t there some degree of inevitability at play too? Anytime a new paradigm emerges and presents economic opportunity, isn’t it inevitable that someone will attempt to exploit it? In the case of search engines, spam was bound to happen.

The creation of the Webmaster Guidelines seems to have provided directional counsel to white hat webmasters while helping in the fight against Web spam. Otherwise, I can imagine a scenario in which all webmasters would have felt compelled to “spam” in order to compete (it sounds like baseball in the steroids era).

Is Google the SEO’s enemy?

This leads me back to the SEO Book article and the patent in question. Does Google genuinely believe that all SEO is spam? The patent language can surely lead one to that conclusion, although the presence of the Webmaster Guidelines and Google’s official statements to the contrary seem to rebut it.

But there really is no room for ambiguity in patent applications, so allow me, briefly, to assume that Google doesn’t like SEOs. Whose interests are served when rank manipulation occurs? SEOs want their content to be noticed and positioned as highly as possible for competitive search queries. Search engines want to maximize revenue per search (RPS) via paid advertising, and the quality of the organic index enables that ad delivery.

Do search engines really stand to enhance RPS if all sites are penalized equally when “rank modifying” activities are detected? I don’t believe so, because it could cut too deeply into the quality of the results set.

And Google would be wise to tread carefully here, focusing on recognizing and rewarding the legitimate content producer. Broadly labeling SEO as spam could produce substandard organic results, hurting near-term ad revenues. A longer-term, sustained view of SEO as spam could accelerate a decentralization of search into myriad alternate destinations (e.g., ESPN.com for sports, Facebook for social context, etc.).

To stymie that shift, Google should continue to recognize white hat SEO for what it is: a content marketing best practice, not rank-modifying spam.

3 comments about "Walking The (Thin) Line Between SEO And Web Spam".
  1. Rick Noel from eBiz ROI, Inc., August 27, 2012 at 7:59 p.m.

    Nice article, Ryan. I think your insight that "search engines want to maximize revenue per search (RPS) via paid advertising, and the quality of the organic index enables that ad delivery" is basic yet profound, and is the key to understanding why search engines must attack web spam in order to survive. Why would it seem like heresy to rank the best content first? Wouldn't that motivate publishers to put their best foot forward, while no longer rewarding those who know how to "jury-rig" the system for their own exploits? The only way this system falls off track is if the best content does not end up on top, which unfortunately is not always the case.

  2. Reg Charie from DotCom-Productions, August 27, 2012 at 11:14 p.m.

    Google is out to get the folks who want to modify their rankings beyond the organic.

    They have removed low-quality links, and have found and penalized sites whose linking profiles fall outside the organic.

    They have changed their algorithms so that relevance is uppermost. Relevance replaces the old PageRank algorithms, which did not take it into consideration.

    They have even removed a basic function of a link, the anchor text's predictive function, because it could be manipulated.

    Who is responsible for giving Google the most grief?
    Who is it that turned PageRank into a useless metric?

    The SEO practitioners, that's who.

    @Rick,
    I don't think just showing the best content first will achieve what Google wants, which I would assume is to increase relevance while lowering costs.

    Every link Google follows costs them money in processor time and data storage.
    Imagine how much they could save if they did not have to follow useless links.

    Before Panda/Penguin, the useless-link component just multiplied like rabbits in Australia.
    PageRank's scale grew out of all proportion.

    The original paper on Google, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," states that "Using anchor text efficiently is technically difficult because of the large amounts of data which must be processed. In our current crawl of 24 million pages, we had over 259 million anchors which we indexed."

    That was then. On today's internet, the indexed Web contains at least 8.2 billion pages (Tuesday, 28 August 2012).
    How many links is that?

    The new patent is only going to put a pause between implementation and effect.

    It is also going to teach SEOs to get the on-page work done right the first time.

  3. Rick Noel from eBiz ROI, Inc., August 27, 2012 at 11:28 p.m.

    You are right, Reg. Following links costs Google resources. According to numbers shared by Google at a press event earlier this month (Aug. 8, 2012), Google currently crawls 20 billion pages a day and has seen 30 trillion URLs. They also touted serving 100 billion searches per month, which is over 3 billion searches a day. Ranking well should be about having the best content and addressing needs that currently go unmet, which to me is not web spam, and supports a business model that is beneficial first and foremost to the user, then by extension to Google and the publisher. At $2.785 billion in Q2 2012 earnings, I think Google will continue to be able to afford to crawl!
