There's been a lot of discussion about the ongoing fight for Web supremacy between Google and Facebook but, to date, the debate has centered on matters like privacy and metrics like page views and ad dollars.
In the past week, however, it appears each company is taking direct aim at the heart of the other's core business.
First, AllFacebook.com reported that Facebook launched an "open graph search engine." Then, Kevin Rose sent the blogosphere into, well, the stratosphere with a tweet suggesting that a new Google social network called "Google Me" is imminent. (Great timing as I just launched a line of Google Me t-shirts on KosherHam.com.)
In today's column, I'll dissect Facebook's search aspirations and, in my next column, I'll examine the potential for (another) new Google social network.
Let's start with a history lesson.
The Missing Link
Google's PageRank methodology for crawling and ranking Web sites is based largely on linking relationships among them. As noted by Larry Page and Sergey Brin in the 1998 Stanford research paper that outlined PageRank, Google "makes use of the link structure of the Web to calculate a quality ranking for each Web page."
Later in the paper, Page and Brin explained that links are the Internet version of academic citation. A link from one Web site to another was a Webmaster's way of saying, "Here's a site that's worth visiting." In aggregate, all the links pointing to a Web site were a great measure of its worth, especially when considering the authority of the domains providing said links.
As Page and Brin put it, "Intuitively, pages that are well cited from many places around the Web are worth looking at. Also, pages that have perhaps only one citation from something like the Yahoo! homepage are also generally worth looking at. If a page was not high quality, or was a broken link, it is quite likely that Yahoo!'s homepage would not link to it. PageRank handles both these cases and everything in between by recursively propagating weights through the link structure of the Web."
All told, per Page and Brin, "The citation (link) graph of the Web is an important resource that has largely gone unused in existing Web search engines." After all, back in 1998, search engines of the day relied more on human editors judging quality or computer crawlers scanning copy on Web pages for keywords that matched the query.
Another critical component of PageRank was that it allowed Google to scale. As Page and Brin observed, "High quality human maintained indices such as Yahoo!" are "subjective, expensive to build and maintain, slow to improve, and cannot cover all esoteric topics."
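The recursive weight propagation Page and Brin describe can be illustrated with a toy power-iteration sketch. The graph below is invented for illustration; the 0.85 damping factor matches the value suggested in the original paper:

```python
# Minimal power-iteration sketch of the PageRank idea: each page's rank
# flows out along its links, and ranks are recomputed until they settle.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-site Web: the "Yahoo!-style" hub links out to both
# smaller sites, which cite each other and the hub in turn.
graph = {
    "hub.com": ["site-a.com", "site-b.com"],
    "site-a.com": ["site-b.com"],
    "site-b.com": ["hub.com"],
}
ranks = pagerank(graph)
```

Note how site-b, cited by both the hub and site-a, ends up with the highest score even though no human editor ever judged it; that's the scaling advantage Page and Brin highlight.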
To this day, PageRank has stood the test of time and, thanks to frequent refinement from Google's Webspam team led by Matt Cutts, remains the biggest reason Google has the most relevant results and dominant global market share.
However, the days of links reigning supreme might be numbered.
Power to the People
In the PageRank model, Webmasters quite literally have all the juice. The links they include (or don't) on their sites have a big hand in how Google indexes and ranks the Web.
This made sense when there was no other segment of the Internet population sharing "citations" or other signals of authority. But, today, of course, the 400 million active Facebook users provide such signals every time they "like" a Web site.
Given that the majority of Web users more closely match the profile of an average Facebook user than a Webmaster, perhaps the "like" citations ought to be worth more than the "link" citations.
You Like Da Juice?
So, could "likes" trump links? Perhaps from a micro-relevancy standpoint, but what about scale? How many "likes" would it take in aggregate to give Facebook a good indication of quality? (And how many "likes" would it take to get to the center of a Tootsie Pop?)
There's also the matter of comprehensiveness to consider. It's quite unlikely that every single Web site in the world would be "liked." But what if it became part of standard protocol for every Webmaster to "like" their site at launch? This would signal to Facebook that a new site is available and ready to be indexed. I can just see the "like" farms revving up in Costa Rica and India.
Maybe the short-term solution for Facebook and/or its search partner, Microsoft, is to offer "like" rankings as a toggle on top of current Bing search results. Rather than change the entire algorithm, just present a refinement tool to organize listings based on Facebook "likes."
And let's go one step deeper and allow for drill-downs mirroring Facebook audience segments -- for example, allowing you to sort listings based on "likes" of men aged 25-54 in New York, or women 35-64 with at least 2 kids, or adults 18-34 who "like" Pearl Jam.
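The toggle-plus-drill-down idea above can be sketched as a simple re-ranking pass over existing results. Everything here is hypothetical: the data model, the segment filters, and the "like" counts are invented to show the shape of the feature, not any actual Bing or Facebook API:

```python
# Hypothetical "like" refinement toggle: re-order an engine's result list
# by Facebook-style "like" counts, optionally restricted to a segment.
from dataclasses import dataclass

@dataclass
class Like:
    gender: str  # "m" or "f" in this toy model
    age: int

def likes_matching(likes, gender=None, min_age=None, max_age=None):
    """Count likes from users in a segment (None means no filter)."""
    return sum(
        1 for l in likes
        if (gender is None or l.gender == gender)
        and (min_age is None or l.age >= min_age)
        and (max_age is None or l.age <= max_age)
    )

def rerank_by_likes(results, likes_by_url, **segment):
    """results: URLs in the engine's original order. Sort by segment
    'like' count; Python's stable sort keeps the engine's order as
    the tiebreaker, so the toggle only refines, never scrambles."""
    return sorted(
        results,
        key=lambda url: -likes_matching(likes_by_url.get(url, []), **segment),
    )

# Invented sample data.
likes_by_url = {
    "a.com": [Like("m", 30), Like("f", 40)],
    "b.com": [Like("m", 28), Like("m", 45), Like("m", 33)],
}
# Drill-down mirroring the example above: men aged 25-54.
ordered = rerank_by_likes(["a.com", "b.com"], likes_by_url,
                          gender="m", min_age=25, max_age=54)
```

Because the refinement runs on top of the existing ranking rather than inside it, the core algorithm stays untouched, which is exactly the appeal of a toggle.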
Siri-ously, Does It Even Matter?
While presenting Web pages based on "likes" instead of, or in addition to, links would be a step toward increased relevance, I'm not sure it would propel Facebook, Bing or any other entity ahead of Google in search. The Google habit is too well formed, and incremental improvements in relevance are not enough to get people to switch. As Gord Hotchkiss once said, it's going to take an iPhone to change the paradigm of search and give Google a run for its money.
As I've discussed in recent columns, regardless of how the Web is indexed, the future of search lies in applications (or, as Gord likes to call them, app-sisstants) like Siri (which, as it happens, was recently scooped up by Apple) that return actions instead of results.
That said, there will always be a need for finding Web sites, and it will be interesting to see if links or "likes" win out as the preferred method for ranking. One has to wonder if Google realizes the power of the "like" and that's why it's so gung-ho on cracking the social network nut.
Might be time to design some "Like Me" shirts!