Commentary

SEO 2.0 And The Pageless Web: The RIA Search Conundrum

Search engines and natural search optimizers are starting to confront new difficulties in crawling, indexing and measuring Web site content.  In the page-based paradigm, these activities have been relatively straightforward, but challenging questions are arising as more Webmasters employ rich Internet applications (RIAs), designed fundamentally to improve Internet navigation and user experience.  As RIA adoption grows, engines, marketers, SEMs and analytics companies will all need new strategies in order to reap the mutual benefits of finding, being found, and being counted.

The pageless Web. The implementation of rich Internet applications upends many of the cornerstones of search engine algorithms and SEM strategy.  The central issue is that the searchable Web is built on the crawling and indexing of pages, each with its own unique URL address.  RIA-based designs (e.g., Flash, and asynchronous JavaScript and XML, a.k.a. AJAX) rely less on page reloads, and often have little or no need for unique URLs at all.
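
To see concretely why this breaks crawling, consider a minimal sketch of the AJAX pattern: content arrives via a background request and is written into the current page, so the address bar never changes and no new URL is ever created for an engine to index. The endpoint and element ID below are hypothetical, chosen only for illustration.

```javascript
// Minimal AJAX sketch (hypothetical endpoint and element ID).
// The listing is fetched in the background and injected into the
// current page; the URL in the address bar never changes, so a
// crawler following links sees only the initial shell page.
function loadProducts() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/fragments/products.html", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Replace the content area in place: no page reload,
      // no new URL, nothing new for the index.
      document.getElementById("content").innerHTML = xhr.responseText;
    }
  };
  xhr.send(null);
}
```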

Unless specific search strategies are taken into account, the gains in user satisfaction will come at the expense of natural search engine performance and returns.  At their core, rich Internet applications shield data from search engines, and data management and measurement are becoming correspondingly more complex.

The user experience of some RIA-based interfaces is nothing short of stunning compared to similar page-based experiences.  With benefits such as seamless data delivery, faster query responses and less need to refocus on freshly loaded pages, it is easy to understand why marketers will inevitably adopt RIAs in droves.  To see examples of RIA in action, check out www.Dictionary.hm, Gmail, and Become.com.

Responsibility is on the engines.  Search engines are starting to feel the crunch of rich-interface adoption, not only in their ability to find and crawl relevant new content, but also in their own deployment of AJAX in various applications, which effectively decreases a site’s page views, one of the primary measures of Web popularity and performance.

Last week, ComScore made something of a landmark Web analytics statement when it declared MySpace the No. 1 site on the Internet.  The caveat was that this page-based triumph might be an aberration, owing to Yahoo’s wide implementation of AJAX and RIA in Yahoo Maps and other pageless features. Given their own heavy use of Web-based applications and their reliance on basic Web crawlability, Google and Yahoo may ultimately have the greatest need for alternative crawling and measurement solutions.

The responsibility is also on skilled optimizers. At SES in San Jose last summer, I spoke with an Ask engineer about optimization workarounds for rich Web interfaces, and was told that the best option at this point is to build a second, mirrored site with a unique URL structure for engines to crawl.  The burden for Webmasters is that two complete Web sites must then be managed in order to gain the search engine visibility benefits.

Matt Cutts, senior engineer at Google, was kind enough to answer a few questions for this article, and concurred that building a second, mirrored site with unique URL addressing is the current workaround for engines. Cutts added that ideally, “Webmasters would design sites with users, [accessibility], and search engines in mind. Google does quite a good job on much JavaScript, but complicated AJAX can present issues for any crawler.”
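
One widely discussed way to honor that advice without maintaining a full mirror site is progressive enhancement (sometimes called “Hijax”): every piece of content gets a plain, crawlable link, and script intercepts the click for JavaScript-capable users. The sketch below illustrates the idea under those assumptions; the URLs and element IDs are hypothetical, and this is not presented as a Google-endorsed recipe.

```javascript
// Progressive-enhancement sketch (hypothetical URLs and IDs).
// In HTML: <a id="widgets-link" href="/products/widgets.html">Widgets</a>
// Crawlers and script-less browsers follow the ordinary href to a
// real, indexable page; JavaScript-capable browsers get the richer
// in-place experience instead.
var link = document.getElementById("widgets-link");
link.onclick = function () {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/fragments/widgets.html", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      document.getElementById("content").innerHTML = xhr.responseText;
    }
  };
  xhr.send(null);
  return false; // cancel normal navigation for JS users only
};
```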

Cutts also said that rich interfaces are not an immediate threat to Google’s relevancy.  “The vast majority of sites are still built as static Web pages, so we don't foresee a problem at this time. The nice thing is that people building RIA/AJAX sites tend to have a technical skill set, and thus at least consider the impact of search engine crawlability.”

When asked if Sitemaps.org may ever be used as an alternative to creating mirrored sites, Cutts said that “it is a possibility,” but reiterated the need to design for multiple users and search engines.
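
For reference, a Sitemaps.org file is simply an XML list of the URLs a site wants crawled, which is why it is attractive for RIA sites whose content cannot be reached by following links. Below is a minimal, hypothetical file following the published 0.9 protocol; the URL is illustrative.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per indexable address; <loc> is required,
       the remaining fields are optional hints to the crawler. -->
  <url>
    <loc>http://www.example.com/products/widgets.html</loc>
    <lastmod>2006-10-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```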

Given these efficiencies and improvements in user experience, expect growing demand for Web-based applications at the enterprise level in the coming months.  But marketers and developers building rich Internet applications should also be aware of the potential loss of natural search benefits, or be prepared to create and maintain full mirror sites to appease crawler-based engines and other user agents. All eyes should stay on Yahoo and Google, as their continued deployment of RIA and reliance on publisher data will force them to expand the boundaries of Web site crawlability and measurement.
