Around the Net

Accidentally Blocking Link Juice

There are several ways to block pages from appearing in search engine indexes. Rand Fishkin provides insight into one after noticing that several Web sites seeking to block bot access to pages on their domains have been using robots.txt. He calls the practice "fine," but tries to clarify a few misunderstandings about what blocking Google/Yahoo!/MSN/other search bots with robots.txt actually does.

Fishkin suggests conserving link juice by using nofollow when linking to a URL that is disallowed in robots.txt. And, he writes, if disallowed pages have acquired link juice, particularly from external links, consider using meta noindex, follow instead so they can pass link juice to places on the site that need it.
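
A minimal sketch of the two approaches, using a hypothetical /private/ directory and example.com purely as placeholders:

    # robots.txt -- blocks crawling, but the blocked URL can still
    # accumulate (and strand) link juice from links pointing at it
    User-agent: *
    Disallow: /private/

    <!-- When linking to a disallowed URL, nofollow avoids leaking juice -->
    <a href="http://example.com/private/page.html" rel="nofollow">Private page</a>

    <!-- Alternative: let bots crawl the page, but keep it out of the
         index while still letting it pass juice to pages it links to -->
    <meta name="robots" content="noindex, follow">

Note that the meta tag approach requires removing the robots.txt disallow for that page: a bot must be able to crawl the page in order to see the tag at all.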

Read the whole story at SEOmoz Blog »
