SEO Experiment: Let Google Sort it Out
One way to keep Google from indexing certain pages is to use <meta> elements in the <head> section of your web documents. For example, if I want to prevent Google from indexing and archiving a particular page, I would add the following code to the head of that document:
<meta name="googlebot" content="noindex,noarchive" />
I’m no SEO guru, but it is my general understanding that it is possible to manipulate the flow of PageRank throughout a site through strategic implementation of <meta> directives.
After thinking about it, I recently decided to remove the strategic <meta> directives from my pages here at Perishable Press. This strategy was initially designed to allow indexing and archiving of only the following pages:
- The Home Page
- Single Posts
- Tag Archives
Everything else included <meta> directives that prevented compliant search engines from indexing and archiving the page. This common SEO formula was implemented over a year ago, and the initial results seemed to indicate:
- Reduced number of pages indexed by Google
- Reduced number of pages in the supplemental index
- Slight reduction in overall number of unique visitors
- Improved average visitor duration (lower bounce rate)
These results have been obscured over time, primarily due to the continued growth of the site. For example, the number of indexed pages has increased, but so has the total number of pages available for indexing. Likewise, the number of unique visitors has increased, apparently because of the greater amount of indexed content. Further, I don’t think the supplemental index even exists these days (does it?).
So, in a bit of an experiment that began earlier this year, I removed the following conditional <meta> code from my theme’s header.php files:
<?php if(is_home() && (!$paged || $paged == 1) || is_tag() || is_single()) { ?>
<meta name="googlebot" content="index,archive,follow,noodp" />
<meta name="robots" content="all,index,follow" />
<meta name="msnbot" content="all,index,follow" />
<?php } else { ?>
<meta name="googlebot" content="noindex,noarchive,follow,noodp" />
<meta name="robots" content="noindex,follow" />
<meta name="msnbot" content="noindex,follow" />
<?php } ?>
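As an aside, the same sort of conditional logic could be hooked into wp_head from functions.php instead of being hard-coded into the header template. A rough sketch (hypothetical function name, not the code actually used on this site) might look something like this:
<?php
// Hypothetical sketch: output conditional robots directives via the
// wp_head hook so the theme's header.php stays untouched.
function pp_robots_meta() {
    if ((is_home() && !is_paged()) || is_tag() || is_single()) {
        echo '<meta name="robots" content="index,follow" />' . "\n";
    } else {
        echo '<meta name="robots" content="noindex,follow" />' . "\n";
    }
}
add_action('wp_head', 'pp_robots_meta');
?>
Either way, the effect on search engines should be the same; it is purely a question of where the logic lives in the theme.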
Within a week or so of removing these “page-sculpting” directives, I began to notice a fairly predictable pattern emerge:
- Increased number of pages indexed by Google
- Slight increase in overall number of unique visitors
- Slight decrease in average visitor duration
These results were expected and seem to make sense:
- Removing the noindex directives allows Google to include more pages in the index
- More pages in the index correlates with more unique visitors
- More combinations of indexed page content result in more irrelevant matches, but those visitors leave quickly and thereby increase the bounce rate
At this point, I’m not too concerned one way or another about SEO and trying to get a gazillion hits and tons of page rank. Considering that I operate the site for my own benefit and don’t collect any advertising revenue, I would say that I have more than enough traffic to keep me busy.
The point of this exercise was to experiment with a commonly employed SEO strategy. Given the initial results, it looks like I don’t need the conditional meta directives after all. So for now, rather than fighting it, I think I will just kick back, focus on creating quality content, and let Google take care of itself.
Do you use any similar rank-sculpting techniques? Do they work? It would be great to hear if anyone has experienced a significant statistical impact using similar methods.