OK, a few of you are gluttons for punishment. You asked me to give you some details as to how I researched and wrote my SEO 2009: Adapt or Die piece. Here are the basics.
Read the Patents
I subscribe to a Google blog search for “Google Patent”, and scour the news that I read.
I also go search the US Patent Office for Google-related patents.
In this case, some time ago I came across a 2005 patent application regarding information retrieval based on historical data. It generated a lot of interest at the time because it suggested Google might use domain age as a quality signal. What really got my spidey-sense tingling, though, was this bit of information in the patent:
36. The method of claim 1, wherein the one or more types of history data includes information relating to user behavior associated with documents; and wherein the generating a score includes: determining user behavior associated with the document, and scoring the document based, at least in part, on the user behavior associated with the document.
37. The method of claim 36, wherein the user behavior relates to at least one of a number of times that the document is selected within a set of search results and an amount of time that one or more users spend accessing the document…
…47. The method of claim 45, wherein the scoring the document includes: analyzing the user maintained or generated data over time to identify at least one of trends to add or remove the document, a rate at which the document is added to or removed from the user maintained or generated data, and whether the document is added to, deleted from, or accessed through the user maintained or generated data, and scoring the document based, at least in part, on a result of the analyzing.
In English, this means Google can track which results you click from a search and what share of your time you spend on each of them. It also means Google is looking at bookmarking, social voting, SearchWiki, Google Notebook and who-knows-what-else to get a clue as to which web pages matter and which don’t.
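To make claims 36 and 37 concrete, here’s a minimal sketch of the kind of behavioral scoring they describe: rank a document by how often searchers select it and how long they stay. To be clear, the function, the weights and the 5-minute cap are all my own invention for illustration, not anything Google has published.

```python
# Hypothetical sketch of the behavior signals in claims 36-37:
# score a document by how often it's selected from the results
# and how long users spend accessing it. Names and weights are
# my own assumptions, not Google's.

def behavior_score(clicks, impressions, dwell_times):
    """Blend click-through rate with average dwell time (seconds)."""
    if impressions == 0 or not dwell_times:
        return 0.0
    ctr = clicks / impressions
    avg_dwell = sum(dwell_times) / len(dwell_times)
    # Cap dwell at 5 minutes so one long visit can't dominate.
    return ctr * min(avg_dwell, 300) / 300

# A page that's clicked and then read for minutes should beat one
# that's clicked just as often but abandoned in seconds.
sticky = behavior_score(clicks=80, impressions=100, dwell_times=[140, 200, 95])
bounced = behavior_score(clicks=80, impressions=100, dwell_times=[5, 8, 3])
print(sticky > bounced)  # True
```

The point of the sketch: two pages with identical click-through can land very different scores once time-on-page enters the formula, which is exactly the shift the patent hints at.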
That’s a profound shift from the days when links, site structure and keyword density were all that mattered.
So, the patents seemed to point to a shift towards behavioral ranking. But I graduated from law school with a B- average, so I never trust my interpretation of legal documents. I need a bit more.
Test My Assumptions
This part isn’t exact. Marketing never is. Sorry.
I launched, nearly simultaneously, three test sites. I can’t tell you what they are because I don’t want to wreck my sites’ rankings – they’re legit sites. I just used them as experiments.
Site A was an application that let folks compare how their car performs next to others. It’s nearly uncrawlable, except for a few generic information pages.
Site B was just plain silly. It’s only one page.
Site C was a pure keyword-sniping site. A blog built for one purpose: To get a top ranking for a juicy key phrase. But it’s loaded with great content.
All three sites targeted keywords with nearly-equal competition, both in number of competing pages and apparent level of optimization.
Site C jumped up in the rankings within a few weeks. It was super keyword-relevant. It attracted links, too. But average time on site for my target keyword was under 1 minute, and the bounce rate was over 80%. Site C yo-yos up and down in the rankings and has yet to stabilize, in spite of continued writing.
Site B got a ton of links and mentions and generated some buzz. But it had almost zero content. For months it didn’t rank. But it did average over 2 minutes time on site for my target keyword, and a very low bounce rate of 30% for that keyword. After a couple months it gained a top 3 ranking and has stayed there ever since. I haven’t updated the site since.
Site A got lots of traffic for a short time, and has since tailed off to almost nothing. It has, however, maintained a time on site for my target keyword of over 2 minutes, and a low 30% bounce rate. It gained a high ranking very quickly and has stayed there. I’ve not touched the site since I launched it.
Which Site Won?
None of them won. They all finished about even, with decent rankings for their target phrases and an equal share of the available traffic.
Which made no sense at all. If you’re banking on the traditional hallmarks of good SEO, site C should’ve won: It had more content, the same link authority and the best keyword targeting.
How did a dinky little one-page site with almost no content (site B) and a few links keep up? The only ways it outperformed site C were bounce rate and time on site for my target keyword.
The same held true for site A. It looks awful in every way, except that time on site and bounce rate for my target keyword were nearly 3x better than for site C.
My conclusions, such as they are:
- Google is weighing site performance and user behavior for specific keywords. Sites with lower bounce rates and higher time on site after search (TOSAS) for a specific keyword gain leverage for that keyword, and have a shot at a high ranking in spite of less content and/or few links. That’s why sites A and B outperformed C for months.
- Content, while important, can’t sustain your high ranking in the absence of visitor behavior that proves site relevance. That’s why site C continues to struggle: People just don’t spend much time there. That doesn’t mean you should stop writing. It means you should write well.
- Google’s been pondering user behavior as a ranking factor since at least 2005. Probably for longer. Their patent application proves that.
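If you want to watch these two numbers yourself, here’s a rough sketch of how I’d pull per-keyword bounce rate and time on site after search (TOSAS) out of raw visit records. The record format is made up for illustration; in practice you’d export something equivalent from your analytics package.

```python
# Hypothetical visit log: one row per search-referred visit.
# (keyword, pages_viewed, seconds_on_site) -- format is my own
# assumption, not any particular analytics export.
visits = [
    ("compare car performance", 1, 12),
    ("compare car performance", 4, 180),
    ("compare car performance", 3, 150),
]

def keyword_metrics(rows, keyword):
    """Return (bounce_rate, avg_seconds_on_site) for one keyword."""
    hits = [(pages, secs) for kw, pages, secs in rows if kw == keyword]
    # A bounce = a one-page visit.
    bounces = sum(1 for pages, _ in hits if pages == 1)
    bounce_rate = bounces / len(hits)
    tosas = sum(secs for _, secs in hits) / len(hits)
    return bounce_rate, tosas

rate, tosas = keyword_metrics(visits, "compare car performance")
print(f"bounce rate {rate:.0%}, TOSAS {tosas:.0f}s")  # bounce rate 33%, TOSAS 114s
```

Segment by keyword, not site-wide: a page can look sticky overall while bleeding visitors for the one phrase you actually care about.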
So, there you have it. My unscientific seat-of-the-pants test, plus patent analysis. Go take an aspirin and send me your comments in the morning.
PS: I wrote this after eating, in a 5 hour period: Smoked bacon with black-eyed peas, Wonton soup, edamame, shredded sesame beef and then a chocolate souffle. With 3 beers. All because my wife is a terrible influence. Attractive, smart and my better half, but a terrible, terrible influence. It’s amazing I’m still alive, much less writing. So be kind in your critique.