The short answer is yes. We have written extensively on the subject before: SEO Obviousness: Duplicate content sucks and SEO worst practices: The content duplication toilet bowl of death.
So we have done our due diligence warning you about the dangers of duplicate content. But there is another side to the story. We've seen clients fear the duplicate content plague so much that they go to extreme lengths to avoid it, and end up preventing their sites from performing optimally.
Let’s take Portent client www.RealTruck.com for example. This large e-commerce site in the competitive aftermarket truck accessories market had taken several measures to prevent their site from falling prey to the problem of duplicate content. They had implemented:
- noindex, nofollow meta robots tags on hundreds of thousands of pages, including filtered category pages
- nofollow attributes on footer links and other internal links
- canonical tags on several sub-category and product pages that pointed to the main category pages
- canonical tags on paginated category pages pointing to the first page, instead of rel=”prev” and rel=”next” pagination link elements
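Concretely, the restrictive setup looked something like this. (This is a hypothetical sketch; the URLs and pages are illustrative, not RealTruck's actual markup.)

```html
<!-- Filtered category pages were blocked from the index entirely: -->
<meta name="robots" content="noindex, nofollow">

<!-- Footer and other internal links carried nofollow: -->
<a href="/snow-plows/front-mount/" rel="nofollow">Front-Mount Snow Plows</a>

<!-- Sub-category and product pages were canonicalized up to the main category: -->
<link rel="canonical" href="https://www.realtruck.com/snow-plows/">

<!-- Page 2+ of a paginated category also pointed its canonical at page 1,
     instead of using rel="prev"/"next" link elements: -->
<link rel="canonical" href="https://www.realtruck.com/snow-plows/">
```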
The client was worried about pages competing with, or cannibalizing, each other. For example, a sub-category page such as Front-Mount Snow Plows would compete with the main Snow Plows category page. But in taking these measures, they simply weren't ranking for long-tail keyword terms like “front-mount snow plows,” or search results for those long-tail terms were landing on the wrong pages.
We recommended that they remove the noindex, nofollow attributes on filtered pages and internal links. By opening the floodgates to sub-category and filtered pages, the main category pages didn't lose out; rather, the site won additional keywords.
We also recommended removing the canonical tags on sub-category and product pages that pointed to main category pages, and implementing correct pagination for categories with multiple pages. With rel=”prev” and rel=”next” pagination, each page references its neighbors in the series, so search engines can treat the set as a sequence while every page stays indexable. Pointing every page's canonical tag at the first page, however, tells search engines that these pages are all the same and prevents all but the first page from ranking, which is usually not what you want.
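As a sketch of that pagination pattern, page 2 of a hypothetical three-page category would carry markup along these lines (the URLs are illustrative):

```html
<!-- On /snow-plows/?page=2 (example URL): a self-referencing canonical,
     plus links to the previous and next pages in the series -->
<link rel="canonical" href="https://www.realtruck.com/snow-plows/?page=2">
<link rel="prev" href="https://www.realtruck.com/snow-plows/">
<link rel="next" href="https://www.realtruck.com/snow-plows/?page=3">
```

The first page of the series would carry only a rel=”next” element, and the last page only a rel=”prev” element.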
Once these recommendations were implemented, their top 20 keyword rankings nearly doubled in less than a year. Organic unique visits increased 102% between August 2013 and August 2014, and continue to grow. Organic revenue increased 113% during that same time frame.
The Moral of the Story Is…
For RealTruck.com, much of their success comes from their dedication to search engine optimization. When we make a recommendation, their team wholeheartedly commits to implementing it. They've spent considerable time ensuring that their pages are unique and do not have duplicate content. Here are some of the additional tactics they've taken to make sure every page of their site provides unique value:
- All meta titles, meta descriptions, and H1 tags are unique. They use templates within their CMS to vary these across the site, which reduces the risk of duplication at scale.
- All pages, especially main category and important sub-category pages, have at least 200 words of unique content prominently featured above (and below) the fold.
- Boilerplate info was eliminated or consolidated.
- An in-house designer creates unique banner images for important pages.
- Paginated pages now use rel=”prev” and rel=”next” link elements. Here’s the Google article explaining how to do this.
- Any pages that were true duplicates of each other were canonicalized or 301 redirected to the preferred page.
These changes improved search engine optimization dramatically and, even more importantly, provided a better user experience. Each page is not a cookie-cutter duplicate of the last, but is crafted and personalized depending on the filters selected.
In case you didn’t already know, Portent won Best SEO Campaign of 2014 at the US Search Awards for this campaign.
How to Tell a Real Duplicate Page from a Fake
Duplicate content is a huge can of worms in itself, but the real problem here was an over-extension of the definition. There’s a difference between real duplicate content and fake duplicate content.
See if you can spot the duplicate page here:
I tried to make it painfully obvious, but just in case you didn’t notice, Page B is definitely the duplicate. Printer-friendly versions of pages, by definition, contain no content that is unique from the original page, so they should be noindexed or given a canonical tag pointing to the original page. Page C, however, even though it has a similar image and layout to Page A, has some unique content, so it should be allowed to stand on its own.
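In markup, either fix for a true duplicate like a printer-friendly page looks like this (the URLs here are hypothetical, for illustration):

```html
<!-- Option 1: on the printer-friendly page (e.g. /product/123/print),
     keep it out of the index entirely -->
<meta name="robots" content="noindex">

<!-- Option 2: on the printer-friendly page, point a canonical tag
     at the original page so all ranking signals consolidate there -->
<link rel="canonical" href="https://www.example.com/product/123">
```

Use one or the other, not both: noindex says "don't show this page at all," while the canonical tag says "credit everything to the original."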
Duplicate Content is Not the Devil
Many people worry about duplicate content, but Google has never said that it would directly cause a penalty.
Google doesn’t treat duplicate content as spam by default. You have to do spammy things with that duplicate content, like setting up an auto-generated site based on an RSS feed, for Google to take action against your site.
The problem with duplicate content is not that it will get you a slap on the wrist from Google; it’s that it confuses the hell out of the search engines and forces them to choose which one of your duplicate pages is the best to show in search results. And sometimes they’ll choose the wrong one.
If you need help strategizing how best to address duplicate content issues on your site, check out our SEO services.