We do a lot of A/B testing with Optimizely and Visual Website Optimizer. And SEO is our most important traffic driver. For most tests, we only change elements within the page, but recently we did a bigger test where we created a whole different page and had to redirect half of the traffic to the "B" variation. This variation was recognizable by an extra query string, ?v=2, in the URL. We discussed the implications for our SEO, because of course it's not good to have several URLs with the same content. But we thought Optimizely would handle this issue, as they also state in their knowledge base:

2. Specifying an alternative page to redirect a percentage of your traffic to. [...] is safe as well, although the paranoid tester may wish to take further precautionary measures. Here's a great blog post describing some of the steps you can take to make absolutely sure that your Google ranking remains unaffected: How to Combine Conversion Optimization with SEO - Part 2

But yeah, we should have been "paranoid"...

So, we launched this A/B test for our subject page: the page that lists courses for a certain subject, like assertiviteit (assertiveness) on our Dutch site. And of course, even though we never linked to the B variation ourselves, we suddenly saw it appearing in Google:

So, Google had executed Optimizely's JavaScript, and 50% of the crawler's visits were redirected to the B variation (which happened to be the "old" page, since we were pretty convinced we would end up implementing the new design).
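
To make this concrete: a client-side redirect test like this boils down to something along these lines (a simplified sketch, not Optimizely's actual snippet; the 50/50 bucketing and the ?v=2 URL are only there to illustrate why a JavaScript-executing crawler ends up on the B variation):

    <script>
      // Simplified illustration of a client-side redirect test:
      // roughly half of the visitors are sent to the B variation.
      if (window.location.search.indexOf('v=2') === -1 && Math.random() < 0.5) {
        window.location.replace(window.location.pathname + '?v=2');
      }
    </script>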

It has been known for a while that Google executes JavaScript to get a more accurate reading of a web page instead of just dumb-downloading the HTML. For instance, if you set a div with text to display:none with JavaScript, Google will ignore this text for ranking purposes. For a very interesting piece on how Google might use Chrome to really learn what you show your users, check out Googlebot is Chrome.
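
To illustrate that display:none point: a crawler that executes JavaScript can tell that a block like the following, although present in the HTML, is never shown to visitors (a made-up snippet, the id and the text are hypothetical):

    <div id="seo-text">Lots of keyword-rich text about assertiveness courses...</div>
    <script>
      // Hidden client-side; a JavaScript-executing crawler can see that
      // this text ends up invisible to visitors.
      document.getElementById('seo-text').style.display = 'none';
    </script>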

We should've just canonicalized them

The article Optimizely points to suggests blocking your test URLs in robots.txt so that search engines won't crawl them. This makes some sense, but it's a bad idea, since it would kill any link juice that might flow through those pages. So my advice would be to ignore that article, as it also suggests moving content "that doesn't convert, but helps your SEO" below the fold in your HTML, as if Google would not notice that. It also argues that Google will not find your URL if you use JavaScript redirects, which we've just seen is not true, and which contradicts its own advice to put the URL in robots.txt. Huh?
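
For the record, that robots.txt suggestion would come down to something like this for our ?v=2 variations (just an illustration of the approach I'm advising against; the * wildcard is the pattern syntax Google supports):

    User-agent: *
    Disallow: /*?v=2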

OK, so what we will do with new tests is simply add canonical URLs (read Google's explanation) to make sure search engines understand that the original page URL is still the one we want in the index. Of course you should also be careful not to change too much content on the page at once, since that might hurt your rankings.
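
Concretely, the B variation at ?v=2 would get a canonical link element in its <head>, pointing back at the original URL. Something like this (the domain and path are just an example, not our actual URLs):

    <link rel="canonical" href="https://www.example.nl/cursussen/assertiviteit" />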

Let me know if you have any questions, share similar experiences if you have them, and of course correct me if I'm wrong ;)