A few weeks back, Booking.com left some of its customers puzzled by advertising the same New York hotel as both a three-star and four-star establishment. When quizzed, Booking.com explained the difference was part of a test to see if the number of stars above the door affected bookings and guest experiences.
The experiment drew attention. For many, that was because the higher star rating appeared to carry a higher price, despite being for rooms at the same property. (The hotel said this was a mistake that was later rectified.)
But for us it was another reminder of why Booking.com, Expedia and the other OTAs consistently do so well online; they are constantly testing ideas that might improve user experience, drive sales or both. For them, it’s clearly a winning approach, even if Booking.com got some negative coverage in this particular case.
If you want to perform strongly online, testing ideas on your website before full implementation is considered best practice and a good way to keep up with the OTAs. It’s something we try to do with our clients to make sure the changes they want to make are truly the best course of action.
Of course Booking.com and the like have access to huge amounts of data, which makes testing that much easier. Conducting such studies is not an option for every hotel, simply because tests like these require a certain amount of traffic to be meaningful.
But for those in a position to give it a go, it’s essential to set up tests in the right way. Here are a few of the rules we live by when it comes to the traditional A/B test — comparing two versions of a web page to see which one performs better.
Make sure your sample size is big enough
Test results must reach statistical significance. In ours, we need to see a decent number of users, sessions and conversions before we can draw reliable conclusions. If your website doesn’t get a huge amount of traffic, that might mean running the test for longer.
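To make "big enough" concrete, here is a rough Python sketch of the standard sample-size formula for a two-proportion test. The 2% booking conversion rate and the 20% relative lift are hypothetical numbers for illustration, not figures from our tests:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_baseline, min_detectable_lift,
                            alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect a relative lift in
    conversion rate, using a two-sided two-proportion z-test at the
    given significance level (alpha) and statistical power."""
    p1 = p_baseline
    p2 = p_baseline * (1 + min_detectable_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for power=0.80
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: a 2% conversion rate, aiming to detect a 20% lift
n = sample_size_per_variant(0.02, 0.20)  # roughly 21,000 visitors per variant
```

Note how quickly the requirement grows as the effect you want to detect shrinks — halving the detectable lift roughly quadruples the traffic needed, which is why low-traffic sites must run tests for longer.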
Don’t interfere with the test
An A/B test is all about comparing two variants. If any other changes are introduced mid-test, they will bias the results. It sounds obvious, but it’s easily overlooked.
Aim for 100% of traffic
The greater the share of traffic, the more informative the test. If you can, conduct an A/B test on all the traffic coming to your website.
Run a test for a full booking cycle
You’ll only know whether what you’re testing affects bookings if you give your customers enough time to book! From Google Analytics we know that, for most hotels, over 95% of guests take fewer than 21 days from their first website visit to completing their booking. So a test should run for at least 28 days — 21 days plus a seven-day buffer — to cover a full booking cycle. Be aware, too, that a special event such as a public holiday falling within the cycle can invalidate the test.
Running a test in peak season is unlikely to produce typical results.
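Once a test has run for a full booking cycle, the final step is checking whether the difference you observed is real or just noise. A minimal sketch of that check (a pooled two-proportion z-test, with hypothetical session and booking counts):

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, via a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # conversion rate if A and B were identical
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical 28-day test: 420 bookings from 21,000 sessions on variant A,
# 505 bookings from 21,000 sessions on variant B
p = ab_test_p_value(420, 21000, 505, 21000)
significant = p < 0.05  # True here: the lift is unlikely to be chance
```

If the p-value comes in above your threshold, the honest conclusion is "no detectable difference yet" — not that the losing variant is worse.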
We use these as guidelines, but we know that hotels, their websites and the things they want to test tend to be very different. Like the OTAs, we put a huge amount of thought into each of our experiments to make sure we’re going to get meaningful results and because we believe they could ultimately make a massive difference.
If you’d like to learn more about our approach, please get in touch.
Bonus — the Direct Booking Summit
Join hundreds of hoteliers at our Direct Booking Summit series in New York and Barcelona. The event will mark the opening of a new chapter of the Direct Booking Movement: ensuring that true hospitality begins on the website. Check our agenda or register for a ticket below.