Published 25 September 2011
The headline reads ‘Trip Advisor faces ASA investigation after review complaints’ (Guardian).
The crux of the story is that a company is arguing that Trip Advisor can’t claim that its reviews are ‘trusted’, ‘real’ and ‘honest’ because, it says, ‘TripAdvisor does not verify any of the 50m reviews on its network of websites… and therefore they are misleading and cannot be described as genuine.’ As a result, Trip Advisor may have to prove its reviews are real and fair, or remove these claims from its site.
Some might say that this is just a PR exercise by a minor player in the social media world (even after a fair amount of publicity, the firm still only has 686 Twitter followers – that’s 981 fewer than @BigAppleHotDogs, a hot dog cart in London!), but the spat does raise a bigger and more important question: does aggregation actually offer us any value?
The Guardian recently reported on a study by Kohei Kawamura which claims that there is a strong incentive for people to express extreme opinions in large-scale review systems. When there are many reviewers, each individual review has only a small influence on the overall rating, so the temptation is to write an extreme review in order to be heard. Kawamura therefore suggests that, when there are a large number of reviews, we should discount the extreme ones. When reviewers are instead asked for a less elaborate ‘binary’ response (such as “yes” or “no”), the aggregated result is much more credible.
In the wine world, companies are now courting customer opinion in a variety of ways. Looking, for example, at Laithwaites.co.uk and nakedwines.com, we can see that both ask for customer feedback on individual wines and both report back on the popularity of wines. Laithwaites asks reviewers for an ‘overall rating’ out of 5 stars and then aggregates these scores into an overall rating. Although Naked also asks customers for a rating out of 5 on each review, its main statistic on whether a wine is popular (i.e. what percentage of customers would buy the wine again) is based on a simple “yes/no” option. So, well done Naked Wines: by Kawamura’s reasoning, your binary method of aggregating customer feedback is the more credible one. 5 out of 5 (oops, there I go again, giving extreme points!).
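Purely as an illustration – the numbers and function names below are my own invention, not taken from Kawamura’s paper or either retailer’s site – here is a toy sketch of the three aggregation styles discussed above: a raw star average, a trimmed average that discounts extreme reviews, and a binary “would you buy it again?” percentage.

```python
def star_average(ratings):
    """Plain mean of 1-5 star ratings (a Laithwaites-style aggregate)."""
    return sum(ratings) / len(ratings)

def trimmed_average(ratings, trim=1):
    """Mean after discarding the `trim` lowest and highest ratings -
    roughly the 'discount the extreme reviews' idea."""
    kept = sorted(ratings)[trim:-trim]
    return sum(kept) / len(kept)

def would_buy_again(answers):
    """Percentage of 'yes' answers (a Naked Wines-style binary aggregate)."""
    return 100 * sum(answers) / len(answers)

# Ten hypothetical reviewers who mostly think a wine is decent (3-4 stars),
# plus two who exaggerate to 1 and 5 stars in order to be heard.
stars = [3, 4, 3, 4, 3, 4, 3, 4, 1, 5]
binary = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]  # 1 = "yes, would buy again"

print(star_average(stars))      # 3.4
print(trimmed_average(stars))   # 3.5
print(would_buy_again(binary))  # 90.0
```

The binary question is harder to game: one grumpy exaggerator moves the star average noticeably, but can only ever be a single “no” out of ten in the buy-again figure.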
Finally, back to Trip Advisor. Although I am not suggesting Mr Kawamura is in any way wrong, I suspect that extreme voting is also partly down to the motivation people have to leave reviews in the first place. I know that the few reviews I have left on Trip Advisor and Foursquare have been because I was particularly happy with the hotel/restaurant. Equally, I’m sure people are highly motivated to leave reviews when they have been subjected to a very disappointing experience. How many people would make the effort to record an acceptable but average experience? What is the motivation to leave a review that awards 2.5 out of 5?
I like Trip Advisor. I use it to find hotels in places I don’t know very well. Combined with other feedback (I usually send out a few tweets to get my friends’ input), the reviews are helpful as you can weigh up the good against the bad. It’s not perfect, but it’s better than going in blind.
So, Trip Advisor, please feel free to contact me (DM or email) and I will happily verify that the 3 reviews on your site under my name are genuine reviews that I left as a result of genuine experiences – or should I say “real hotel reviews you can trust”. I’m sure others will happily do the same. However, you might want to think about changing your voting system to a simple binary system of “liked” and “did not like”. We wouldn’t want anyone claiming that your reviews aren’t accurate.