Published 25 September 2011
The headline reads ‘Trip Advisor faces ASA investigation after review complaints’ (Guardian).
The crux of the story is that a company is arguing that Trip Advisor can’t claim its reviews are ‘trusted’, ‘real’ and ‘honest’ because, it says, ‘TripAdvisor does not verify any of the 50m reviews on its network of websites… and therefore they are misleading and cannot be described as genuine.’ As a result, Trip Advisor may have to prove its reviews are real and fair, or remove these claims from its site.
Some might say that this is just a PR exercise by a minor player in the social media world (even after a fair amount of publicity, the firm still only has 686 Twitter followers – that’s 981 fewer than @BigAppleHotDogs, a hot dog cart in London!), but the spat does raise a bigger and more important question: does aggregation actually offer us any value?
The Guardian recently reported on a study by Kohei Kawamura which argues that there is a strong incentive for people to express extreme opinions in large-scale studies. When there are many reviewers, each individual review has only a small influence on the overall rating, so the temptation is to write an extreme review in order to be heard. Kawamura therefore suggests that, when there are a large number of reviews, we should discount the extreme ones. However, when a reviewer is asked to give a less elaborate ‘binary’ response (such as “yes” or “no”), the aggregated result is much more credible.
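To make Kawamura’s point concrete, here is a rough sketch in Python using made-up ratings (not data from the study): a plain average, an average that discounts the extreme scores, and a simple yes/no tally of the same reviews.

```python
from statistics import mean

# Invented ratings out of 5 for one hypothetical wine or hotel;
# a few extreme 1s and 5s sit alongside mostly middling scores.
ratings = [5, 5, 1, 3, 4, 3, 1, 5, 3, 4, 2, 3]

# Plain average: easily dragged about by the extreme scores.
plain_average = mean(ratings)

# Kawamura-style discounting (one simple interpretation): drop the
# extreme scores before averaging, i.e. a trimmed mean.
trimmed_average = mean(r for r in ratings if 1 < r < 5)

# Binary aggregation: collapse each review to "would buy/stay again?"
# Here we assume (arbitrarily) that 4 or 5 stars counts as a "yes".
yes_rate = sum(r >= 4 for r in ratings) / len(ratings)

print(f"plain average:   {plain_average:.2f} / 5")
print(f"trimmed average: {trimmed_average:.2f} / 5")
print(f"would-buy-again: {yes_rate:.0%}")
```

With these invented numbers the plain average sits a little higher than the trimmed one, and the yes/no tally tells a blunter but arguably harder-to-game story.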
In the wine world, companies are now courting customer opinion in a variety of ways. Taking Laithwaites.co.uk and nakedwines.com as examples, both ask for customer feedback on individual wines and both report back on the popularity of those wines. Laithwaites asks reviewers for a rating out of 5 stars and then averages these scores into an overall rating. Naked also asks customers for a score out of 5 on each review, but its main measure of whether a wine is popular (i.e. what percentage of customers would buy the wine again) is based on a simple “yes/no” option. So, well done Naked Wines: it appears that your binary method of aggregating customer feedback is the more credible one. 5 out of 5 (oops, there I go again, giving extreme points!).
Finally, back to Trip Advisor. Although I am not suggesting Mr Kawamura is in any way wrong, I suspect that extreme voting is also partly down to the motivation people have to leave reviews. I know that the few reviews I have left on Trip Advisor and FourSquare have been because I was particularly happy with the hotel/restaurant. Equally, I’m sure people are also highly motivated to leave reviews when they have been subjected to a very disappointing experience. How many people would make the effort to record an acceptable but average experience? What is the motivation to leave a review that awards 2.5 out of 5?
I like Trip Advisor. I use it to find hotels in places I don’t know very well. Combined with other feedback (I usually send out a few tweets to get my friends’ input), the reviews are helpful as you can weigh up the good against the bad. It’s not perfect, but it’s better than going in blind.
So, Trip Advisor, please feel free to contact me (DM or email) and I will happily verify that the 3 reviews on your site under my name are genuine reviews that I left as a result of genuine experiences – or should I say “real hotel reviews you can trust”. I’m sure others will happily do the same. However, you might want to think about changing your voting system to a simple binary system of “liked” and “did not like”. We wouldn’t want anyone claiming that your reviews aren’t accurate.
The thing that is going to take reviews to the next level is being able to filter down to your real friends’ views.
This would be really powerful on Trip Advisor – it is much more persuasive to hear that “your friend Dave rated this restaurant 5/5” than that “27% of 796876987 strangers did”.
Is yes or no enough? Wouldn’t an “OK” in the middle help?
“It’s OK” is average, but as rowbags says, recommendations from people you know and trust personally are way better…
@rowbags, you’re right, if you have a similar palate to your friends’. And this, in my opinion, is the problem with wine reviews in general: my taste is NOT like yours, or Dave’s, or Derek’s, or Tim’s, or … And that’s why we have an issue with Parker: one man’s taste dictating what so many wines SHOULD taste like – and wineries changing their wines to suit Parker’s taste. So, whether aggregated from binary options or otherwise, I have an issue; however, like most people, I still look at the points/percentages a wine receives.
Andrew, yes, I’ve thought that before too. Maybe a cancel button or a “neither liked nor disliked” option, just so it doesn’t keep asking you the question. I’ve bought some and given them away: how should I vote for those?
Nice piece Dave. I think there is a lot of merit in excluding the statistical outliers, and this seems to validate that approach.
Thanks Charles. I certainly don’t think voting is going to disappear soon; people love it too much.
Interesting article on tnooz (http://goo.gl/xxERF) by Rod Cuthbert: “Why the social travel model will never fully work”. He explains that taking advice on where to holiday from your friends is dangerous because they are likely to have different tastes from you. The same goes for wine. As Cuthbert says, you need to do some research yourself (which is also part of the fun).