The owner of a hotel in the U.K. has started defamation proceedings against TripAdvisor, asking the site to disclose how it determined that the hotel’s listing warranted a “red flag.” TripAdvisor uses these flags to mark hotels it believes have interfered with traveler reviews.
This is not the first complaint the site has received in the country, according to Brand Republic, which describes the proceedings in this article. In August, the Advertising Standards Authority confirmed to the publication that it was investigating TripAdvisor following a complaint that the site hosted a high level of defamatory comments.
This episode is illustrative, of course, of the legal risks involved in global marketing and e-commerce — defamation and freedom of speech are legislated quite differently in the U.K. and U.S.
The suit is also emblematic of how seriously hotels and the travel industry in general take user-generated reviews.
New Algorithms Might Change This
TripAdvisor is not unaware of this issue. At the same time, it has good reason to guard against aggressive hotels that may try to game the review system. A study [PDF] recently published by a team of Cornell University researchers might introduce a measure of transparency: they created an algorithm for detecting fake reviews.
According to The New York Times, the researchers have been approached by companies including Amazon, Hilton, and TripAdvisor.
Cornell University researcher Jeff Hancock gave All Things Considered some tips to look for when trying to determine if a review is fake or not.
Fake reviews are more likely to include references to oneself and to offer few specifics, he said. In an example he provided to NPR, the study determined that an extensive review beginning “I recently stayed at the Hyatt Regency in Chicago for business, but extended my stay through the weekend because i loved it!” was fake, while a shorter one beginning “Staff were friendly, room was well kept..” was real.
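To make those cues concrete, here is a toy sketch of the idea: flag a review when self-references outnumber concrete hotel details. The word lists and the threshold are purely illustrative assumptions; the Cornell study actually trained a statistical classifier on the full text of reviews, not a simple rule like this.

```python
# Illustrative sketch only: hypothetical heuristic inspired by the cues
# Hancock describes. NOT the Cornell team's actual algorithm, which was a
# trained classifier over text features.

FIRST_PERSON = {"i", "me", "my", "myself", "we", "our"}   # self-reference cues
SPECIFICS = {"staff", "room", "bed", "bathroom",          # concrete-detail cues
             "location", "breakfast", "price"}

def looks_fake(review: str) -> bool:
    """Flag a review as suspicious when self-references outnumber
    concrete hotel details -- a toy stand-in for a real classifier."""
    words = [w.strip(".,!?").lower() for w in review.split()]
    self_refs = sum(w in FIRST_PERSON for w in words)
    details = sum(w in SPECIFICS for w in words)
    return self_refs > details

print(looks_fake("I recently stayed here for business, but I extended "
                 "my stay because I loved it!"))                    # True
print(looks_fake("Staff were friendly, room was well kept."))       # False
```

On the two NPR examples above, the self-heavy review trips the rule while the detail-heavy one passes, which matches the pattern Hancock describes, though a real system would need far richer signals.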