How to split the good from the bad in online reviews and ratings

A lot of consumers, when searching online for something to buy, will take a look at online reviews or ratings for a product. It seems like a great way to get an unfiltered view of quality, but research indicates most online reviews are too simple and may misguide consumers.

According to one United States survey, 78.5% of American consumers looked for information online about a product or service, and 34% had posted an online review. A global Nielsen survey found 70% of consumers trust online product reviews and use them in making decisions.

As a result, the average user rating of products has become a significant factor in driving sales across many product categories and industries. The proliferation of online reviews from many consumers sounds like a positive development for consumer welfare but some research shows otherwise.

User ratings and product quality

Consumers use online user ratings because they assume these provide a good indication of product or service quality. For example, you would expect a laptop with an average rating of four out of five stars to be objectively better than a laptop with an average rating of three out of five stars, 100% of the time.

To test this assumption, one research team put together an impressive dataset comprising 344,157 ratings for 1,272 products in 120 product categories. For each product, they obtained objective quality scores from the website Consumer Reports. They also collected data on prices, brand image measures, and two independent sources of resale values in the market for second-hand or used goods.

The researchers found that average user ratings correlated poorly with the scores from Consumer Reports. For example, when the difference in average user rating between pairs of products was larger than one star, the item with the higher user rating was rated more favourably by Consumer Reports only about two-thirds of the time.

In other words, if you were comparing a laptop with an average rating of four out of five stars, with another laptop with an average rating of three out of five stars, the first laptop would only be objectively better 65% (not 100%) of the time. This is a far cry from a sure difference in quality. Moreover, the average user ratings did not predict resale value in the used-product marketplace.

The reasons online ratings don’t reflect the real thing

There are several reasons why average user ratings may not predict objective quality measures. User reviews may draw on a broader range of criteria than Consumer Reports uses, such as subjective aspects of the user experience (aesthetics, popularity, emotional benefits).

Many reviews are also based on small samples. As any statistics teacher will tell you, all things being equal, the average user rating becomes more informative as sample size increases relative to variability. Indeed, in the online rating study, the correlation between average user rating and Consumer Reports scores was higher when the sample size was large. Unfortunately, average user ratings are often based on small samples with high variability.

Online reviews are based on a biased subset of those who actually purchased the product. In general, reviews are left by those who “brag” or “moan” about their product experience, often resulting in a bimodal distribution of ratings, with peaks at the extremes.

In such cases the average does not give a good indication of the true population average. For example, in one comprehensive dataset for a large private label retailer, the percentage of buyers who left a review was just 1.5%. This means that 98.5% of the people eligible to leave a review chose not to do so.
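The distortion from self-selected reviewers can be sketched with a small simulation. The population weights and the per-rating review probabilities below are invented for illustration (the only figure taken from the article is the roughly 1.5% overall review rate): buyers at the extremes review far more often than the satisfied middle, so the review average drifts away from the true average.

```python
import random

random.seed(0)

# Hypothetical population: 10,000 buyers with "true" satisfaction 1-5.
# Weights are illustrative: most buyers are moderately happy.
buyers = random.choices([1, 2, 3, 4, 5],
                        weights=[5, 10, 25, 40, 20], k=10_000)

def leaves_review(rating):
    # Assumed probabilities: the angry ("moan") and delighted ("brag")
    # review far more often than the satisfied middle. Chosen so that
    # roughly 1.5% of buyers leave a review, as in the retailer dataset.
    chance = {1: 0.10, 2: 0.02, 3: 0.005, 4: 0.005, 5: 0.03}
    return random.random() < chance[rating]

reviews = [r for r in buyers if leaves_review(r)]

print("true average:  ", round(sum(buyers) / len(buyers), 2))
print("review average:", round(sum(reviews) / len(reviews), 2))
print("review rate:   ", f"{len(reviews) / len(buyers):.1%}")
```

With these invented probabilities, the handful of reviewers is dominated by one-star and five-star ratings, and their average lands noticeably away from the population's true average, despite both coming from the same buyers.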

Many groups also now actively seek to manipulate average ratings. This can be done in the form of fake reviews.

For example, businesses (or their agents) may post fictitious favourable reviews for their own products and/or post fictitious negative reviews for the products of their competitors. According to one study, roughly 16% of restaurant reviews on the website Yelp were suspicious or fake.

Some websites try to mitigate such manipulation. For example, one of the Ivanka Trump collection’s shoes has an average rating of four and a half out of five stars despite hundreds of (presumably fake) one-star reviews.

What you can actually tell from online reviews

There is a way to use the information from reviews and ratings despite all of these potential pitfalls. First, look for products with a high average user rating, many reviews, and little variance in the rating scores. Beware of placing too much faith in average ratings based on few reviews with high variance.
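One way to apply this advice mechanically is to penalize an average by a rough margin of error, so that a high average backed by few, polarized reviews scores below a similar average backed by many consistent ones. This is a heuristic sketch, not an established formula, and the two example laptops are invented:

```python
import statistics

def conservative_score(ratings):
    """Average rating penalized for small samples and high variance.

    Heuristic: subtract an approximate 95% margin of error
    (1.96 * sample std dev / sqrt(n)) from the mean.
    """
    n = len(ratings)
    if n < 2:
        return 0.0  # too few reviews to say anything
    margin = 1.96 * statistics.stdev(ratings) / n ** 0.5
    return statistics.mean(ratings) - margin

laptop_a = [5, 5, 5, 1, 5]                 # 4.2 stars, 5 polarized reviews
laptop_b = [4, 4, 5, 4, 4, 4, 5, 4] * 25   # 4.25 stars, 200 consistent reviews

print(round(conservative_score(laptop_a), 2))  # 2.63
print(round(conservative_score(laptop_b), 2))  # 4.19
```

Although the two raw averages are almost identical, the penalized score strongly prefers the laptop with many low-variance reviews, which is exactly the behaviour the advice above recommends.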

You can also consider online reviews in light of additional sources that provide objective product evaluations from technical experts. Sources of this kind of information include Consumer Reports, Choice, Consumers Union, Which? and CNET.

Where possible, consider employing technology designed to help you navigate the bias in online reviews. Examples include Fakespot and ReviewMeta. ReviewMeta, for example, scans all reviews on a product’s online listing page and then provides an adjusted average rating. This adjusted rating accounts for signs of suspicious activity, such as a high proportion of reviews from users with unverified purchases.
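The general idea behind such adjusted ratings can be sketched as a weighted average. This is not ReviewMeta's actual algorithm; it is a minimal illustration in which reviews from unverified purchases are simply down-weighted:

```python
def adjusted_average(reviews, unverified_weight=0.2):
    """Weighted average rating that down-weights unverified purchases.

    Illustrative sketch only, not any real service's algorithm.
    Each review is a (stars, verified) pair; the 0.2 weight for
    unverified reviews is an arbitrary assumption.
    """
    total = weight_sum = 0.0
    for stars, verified in reviews:
        w = 1.0 if verified else unverified_weight
        total += w * stars
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# Three suspicious unverified 5-star reviews and two verified ones:
reviews = [(5, False), (5, False), (5, False), (3, True), (4, True)]
print(round(adjusted_average(reviews), 2))  # 3.85, versus a raw average of 4.4
```

Real tools weigh many more signals (reviewer history, timing bursts, phrasing), but the effect is the same: the adjusted figure pulls the average back toward the trustworthy reviews.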

So, the next time you’re evaluating products online, feel free to start with the average user rating, but be wary of making your final judgement based only on this cue.


Bad News For Online Businesses: Trump May Reverse Net Neutrality Rules

Under the previous administration, the net neutrality debate got heated, and the decision then to keep the rules could be flipped under Trump.

President Donald Trump’s new administration has reportedly signed off on a policy approach that could completely remake the Federal Communications Commission and reverse net neutrality rules for online businesses.

Last February, the FCC produced a landmark ruling that declared the Internet was a utility, and that access consequently could not be prioritized to favor certain web content. According to the federal agency, such a prioritization — say for those able to pay a higher fee — would break net neutrality rules.

The need for that decision arose after service providers allegedly sought to pursue so-called Internet “fast lanes” that would have seen huge companies like Comcast and Verizon charge a premium to content providers in exchange for faster distribution. But after months of testimony and arguments from both service providers and critics of the initiative — including small business groups — the FCC ultimately sided with users and against providers.

But according to details leaked to reporters from Donald Trump’s FCC transition team, critics now fear that continuation of net neutrality could be in jeopardy.

The Net Neutrality Debate Heats Back Up

Just days prior to inauguration day, Trump reportedly sat down with Republican lawmakers to discuss the FCC’s future. And an inside source claims the majority proposal produced from that meeting had concluded that “the historical silo-based approach to communications regulation is inapposite to the modern communications ecosystem”, and that the FCC’s functions “are largely duplicative of those of other agencies”.

The implication is that any proposed reshuffle, by diluting the FCC and stripping many of its powers, would inherently cancel out last year’s net neutrality rules. If that turns out to be the case, small businesses and content providers with low budgets could ultimately suffer comparatively in terms of the speed of service they receive and the distribution of their online content.

A firm decision has yet to be made on the issue, and only time will tell how the new administration chooses to address recent precedents as far as net neutrality is concerned. Yet bearing in mind Trump has tapped Republican FCC member Ajit Pai, a vocal critic of neutrality, to head the agency, commentators now fear a full reversal is all but inevitable.

Trump Photo via Shutterstock


Bad Signal? AT&T Now Has WiFi Calling

AT&T has rolled out WiFi calling for some of its iOS 9 plans that include the HD voice feature.

The company also notes that the feature works on several iPhone models (as long as iOS 9 is installed). The models that can use the WiFi feature are the iPhone 6s, iPhone 6s Plus, iPhone 6 and iPhone 6 Plus.

The AT&T WiFi calling option is automatically employed when traditional cellular network connectivity is poor.

MacRumors outlines a number of the WiFi setting’s shortcomings, as well as the fact that the feature was rolled out later than AT&T said it would be. The site also notes that users can turn on the WiFi feature manually by toggling within the settings app. (Though some forum visitors took issue with that.)

MacRumors reports: “AT&T promised to launch WiFi calling alongside iOS 9, but … announced the feature was delayed due to its inability to get an FCC waiver that would temporarily allow the carrier to forgo offering support options for deaf and hard-of-hearing customers.”

Manually setting the AT&T WiFi calling feature doesn’t seem to work for everyone. One commenter laments: “You don’t get to choose. Apple says that ‘when cellular connectivity is poor’ but they don’t define that.”

Some comments were directed at the long-distance surcharge, which many deemed unfair.

WiFi voice calling is free within the United States, Puerto Rico, and the Virgin Islands. Long distance global voice calls will be charged standard long distance rates, AT&T says.

Some commenters skeptical of the feature are unclear on AT&T’s wording.

On another MacRumors thread, one person notes that some users lack clarity on exactly how the new AT&T WiFi calling feature works, adding: “Well, it’s kind of disappointing though in that WiFi calling only works when ‘cellular connectivity is poor’ — which means what exactly? 1 bar? 2 bars?”

Several noted that the feature isn’t even available on their cellphones when it should be. One person, echoing others, noted: “Same issue for me. The AT&T WiFi calling option just isn’t there. I looked on my wife’s iPhone 6 and found it on hers. Weird thing is, when I tried to turn it on it said that she isn’t authorized and has to call AT&T.”

Image: AT&T


Why Hiring Non-Academics to Teach Entrepreneurship is a Bad Idea

Entrepreneurship education is often left to non-academic types. However, most people don't learn well from examples in the absence of a conceptual framework.

These days, hiring non-academic instructors to teach entrepreneurship in graduate and undergraduate programs is a common strategy of university deans. When research faculty fail to get tenure or retire, they are often replaced with people who don’t, and can’t, do research.

This is a big strategic mistake. It contradicts much of what we know about how people learn, leads to negative selection and misses a huge pedagogical opportunity.

But before I make clear why this approach is fundamentally flawed, let me explain why it’s happening. Non-academics generally teach double the number of classes of research faculty (because they are not expected to produce new knowledge) and cost about half of what research faculty cost. Twice the classes at half the cost means class offerings that cost about one-quarter as much as those of research faculty.

How People Learn

The first problem with the “replace-entrepreneurship-researchers-with-non-academics” approach is that it fails to take into account what decades of research has demonstrated about how people learn. Most people do not learn well by being shown examples in the absence of first being exposed to a conceptual framework. Conceptual frameworks — theories for why and how — provide a mental scaffold for the more fine-grained knowledge of specific contexts.

Because research faculty produce and test theories, they generally offer students these frameworks. By contrast, non-academics, who have not learned how to produce new knowledge, tend to tell “war stories.” Those war stories are often wildly entertaining, but they are generally not very good pedagogy. Studies show that student learning is much higher when research faculty teach students than when non-academics do.

Negative Selection

Most successful people are pretty busy. People who have built successful companies or who have backed those companies financially usually face a pretty high opportunity cost for spending time grading tests, talking to undergraduates about why their “girlfriends ate their homework” or explaining discounted cash flows for the fourth time.

This high opportunity cost means that the people universities can attract to teach six to eight entrepreneurship courses a year at a relatively low salary are generally not the people with the greatest practical expertise in entrepreneurship.

By contrast, teaching in universities appeals to people who want to produce new knowledge, and who have learned the process of producing that knowledge by getting a PhD. I can tell you from experience that such people do not like spending time grading tests, talking to undergraduates about why their “girlfriends ate their homework” or explaining discounted cash flows for the fourth time. We do it because that gives us the opportunity to produce new knowledge. As a result, universities tend to attract the best research-types and the worst non-academic types in entrepreneurship.

Missed Opportunities in Entrepreneurship Education

Hiring non-academics to teach entrepreneurship misses a huge pedagogical opportunity. Technological advance has made it possible for instructors to bring practitioner expertise into the classroom at virtually no cost by using video conferencing technology to connect experts to students in wired classrooms. Combining those practitioner examples with scholarly frameworks that have been developed and honed by the instructor’s research — something that research faculty can provide but non-academic instructors cannot — is very powerful.

Moreover, using practitioners as sources of information, rather than as instructors, provides students with the benefit of specialization. If multiple practitioners speak to a class, each focusing on his or her area of expertise, students receive a level of practitioner knowledge not possible with non-academic instructors. No non-academic instructor teaching entrepreneurship at my university (or any other one that I know of) has a knowledge of how an accelerator works equal to that of Paul Buchheit of Y-Combinator and a knowledge of equity crowdfunding equal to that of Ryan Feit of SeedInvest, both of whom speak to my entrepreneurial finance class about their respective topics.

Scholarly research has taught us that being the low cost producer isn’t always the best strategy, particularly when you are targeting high-end customers. Many university administrators appear to have missed this lesson. Maybe they should sit in on the entrepreneurship classes taught by their research faculty before they replace them all.

Professor Photo via Shutterstock