Russia Blames a Bad Sensor for Its Failed Rocket Launch

Russian officials held a press conference to reveal that they have determined what caused last month's Soyuz mid-flight failure. The culprit: a damaged sensor on one of the rocket's four boosters, responsible for signaling stage separation. With the investigation complete, the officials announced that they will move up the date of the next crew launch to the International Space Station.

The investigation has captured international attention because the Soyuz rocket is currently the only vehicle capable of transporting people to and from the ISS. Russian space agency officials confirmed that the faulty sensor, designed to signal stage separation, had caused one of the boosters to improperly separate. This led the first and second stages of the rocket to collide, which then triggered the vehicle’s emergency abort system.

“The launch failure was caused by an abnormal separation of one of the strap-on boosters that hit with its nose the core stage in the fuel tank area,” said Oleg Skorobogatov, deputy director of the Central Research Institute of Machine-Building who led the investigation, in a statement.

Video of the incident, released today by the space agency, shows the accident from the rocket's point of view. In it, the booster in question strikes the core of the rocket, causing a significant jolt that triggered the abort. According to officials, the faulty sensor's rod was bent slightly during assembly of the rocket. To check for any handling errors that might have affected other rockets, Russian officials said that all assembled Soyuz rockets, along with their attached booster packs, will be taken apart and put together anew.


How to split the good from the bad in online reviews and ratings

A lot of consumers, when searching online for something to buy, will take a look at an online review or rating for a product. It seems like a great way to get an unfiltered view of quality, but research indicates most online reviews are too simple and may misguide consumers.

According to one United States survey, 78.5% of American consumers looked for information online about a product or service, and 34% had posted an online review. A global Nielsen survey found 70% of consumers trust online product reviews and use them in making decisions.

As a result, the average user rating of products has become a significant factor in driving sales across many product categories and industries. The proliferation of online reviews from many consumers sounds like a positive development for consumer welfare but some research shows otherwise.

User ratings and product quality

Consumers use online user ratings because they assume these provide a good indication of product or service quality. For example, you would expect a laptop with an average rating of four out of five stars to be objectively better than a laptop with an average rating of three out of five stars 100% of the time.

In order to test this assumption, one research team put together an impressive dataset comprising 344,157 ratings for 1,272 products in 120 product categories. For each product, they obtained objective quality scores from the website Consumer Reports. They also collected data on prices, brand image measures, and two independent sources of resale values in the market for second-hand goods.

The researchers found that average user ratings correlated poorly with the scores from Consumer Reports. For example, when the difference in average user rating between pairs of products was larger than one star, the item with the higher user rating was rated more favourably by Consumer Reports only about two-thirds of the time.

In other words, if you were comparing a laptop with an average rating of four out of five stars with another rated three out of five, the first laptop would be objectively better only 65% (not 100%) of the time. This is a far cry from a sure difference in quality. Moreover, the average user ratings did not predict resale value in the used-product marketplace.
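The pairwise check behind that figure can be reproduced on any small dataset: for each pair of products in a category whose rating gap exceeds one star, ask whether the one with the higher user rating also has the higher expert score. The sketch below uses entirely invented ratings and expert scores for illustration:

```python
from itertools import combinations

# Hypothetical (user_rating, expert_score) pairs for products in one category.
products = [
    (4.6, 70), (4.4, 82), (4.1, 74), (3.9, 80), (3.2, 85), (2.8, 61),
]

pairs = agreements = 0
for (u1, e1), (u2, e2) in combinations(products, 2):
    if abs(u1 - u2) > 1.0:              # only pairs with a gap above one star
        pairs += 1
        if (u1 - u2) * (e1 - e2) > 0:   # same sign: ratings and scores agree
            agreements += 1

print(agreements, "of", pairs, "pairs agree")  # prints: 4 of 6 pairs agree
```

On this made-up data, the higher-rated product is the objectively better one in only four of six eligible pairs, roughly the two-thirds agreement the study reported.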

The reasons online ratings don’t reflect the real thing

There are several reasons why average user ratings may not predict objective quality measures. User reviews may include a broader range of criteria than Consumer Reports uses, such as subjective aspects of the user experience (like aesthetics, popularity, emotional benefits).

Many reviews are also based on small samples. As any statistics teacher will tell you, all else being equal, the average user rating becomes more informative as the sample size grows relative to the variability in the ratings. Indeed, in the online rating study, the correlation between average user rating and Consumer Reports scores was higher when the sample size was large. Unfortunately, average user ratings are often based on small samples with high variability.
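The sample-size point can be illustrated with a quick simulation. Assuming, purely for illustration, that ratings are drawn from a fixed underlying distribution with a true mean of 4.0 stars, averages of small samples scatter far more widely than averages of large ones:

```python
import random
import statistics

random.seed(0)

# Hypothetical rating distribution for one product: true mean is 4.0 stars.
population = [5] * 45 + [4] * 30 + [3] * 10 + [2] * 10 + [1] * 5

def average_rating(n_reviews):
    """Average star rating of a random sample of n_reviews ratings."""
    return statistics.mean(random.choices(population, k=n_reviews))

# Repeat the experiment many times at two sample sizes.
small = [average_rating(5) for _ in range(1000)]
large = [average_rating(500) for _ in range(1000)]

print(statistics.stdev(small))  # wide spread: 5-review averages are noisy
print(statistics.stdev(large))  # far tighter: 500-review averages settle down
```

The spread of the sample averages shrinks roughly with the square root of the number of reviews, which is why an average over five reviews tells you much less than the same average over five hundred.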

Online reviews are based on a biased subset of those who actually purchased the product. In general, reviews are left by those who "brag" or "moan" about their product experience, often resulting in a bimodal distribution of ratings.

This is where the average does not give a good indication of the true population average. For example, in one comprehensive dataset for a large private label retailer, the percentage of buyers who left a review was just 1.5%. This means that 98.5% of the people eligible to leave a review chose not to do so.
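The "brag or moan" effect can be sketched with a toy simulation: suppose buyer satisfaction is fairly moderate on average, but only the delighted and the angry bother to write reviews. All the numbers below are invented for illustration:

```python
import random
import statistics

random.seed(1)

# Hypothetical buyers whose true satisfaction (1-5 stars) averages 3.4.
buyers = random.choices([1, 2, 3, 4, 5], weights=[5, 15, 35, 25, 20], k=10000)

def leaves_review(score):
    """Extreme experiences are far more likely to be written up."""
    chance = {1: 0.30, 2: 0.05, 3: 0.01, 4: 0.05, 5: 0.30}[score]
    return random.random() < chance

reviews = [s for s in buyers if leaves_review(s)]

print(statistics.mean(buyers))     # true average satisfaction, about 3.4
print(statistics.mean(reviews))    # visible average, distorted by who reviewed
print(len(reviews) / len(buyers))  # only a small fraction ever review
```

In this toy version, the visible average drifts well away from the true population average, and the ratings that do appear cluster at one and five stars, exactly the bimodal pattern described above.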

Many groups also now actively seek to manipulate average ratings. This can be done in the form of fake reviews.

For example, businesses (or their agents) may post fictitious favourable reviews for their own products and/or post fictitious negative reviews for the products of their competitors. According to one study, roughly 16% of restaurant reviews on the website Yelp were suspicious or fake.

Some websites try to mitigate such manipulation. For example, one of the Ivanka Trump collection's shoes has an average rating of four and a half out of five stars despite hundreds of (presumably fake) one-star reviews.

What you can actually tell from online reviews

There is a way to use the information from reviews and ratings despite all of these potential pitfalls. First, look for products with a high average user rating, many reviews, and little variance in the rating scores. Beware of placing too much faith in average ratings based on few reviews with high variance.
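One simple way to combine those three signals into a single number (not the method used by any particular site, and with made-up products) is to shrink each average toward a neutral score when reviews are few, then subtract a standard error to penalise noisy, polarised ratings:

```python
import math
import statistics

# Hypothetical products and their star ratings.
products = {
    "Laptop A": [5, 5, 4, 5, 4, 5, 4, 4, 5, 5, 4, 5],  # many, consistent
    "Laptop B": [5, 5, 5],                              # very few reviews
    "Laptop C": [5, 1, 5, 1, 5, 1, 5, 1, 5, 5, 1, 5],  # polarised ratings
}

def cautious_score(ratings):
    """High average + many reviews + low variance scores best."""
    # Shrink toward a neutral 3 stars, weighted like 10 extra pseudo-reviews,
    # so a tiny sample cannot dominate.
    shrunk = (sum(ratings) + 3.0 * 10) / (len(ratings) + 10)
    if len(ratings) < 2:
        return shrunk
    # Penalise high variance relative to sample size.
    stderr = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return shrunk - stderr

for name, ratings in sorted(products.items(),
                            key=lambda kv: -cautious_score(kv[1])):
    print(name, round(statistics.mean(ratings), 2),
          round(cautious_score(ratings), 2))
```

With these invented numbers, the well-reviewed, consistent Laptop A ranks first even though Laptop B has a perfect raw average, and the polarised Laptop C falls to the bottom.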

You can also consider online reviews in light of additional sources that provide objective product evaluations from technical experts. Sources of this kind of information include Consumer Reports, Choice, Consumers Union, Which? and CNET.

Where possible, you can consider employing technology designed to help you navigate the bias in online reviews. Examples include Fakespot and ReviewMeta. For example, ReviewMeta scans all reviews from a product’s online listing page, and then provides an adjusted average rating. This adjusted rating accounts for all sorts of suspicious activities such as a high proportion of reviews from users with unverified purchases.
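ReviewMeta's exact method is proprietary, but the general idea of an adjusted average can be sketched by down-weighting reviews that trip simple red flags. The flags, weights, and review data below are all invented for illustration:

```python
import statistics

# Hypothetical reviews: (stars, verified_purchase, reviewer_review_count).
reviews = [
    (5, True, 12), (4, True, 3), (5, False, 1), (5, False, 1),
    (1, True, 8), (5, False, 2), (3, True, 20), (5, False, 1),
]

def weight(stars, verified, reviewer_count):
    """Assign less weight to reviews that look suspicious."""
    w = 1.0
    if not verified:
        w *= 0.3   # unverified purchase
    if reviewer_count <= 1:
        w *= 0.5   # brand-new account with a single review
    return w

raw = statistics.mean(r[0] for r in reviews)
adjusted = (sum(weight(*r) * r[0] for r in reviews)
            / sum(weight(*r) for r in reviews))

print(round(raw, 2), round(adjusted, 2))
```

Here the raw average of 4.13 stars drops to about 3.53 once the cluster of unverified five-star reviews from single-review accounts is discounted, which is the kind of gap such tools surface.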

So, the next time you’re evaluating products online, feel free to start with the average user rating, but be wary of making your final judgement based only on this cue.


Bad News For Online Businesses: Trump May Reverse Net Neutrality Rules

Under the previous administration, the net neutrality debate got heated, and the decision then to keep the rules could now be flipped. President Donald Trump's new administration has reportedly signed off on a policy approach that could completely remake the Federal Communications Commission and reverse net neutrality rules for online businesses.

Last February, the FCC produced a landmark ruling that declared the Internet was a utility, and that access consequently could not be prioritized to favor certain web content. According to the federal agency, such a prioritization — say for those able to pay a higher fee — would break net neutrality rules.

The need for that decision arose after service providers allegedly sought to pursue so-called Internet “fast lanes” that would have seen huge companies like Comcast and Verizon charge a premium to content providers in exchange for faster distribution. But after months of testimony and arguments from both service providers and critics of the initiative — including small business groups — the FCC ultimately sided with users and against providers.

But according to details leaked to reporters from Donald Trump’s FCC transition team, critics now fear that continuation of net neutrality could be in jeopardy.

The Net Neutrality Debate Heats Back Up

Just days prior to inauguration day, Trump reportedly sat down with Republican lawmakers to discuss the FCC’s future. And an inside source claims the majority proposal produced from that meeting had concluded that “the historical silo-based approach to communications regulation is inapposite to the modern communications ecosystem”, and that the FCC’s functions “are largely duplicative of those of other agencies”.

In diluting the power of the FCC and removing many of its powers, the implication is now that any proposed reshuffle would inherently cancel out last year’s Internet neutrality rules. If that turns out to be the case, small businesses and content providers with low budgets could ultimately suffer comparatively in terms of the speed of service they receive and distribution of their online content.

A firm decision has yet to be made on the issue, and only time will tell how the new administration chooses to address recent net neutrality precedents. Yet bearing in mind that Trump has tapped Republican FCC member Ajit Pai, a vocal critic of net neutrality, to head the agency, commentators now fear a full reversal is all but inevitable.



Bad Signal? AT&T Now Has WiFi Calling

AT&T has rolled out WiFi calling for some of its plans that include the HD Voice feature on devices running iOS 9.

The company also notes that the feature works on several iPhone models (as long as iOS 9 is installed as well). Models that can use the WiFi feature are the iPhone 6s, iPhone 6s Plus, iPhone 6 and iPhone 6 Plus.

The AT&T WiFi calling option is automatically employed when traditional cellular network connectivity is poor.

MacRumors outlines a number of the WiFi setting’s shortcomings, as well as the fact that the feature was rolled out later than AT&T said it would be. The site also notes that users can turn on the WiFi feature manually by toggling within the settings app. (Though some forum visitors took issue with that.)

MacRumors reports: “AT&T promised to launch WiFi calling alongside iOS 9, but … announced the feature was delayed due to its inability to get an FCC waiver that would temporarily allow the carrier to forgo offering support options for deaf and hard-of-hearing customers.”

Manually enabling the AT&T WiFi calling feature doesn't seem to work for everyone. One commenter laments: "You don't get to choose. Apple says that 'when cellular connectivity is poor' but they don't define that."

Some comments were directed at the long-distance surcharge, which many deemed unfair.

WiFi voice calling is free within the United States, Puerto Rico, and the Virgin Islands. Long distance global voice calls will be charged standard long distance rates, AT&T says.

Some comments skeptical of the feature suggest that AT&T's wording is unclear.

On another MacRumors thread, one person notes that some users lack clarity on exactly how the new AT&T WiFi calling feature works, adding: "Well, it's kind of disappointing though in that WiFi calling only works when 'cellular connectivity is poor' — which means what exactly? 1 bar? 2 bars?"

Several noted that the feature isn’t even available on their cellphones when it should be. One person, echoing others, noted: “Same issue for me. The AT&T WiFi calling option just isn’t there. I looked on my wife’s iPhone 6 and found it on hers. Weird thing is, when I tried to turn it on it said that she isn’t authorized and has to call AT&T.”
