
Scientific results evaluation: Systematic bias

08 Mar 2016  | Ransom Stephens


Science journalism is a hit these days, but it comes with plenty of sensationalism. Stories reporting that "everything you ever thought about xxx was wrong" generate the most page-clicks, so editors love them. Plus, journalists who genuinely love science often lack the research experience or the grasp of statistical analysis needed to guide their readers. That is, uncertainties in research results are almost never reported. If they were, you could gauge how many grains of salt to take with the claims.

In this three-part series, I'll give you the concepts you need to distinguish strong results from weak and to understand why some results seem more conclusive than they really are. With these tools, you can estimate the uncertainties yourself and decide how much to believe the next thing you read from "I love science" or whatever your friends share on Facebook and Twitter.

The spectrum of research results: inconclusive to conclusive
Scientific results cover a spectrum from inconclusive to conclusive. They range from "weak indications of" to "evidence for" to "discovery" or "confirmation." It's not a spectrum of bad science to good. As humanity writes our book of knowledge, inconclusive results are just as important as conclusive results—"the worst data are better than the best theory," said Antonio Ereditato—inconclusive results just aren't fascinating to casual observers. As for bad science, let's assume the goodwill of researchers and worry about fraud some other time.

Misunderstandings arise when conclusions are inflated. You see it all the time:

 • Your opinions conform to those of the people you hang out with.
 • Fish oil pills reduce the effects of schizophrenia.

It's not that the claims are wrong, just that the evidence reported isn't nearly as strong as the articles indicate.

Every measurement is uncertain
No measurement is exact. Experimental precision is limited by experimental uncertainty. Without a statement of that uncertainty, a measurement has no meaning. If I tell you that my random survey indicates that 100% of American football fans think the Raiders are going to the Super Bowl, you might ask how many people I polled, where I conducted the poll, and what question I asked (40,000, at the Oakland Coliseum, "Who's the best?"). You might reasonably conclude a bias in my measurement.

A group of unbiased observers suitable for polling.
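The Coliseum poll illustrates how the choice of sample, not the size of it, can dominate a result. Here's a minimal sketch in Python; the percentages (95% "Raiders" at the home stadium, 30% nationwide) are invented for illustration and don't come from the article:

```python
import random

random.seed(0)

# Hypothetical numbers for illustration: suppose 30% of fans nationwide
# would answer "Raiders," but 95% of fans at the home stadium would.
def poll(p_yes, n):
    """Simulate asking n randomly chosen people a yes/no question."""
    return sum(random.random() < p_yes for _ in range(n)) / n

at_coliseum = poll(0.95, 40000)  # sample drawn only at the Oakland Coliseum
nationwide = poll(0.30, 40000)   # a genuinely random national sample
```

Both polls use the same 40,000 respondents, yet they disagree wildly: the biased sample converges confidently to the wrong answer, and no amount of extra data at the Coliseum fixes it.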

Experimental uncertainties can be filed under two categories: statistical and systematic. Statistical uncertainties come from the amount of data that goes into the measurement. Because we have rigorous tools for analysing probability and statistics, statistical uncertainties are easy to find. We'll cover them in Part 2.
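As a preview of Part 2, statistical uncertainty on a poll shrinks with the square root of the sample size. A minimal sketch, assuming a true 50/50 split (an invented example, not from the article):

```python
import math
import random

random.seed(1)

p_true = 0.5  # assumed true proportion, for illustration only
for n in (100, 10_000, 1_000_000):
    # Simulated poll of n people
    estimate = sum(random.random() < p_true for _ in range(n)) / n
    # Binomial standard error: sqrt(p(1-p)/n)
    std_error = math.sqrt(p_true * (1 - p_true) / n)
    print(f"n={n:>9}: estimate={estimate:.4f}, "
          f"statistical uncertainty = ±{std_error:.4f}")
```

Each factor of 100 in sample size buys only one extra decimal digit of precision, which is why big claims from small samples deserve extra salt.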
