Friday, March 6, 2009

Misunderstanding science

I just received a publication for coaches with an article on "how to read a research paper." Although well-intentioned and containing some good points, the article also shows how little people know about science and how distorted their understanding really is. I have often remarked that even practicing scientists (or I should say card-carrying scientists, if that is what one can call PhDs) promote ideas that are very unscientific indeed. Their track record is often as poor as that of the general public.

There are erroneous statements throughout the article I mentioned, but here is the crème de la crème. First, about "proving," the author states, "science does not really 'prove' anything: what science does is to find the truth." As I have remarked many times before, equating science with truth is pervasive in Western societies, and it is wrong. What science really does is build models and prove that those models are internally consistent and approximate the truth within certain boundaries.

Newtonian mechanics is a model that works very well under the normal circumstances of everyday life. Within those boundaries, it approximates the "truth" to within several decimal places. It no longer functions adequately when speeds approach the speed of light. Similarly, the Ptolemaic model, with the earth at the center, is useful for everyday navigation on the high seas and for those who find themselves with dead batteries in the GPS. It is sorely inadequate for space applications.
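To put numbers on "several decimal places," here is a minimal sketch in Python (the unit mass and the particular speeds are my own choices for illustration) comparing Newtonian kinetic energy with the relativistic value:

    import math

    C = 299_792_458.0  # speed of light, m/s

    def newtonian_ke(m, v):
        # Classical kinetic energy: (1/2) m v^2
        return 0.5 * m * v ** 2

    def relativistic_ke(m, v):
        # Relativistic kinetic energy: (gamma - 1) m c^2.
        # gamma - 1 is computed in rationalized form so the result
        # stays accurate even at everyday speeds, where the naive
        # 1/sqrt(1 - b2) - 1 loses all its digits to cancellation.
        b2 = (v / C) ** 2
        s = math.sqrt(1.0 - b2)
        gamma_minus_1 = b2 / (s * (1.0 + s))
        return gamma_minus_1 * m * C ** 2

    for v in (30.0, 0.5 * C, 0.9 * C):  # highway speed, 50% and 90% of c
        n_ke = newtonian_ke(1.0, v)
        r_ke = relativistic_ke(1.0, v)
        print(f"v = {v:10.3e} m/s   Newton off by {abs(r_ke - n_ke) / r_ke:.1e}")

At highway speed the two models agree to about fourteen decimal places; at 90% of the speed of light, the Newtonian figure is off by roughly two-thirds. Same model, different boundaries.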

Another great example is the "body of evidence" argument. Here the author basically argues that a single study should never be used to overturn a body of evidence, meaning, in this case, a number of previous studies. This idea, that science is about democratically voting on what is right and what is wrong, is also very pervasive. It is very wrong too. Even a single observation can invalidate centuries of consensus: one black swan is enough to refute the claim that all swans are white, no matter how many white swans have been counted before. Science is not democratic in that sense.

While it is true that practitioners in a scientific discipline hold a consensus (what Kuhn called a paradigm) and even interpret new findings within that context, that does not mean that such a consensus is in any way "scientific." As a matter of fact, many historians of science have shown that precisely such a consensus often prevents new breakthroughs ("revolutions") from happening sooner. The consensus is a modus operandi of the community, but it is not part of the scientific method.

The third kicker is the assumption that publication in "peer-reviewed journals" means something is scientifically proven. This too is deeply ingrained. It is one area where the scientific community is highly complicit and insincere. This loathsome behavior is endorsed financially as well: scientists are often awarded jobs and promotions based on the number of peer-reviewed publications they have. Unfortunately, the average peer-reviewed publication is no better and no more valuable than the average subprime mortgage.

Nothing should be called scientifically proven until it has been duplicated by at least one independent outsider. No amount of peer review can substitute for a single, well-executed experimental confirmation by another researcher.

There are many more problems with the article, but I will limit my review to just one more, very common misunderstanding. This one has to do with statistics. Once again the author can be forgiven, because the vast majority of "card-carrying" scientists do not understand statistics either. It is probably fair to say that statistics is misused in about three-quarters of peer-reviewed papers. The numbers are even higher in biology, medicine, and related disciplines.

The argument made here is about statistical significance, and it shows how easily people can be misled. The author argues that sometimes an improvement of 1% can make the difference between a gold medal and no medal. He shows that in some disciplines, less than 1% separates the fourth-placed athlete from the winner. No problem here.

He then goes on, however, to equate that 1% difference with a 1% difference between the experimental and control groups in a study. In such a study, a 1% difference may not be enough to reach statistical significance. What that means is that the observed 1% is very likely due to chance, i.e. that there is no real difference. That is not how the article reads. It claims the 1% is real but somehow gets ignored because researchers think it is too small to matter.

He says:"In the real world of coaching, one has to look beyond the issue of chance and look at the actual effect. For instance, if an intervention can increase performance by 1%, it might be statistically unimportant. However, in real life a 1% increase could mean victory."

How is that for mixing apples and oranges? And believe me, it is an argument I have seen many times in many places. If the 1% is real, then surely it will matter in some circumstances. The problem here is that the study where the 1% difference occurred is not powered to show that the 1% is real. The 1% is likely a fluke. What we need in such cases is not "expert judgement" to resolve the issue; there is simply no information to base an opinion on.
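Just how likely is a fluke? Here is a minimal simulation sketch in Python; the group size of ten and the 3% within-group spread are my own assumptions, chosen to be typical of small coaching studies:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 10          # athletes per group (assumed; typical of small studies)
    mean = 100.0    # baseline performance score, arbitrary units
    sd = 3.0        # within-group standard deviation, ~3% of the mean (assumed)
    trials = 100_000

    # Simulate many studies in which the intervention truly does NOTHING:
    # both groups are drawn from the same distribution.
    control = rng.normal(mean, sd, size=(trials, n))
    treated = rng.normal(mean, sd, size=(trials, n))
    diff_pct = (treated.mean(axis=1) - control.mean(axis=1)) / mean * 100.0

    frac = np.mean(np.abs(diff_pct) >= 1.0)
    print(f"null studies showing a 1% (or larger) difference: {frac:.0%}")

Under those assumptions, nearly half of the do-nothing studies show a group difference of 1% or more. An observed 1% in such a study is, by itself, indistinguishable from noise.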

What we need is another study. Our current study simply does not have the information we seek. 
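For what it is worth, a back-of-the-envelope power calculation (under the same assumed 3% within-group spread) sketches what that other study would have to look like:

    from math import ceil

    # Conventional choices: two-sided alpha = 0.05, power = 0.80.
    z_alpha = 1.96   # standard normal quantile for alpha/2
    z_beta = 0.84    # standard normal quantile for the desired power
    sd = 3.0         # assumed within-group SD, ~3% of the mean
    delta = 1.0      # true effect we want to detect: 1% of the mean

    # Standard two-sample formula: n = 2 sd^2 (z_alpha + z_beta)^2 / delta^2
    n_per_group = ceil(2 * sd ** 2 * (z_alpha + z_beta) ** 2 / delta ** 2)
    print(f"athletes needed per group: {n_per_group}")  # about 142

Roughly 140 athletes per group, not ten, before a true 1% effect can be separated from chance with the conventional 80% power.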
