The Other I

January 7, 2011

Should we give up on science?

Filed under: Intriguing Science — theotheri @ 5:20 pm

As I’ve said in my last two posts, The Decline Effect reveals another source of uncertainty, another reason why scientific facts might be wrong.  So is science so riddled with potential error that we should give it up?  Is it too biased to bother with?

Only people who have not understood the nature of science have ever thought that science was infallible.  What the Decline Effect has done is open up a source of doubt that might be much more gaping than we had previously suspected.  We don’t really know how big the problem might be.  But we know it’s there.

So is science worth all its trouble?  Should we go back to relying on common sense and intuition, and to believing what our elders tell us?  Isn’t that just as good?  Or maybe even better?

No, I don’t think we should discard the scientific pursuit.

First of all, look at what science – faulty as it may be – has done for us that no other approach has come near.  Science has put us on the moon and will probably get us to Mars.  It has eliminated smallpox from the face of the Earth, and through vaccinations has saved millions of people from the devastation of polio, whooping-cough, and measles.  It has parked cars in our garages, put computers on our desks, mobile phones into our pockets, televisions into our homes.  In the last 150 years, it has increased human life expectancy in the world by more than 25 years.

Has science ever gotten things wrong?  Indisputably yes!  But it is invariably scientists themselves who have noticed the error and, often, set it right.  The Decline Effect was first noticed by a scientist, and it is scientists who are going to find ways to reduce its distorting effects.

In that sense, the Decline Effect hasn’t changed anything.  An attitude of questioning has always been the most scientifically intelligent approach.

Maybe we don’t always appreciate just how intelligent.



  1. Very interesting. I read your 3 posts on this, and the New Yorker article. I found the examples quite revealing, but I’m not quite sure what conclusions they lead to.

    For example, they all (or nearly all) seemed to be positive results rather than negative results.  E.g. drug X ameliorates the symptoms of condition Y, rather than X has no effect on Y.

    The other thing is that they were mostly to do with living things: psychology, medicine, biology, pharmacology, behaviour: all contexts characterised by great complexity, and where what exactly constitutes a ‘controlled experiment’ might be largely a question of scientific judgment.

    I don’t think it was until the end of the New Yorker piece that there was much about physics. I can’t comment on ‘the weak coupling ratio exhibited by decaying neutrons’, but the gravity example is intriguing. It suggests that the measuring technique might not have been as accurate as the scientists thought it was – not that the actual magnitude of gravitational force fluctuates from day to day. Surely if it did the earth would suddenly fly out of orbit, or at least jump to a different distance from the sun?

    What I think I’m getting at is the question of what qualifies as an ‘experiment’. In a school science lab pupils might boil water and measure the temperature when bubbles of water vapour start to form. But every time we make a pot of tea we don’t call it an ‘experiment’. Yet if water stopped boiling at 100 degrees and started boiling at 110 or 90 degrees we’d think we’d stumbled into a disaster movie.

    Extrapolate this to all the finely-tuned engineering our modern lives depend on. If the decline effect applied to ‘experimental results’ in the broad world of the physical sciences, planes would start dropping out of the sky, computers wouldn’t work, and so on. Our world relies on reliability, but if the decline effect applied generally across our shared material universe we could not rely on it.

    My hypothesis is that the decline effect is more about business and organisational politics and economics than it is about scientific methodology: ie it’s more a question of ‘How sure can we afford to be?’. Where would we get the funding from to test that hypothesis?

    Thanks, Chris.


    Comment by Chris Lawrence — January 8, 2011 @ 7:31 am | Reply

    • Your comment suggests – and rejects – a potential explanation for the Decline Effect that I did not even consider — that it is due to changes in the object being studied rather than a result of observer error. Like you, I think the former is a highly unlikely cause. Yes, there are changes that are significant – changes in the patterns of our communications worldwide, in our nutrition, in the scope and kind of information which we can access, in the nature of our weapons of war and in our medical treatments, in the role of women, etc. But interesting and worthy objects of research as these might be, I doubt very much they are creating the Decline Effect.

      For one thing, as you point out, the effect moves from being positive to less positive.  I think that is the result of a strong publication bias in favour of positive rather than negative results.  Negative results rarely get into print, whether the study is the first on a subject or a replication.

      I have a colleague whose research in biology has been government-funded for the last ten years. If he had produced a string of studies indicating that earlier research could not be replicated, the funding would have dried up years ago. And no drugs company is going to fund a scientist who has a reputation for undermining the effectiveness of their products either.

      Actually, I agree with you that the bias has as much to do with business and government funding policies as anything else.  What I am saying, however, is that I believe these factors have inadvertently created a bias in the application of the scientific method, so that replication is not being sufficiently highlighted in determining the validity of scientific results.

      I think the Decline Effect probably occurs less often in physics than in the “softer” subjects for two reasons.  First, it is often impossible to replicate studies in physics; it is only possible to re-analyze the data.  There is only one Hubble telescope out there, only one set of instruments sending environmental data back from various locations around the world, etc.  Second, when replication is possible, it tends to happen fairly rapidly.  Scientists rather quickly report whether they have been able to replicate claims of cloning or cold fusion.  It takes a lot longer to study the effects of Vitamin D supplements on bone density measurements, or of varying cancer treatments.

      So actually, what I am saying is that the Decline Effect validates the scientific method, which says that we need replication like we need oxygen to breathe.  The Decline Effect is what happens when we fail to replicate as rigorously as we need to: we end up with too many false positives that only slowly reveal themselves.
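      The mechanism sketched above, publication bias inflating early positive findings while later replications regress back toward the truth, can be illustrated with a toy simulation.  All the numbers below (effect size, sample size, significance cutoff) are illustrative assumptions, not data from any real study:

```python
# Toy simulation of a publication-bias "Decline Effect": only significant
# positive initial studies get published, so their average overstates the
# true effect; replications, which all get counted, sit near the truth.
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.2   # the real (modest) effect size, an assumption
N = 20              # subjects per group in each study
STUDIES = 5000

def run_study():
    """One two-group study: return the observed effect and whether it
    clears a conventional z > 1.96 significance cutoff."""
    treated = rng.normal(TRUE_EFFECT, 1.0, N)
    control = rng.normal(0.0, 1.0, N)
    observed = treated.mean() - control.mean()
    std_err = np.sqrt(2.0 / N)
    return observed, observed / std_err > 1.96

# Initial literature: only "significant" positive results reach print.
published = [eff for eff, significant in
             (run_study() for _ in range(STUDIES)) if significant]

# Replications: every result is counted, positive or not.
replications = [run_study()[0] for _ in range(STUDIES)]

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published effect:   {np.mean(published):.2f}")    # inflated
print(f"mean replication effect: {np.mean(replications):.2f}")  # near truth
```

      The selection step alone produces the "decline": nothing about the underlying effect changes between the initial studies and the replications.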

      I doubt, given the nature of funding in today’s world, that there is a total solution to this problem. But being aware of it is at least some protection against premature and unquestioning acceptance of findings that are insufficiently substantiated.

      Thank you for your comments. They inevitably get me thinking just a little bit more.


      Comment by theotheri — January 8, 2011 @ 2:18 pm | Reply
