Things I write when I can’t think of a specific topic

In a 2010 TED talk, Michael Shermer spoke about self-deception and belief; specifically, how the human race seems hardwired to see patterns in the world around us (http://www.ted.com/talks/michael_shermer_the_pattern_behind_self_deception.html). Our brains can read coherent shapes into random noise, as in the Rorschach inkblot test or hidden-object illusions such as http://www.moillusions.com/2006/05/hidden-dalmation-dog.html. Shermer theorises that we may be evolutionarily predisposed to see patterns, as such a skill would help us survive in a possibly hostile world. For example, learning to be cautious when we hear a rustle in the bushes because it might be a predator is far less costly than assuming it's just the wind and only finding out it's a predator when we're about to be eaten. Likewise: if there is food on this tree now, there may be food on similar trees, or on the same tree next year. Thus our pattern-seeking tendencies can be of great benefit to us; they help us learn, make connections and (as Fay taught us last semester) build schemas so we can navigate the social world smoothly.

However, this tendency to see patterns can lead us astray. As Shermer examines in his talk, pattern-seeking can lead to superstitious beliefs, paranoia or simply drawing false conclusions. This last one is particularly important to us as science students, as there is always the temptation to infer a cause-and-effect relationship. Of course we are taught that we must be very cautious about this, but then we are also taught that if p < 0.05 we can reject the null. YAY! The problem is that scientific investigation is almost a high-tech extension of our innate pattern-seeking behaviour and is, in some ways, just as prone to false conclusions as we are in our daily lives. There is a trend in science to rely on statistical significance a little too much, when all a p-value really tells us is how likely a result this extreme would be if chance alone were at work. But we, the great pattern-seekers of the world, tend to forget that p < 0.05 still leaves room for the result to be nothing more than an improbable fluke. Critics of the overuse of statistics in modern science have found many instances where a significant result has been accepted wrongly and too readily by the researchers, and sometimes by the scientific community in general (http://www.sciencenews.org/view/feature/id/335872/title/Odds_Are%2C_Its_Wrong).
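To see how easily a "significant" result can be a fluke, here is a minimal simulation sketch (written in Python with numpy and scipy; the tools and numbers are my choice for illustration, not anything from the talk or the article linked above). Every run compares two samples drawn from exactly the same population, so any "effect" it finds is pure noise, yet roughly 5% of runs still clear the p < 0.05 bar:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_experiments = 10_000
    false_positives = 0

    for _ in range(n_experiments):
        # Both groups come from the SAME population: no real effect exists,
        # so every "significant" result below is a fluke by construction.
        group_a = rng.normal(loc=0.0, scale=1.0, size=30)
        group_b = rng.normal(loc=0.0, scale=1.0, size=30)
        _, p = stats.ttest_ind(group_a, group_b)
        if p < 0.05:
            false_positives += 1

    print(f"'Significant' null results: {false_positives / n_experiments:.1%}")
    # Prints a figure close to 5%: the false-positive rate that the
    # threshold itself allows, even with no effect anywhere.

A p-value below 0.05 therefore cannot, on its own, tell us whether we have found a real pattern or one of these built-in flukes.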

While I support the use of statistics in science, I do think that our desire for patterns and connections in the world can lead us astray, and we should be very careful not to let a significant p-value become the statistical equivalent of superstitiously touching wood to guarantee good luck.

 

(Watch the TED talk guys, it's a lot better than this blog, I promise!)

4 Comments

  1. An extremely interesting, well-written blog. I agree that a little too much emphasis is put on significance testing in science, especially as its use has been heavily contested for a long time. It seems strange that researchers have come close to a consensus that significance testing is unnecessary, and yet such emphasis is placed upon it when teaching undergraduates, who are the future of science. If we have already established that significance testing is flawed, why are we still using the procedure? To quote an interesting piece of research I have mentioned in one of my blogs: Schmidt (1996) argued that a reform of teaching and practice was necessary to ensure that researchers learn that "the benefits they believe flow from the use of significance testing are illusory". He proposed that teachers 'revamp' their courses to help students understand that reliance on statistical significance hinders the growth of cumulative research knowledge, that the benefits believed to flow from statistical testing do not in fact exist, and that significance methods must be replaced with point estimates and confidence intervals. He went on to conclude that such a reform is essential for the future progress of cumulative knowledge in psychological research.
    Hubbard (2004) estimated that 94% of the papers published between 1990 and 2002 used significance testing. Hubbard and Lindsay (2008) suggested that, instead of (or alongside) reporting p-values in individual studies, researchers should provide estimates of sample statistics, effect sizes, and their confidence intervals. Using confidence intervals instead of p-values would support the stance taken by Nelder (1999), who took the more militant view that the p-value culture should be "abolished". I personally believe that significance testing plays a valuable role in science, as it gives us a sense of how probable our result would be if chance alone were operating. However, it is possible that too much emphasis has been placed upon the procedure, feeding the 'p-value culture'. There is an abundance of research out there arguing that significance testing is hindering the development of cumulative research knowledge.
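    As a rough sketch of what that alternative reporting could look like, here is a short Python example (numpy/scipy; the data are invented and the choices are mine, not taken from Hubbard, Lindsay, Schmidt or Nelder). It reports a point estimate, an effect size (Cohen's d) and a 95% confidence interval for a two-group comparison instead of a bare p-value:

        import numpy as np
        from scipy import stats

        # Invented scores for two hypothetical groups, for illustration only.
        group_a = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.3, 5.8])
        group_b = np.array([4.2, 4.6, 4.1, 4.9, 4.4, 4.0, 4.7, 4.3])

        diff = group_a.mean() - group_b.mean()  # point estimate

        # Pooled standard deviation, used for Cohen's d.
        n_a, n_b = len(group_a), len(group_b)
        pooled_var = ((n_a - 1) * group_a.var(ddof=1)
                      + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
        pooled_sd = np.sqrt(pooled_var)
        cohens_d = diff / pooled_sd

        # 95% confidence interval for the mean difference
        # (classic equal-variance t interval).
        se = pooled_sd * np.sqrt(1 / n_a + 1 / n_b)
        t_crit = stats.t.ppf(0.975, df=n_a + n_b - 2)
        ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

        print(f"Mean difference: {diff:.2f}  (Cohen's d = {cohens_d:.2f})")
        print(f"95% CI: [{ci_low:.2f}, {ci_high:.2f}]")

    Unlike a lone p-value, the interval shows both the plausible size of the effect and the uncertainty around it, which is essentially the benefit these authors argue for.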

    Reply
  2. psuca7 / February 9, 2012

    A problem with a lot of research these days is that even if something is found to be significant, it may still not be generalisable to the real world. Indeed, when we make inferences or assumptions about something, and find those assumptions to be true, we store that information for use at a later date, when a similar situation arises. Our tendency to create meaning from nothing both benefits us, as you mentioned with the survival value of being cautious, and hinders us, as we assume that a particular event corresponds directly to something else and become superstitious. Another issue with the use of statistics today is that your average Joe Bloggs will not bother to try to understand the information he is presented with; everyone appears to be enthralled by whether there is a significant effect or not. Biases can plague data because researchers are so keen to find an effect to publish that they often see patterns and effects where there are none.

    Reply
  1. Links to my comments for TA. Semester 2 Week 3 « psuc1a
  2. TA: Semester 2, Week 3 Comments « StatsBlog
