The Magic of Research

As the cynical and jaded second years that we are, exhausted after two years full of research methods and statistics, we might laugh scornfully at someone who tried to tell us that psychology research can be magical. But guess what! It can be!

Sadly, I don’t mean magic in the Harry Potter sense (though I do think Freud would have had a field day with Lord Voldemort, and probably the whole wand thing too…), but it is true that psychologists are using magic in their research. The tricks of stage magic are being used to investigate complex phenomena such as attention and perception. A good magician can fool our senses by tricking some mechanism of cognition or attention; so by studying how these tricks work on our minds, we could significantly further our understanding of human cognitive and perceptual functioning.

Some tricks of stage/performance magic use visual illusions to trick the audience, taking advantage of the way human visual perception works in order to make an elephant vanish, etc. These types of tricks usually work by exploiting the laws of visual perception, which have already been studied a great deal. Other types of tricks, however, are classed as cognitive illusions (tricking the audience using higher-level cognitive functions), the exact mechanisms of which are not yet fully understood. Cognitive illusions usually involve some kind of mental misdirection, often of attention or causal inference, and are seen in most magic tricks.

Using the principles of stage magic to investigate cognitive functioning is a relatively untapped area at present, but it could be an excellent resource. One study, for example, used a magic trick to gather evidence for the attentional spotlight. Participants’ eyes were tracked whilst they observed a magician perform a simple magic trick involving misdirection and sleight of hand. It was found that participants would miss the trick even if they were gazing at the right area, and that whether or not they blinked during the trick did not affect whether they missed it; in other words, the misdirection affected their attentional spotlight rather than their gaze. A similar effect has been shown using the Vanishing Ball illusion: participants did not direct their gaze to where the ball “vanished”, despite being fooled by the illusion. This indicates that it was their attention that was fooled by the trick, not their eyes.

So we really can use magic in research, which I think is pretty awesome, and there is a great deal that magic tricks could teach us about how our cognitive systems can be tricked and therefore how they work (for more examples and explanations see the links below). But what I think is really interesting about this is that magic tricks have been around for centuries, long before anyone even thought of psychology as a discipline; yet magic utilises complex psychological phenomena to fool and entertain us, and magicians have a control over human perception that we struggle to replicate in the lab. We could almost think of magic tricks as the first behavioural experiments, and of magicians as the first psychologists.



Blog 3: In which I try to explain about correlation and causation (Caution: some blogs may contain rambling).

Opening a stats textbook is, for me, very similar to opening the Ark of the Covenant (as in ‘Indiana Jones: Raiders of the Lost Ark’, gotta love that movie!). The cover is lifted and immediately my face begins to melt right off at the arcane horrors within. This happens every time I look at a page of a stats textbook. Can I therefore conclude that stats textbooks cause face-melting? Note: if I can show causation then people in dark suits will take all the stats books away to be hidden in Area 51 and examined by Top Men.

Suppose I conducted an observational study and found that the frequency of face-melting incidents increases with increasing exposure to stats textbooks. I have found a strong positive correlation, but I cannot infer causation from this, as I have not controlled for any possible confounding variables. The face-melting effects may be common to all textbooks; my face may be hypersensitive, or I might be allergic to paper; or the melting may be caused by the stress or boredom I feel when opening the textbook, and not by the textbook itself. What this study can tell me, however, is that there is a strong covariation between exposure to stats textbooks and melting faces, which hints at the possibility of a causal relationship between the two.
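If you fancy seeing the confounding problem in action, here is a toy simulation (entirely made-up numbers; the variables and the “boredom” confounder are my own invention for illustration). Boredom drives both textbook exposure and face-melting, so the two end up correlated even though neither causes the other:

```python
import random

random.seed(42)

# Hypothetical confounder: boredom drives BOTH how often I open stats
# textbooks AND how much my face melts. There is no direct causal link
# between exposure and melting in this simulation.
n = 1000
boredom = [random.gauss(0, 1) for _ in range(n)]
exposure = [b + random.gauss(0, 1) for b in boredom]  # bored people open more stats books
melting = [b + random.gauss(0, 1) for b in boredom]   # boredom alone melts faces here

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson(exposure, melting)
print(round(r, 2))  # a clearly positive correlation, despite zero direct causation
```

The observed correlation here comes entirely from the shared cause, which is exactly why the observational design on its own cannot license a causal claim.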

To demonstrate causation I would have to conduct an experiment in which all other possible causes of face-melting were controlled for, and in which possible sources of bias were removed (random assignment, independent observations… the usual). If at the end of this I found that the frequency of face-melting was significantly higher in the stats-textbook condition than in the control conditions, then I could infer that there is likely to be a causal relationship between the two variables (Causation, at last! Send in the Top Men!).
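A sketch of what that experiment might look like as a simulation (again, all the numbers, group sizes, and the “face-melt severity” outcome are invented for illustration). Random assignment is the key move: it balances boredom, paper allergies, and every other lurking confound across the two groups on average:

```python
import random
import statistics

random.seed(1)

# Hypothetical experiment: 60 participants randomly assigned to open
# either a stats textbook or a control novel; we record a made-up
# "face-melt severity" score. Random assignment balances confounds
# across the groups, so a group difference supports a causal claim.
participants = list(range(60))
random.shuffle(participants)  # random assignment to conditions
stats_group, control_group = participants[:30], participants[30:]

# Simulated outcome: in this toy world, the stats condition really does
# add +1 to face-melt severity on average.
stats_scores = [random.gauss(1, 1) for _ in stats_group]
control_scores = [random.gauss(0, 1) for _ in control_group]

def t_statistic(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

t = t_statistic(stats_scores, control_scores)
print(round(t, 2))  # a large t here, combined with random assignment, licenses a causal inference
```

The statistic itself is the same one you would get from any observational comparison; it is the random assignment in the design, not the test, that earns the causal conclusion.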

It is worth noting, however, that both of these techniques were looking for a relationship between the variables, and that how we evaluate the data does not determine whether we can infer causation; rather, it is how the data were collected that determines this (for a better explanation of this, see here). What this means is that, although we are taught dogmatically that correlation never ever implies causation, it is possible for correlation to imply causation if the data were gathered by true experimental methods, as it is the control of possible biases and confounds within the experimental method that allows us to infer causation. In other words, if I had collected my face-melting data experimentally but analysed it using a correlational or regression-based statistical technique, rather than an ANOVA or t-test, then I could still make a causal inference, provided the analysis was appropriate and I had been rigorous enough in collecting my data.
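To make that point concrete, here is a toy version of “experimental data, correlational analysis” (invented numbers again): the condition is dummy-coded 0/1, assignment is randomised, and the analysis is a plain Pearson correlation (a point-biserial correlation, since one variable is binary) rather than a t-test. The analysis choice changes nothing about the causal logic:

```python
import random

random.seed(7)

# Experimentally collected data: 0 = control novel, 1 = stats textbook,
# assigned at random. Analysed with a correlational statistic instead
# of a t-test.
group = [0] * 30 + [1] * 30
random.shuffle(group)  # random assignment
# Simulated outcome with a true effect of +1 for the stats condition.
score = [g + random.gauss(0, 1) for g in group]

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson(group, score)
print(round(r, 2))  # positive correlation; because assignment was random, it supports causation
```

Because the confounds were handled at the data-collection stage, this correlation is just another way of summarising the same group difference the t-test would have found.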

The dogma of “Correlation does not imply Causation” is potentially extremely misleading for any student of statistics. What it really means is that we cannot guarantee or even infer a causal relationship from a correlational research design, because no variables have been manipulated or controlled for. The word ‘imply’ is also problematic in this context. Here it is intended to mean ‘guarantee’, but in most contexts it would mean ‘hint at’, which is in fact what any good correlational study should do: explore connections between variables to uncover possible relationships to be examined experimentally later on.


For your entertainment:

For your education: