Feminism and Biases in Research (sort of…)

When I’m surfing the net I quite often come across things that fill me with that special kind of rage which I’m sure anyone who’s ever read the comments underneath a YouTube video will have felt too (http://xkcd.com/386/). A prime example of the type of thing that awakens this rage is the following: http://borderhouseblog.com/?p=5811 (the organisers of a launch party for the PS3 game ‘Battlefield 3’ decided that the best way to prevent women from being offended by the misogynist language often used by male gamers was to ban women from attending the event at all. What an inspired solution! *headdesk*). Misogynist or sexist behaviour is one of my biggest pet hates, even when it’s inadvertent or just down to ignorance. In fact, in some ways, inadvertent sexism is worse than deliberate sexism, as it represents the persistence of acceptable misogynistic norms in our society and will undoubtedly lead to biases.

These days, of course, there doesn’t seem to be all that much for feminists to get angry about (apart from slightly moronic jokes telling women to “get back in the kitchen and make me a sandwich”). However, there are still many inequalities and biases against women in the world today, and it is important that we recognise them and try to change things, particularly within the domain of science. There is a large feminist movement within the social sciences concerned with conducting “feminist science” and preventing biases against women in scientific research. In my opinion this is a hugely important endeavour, and I say this not just from a feminist perspective but also from the perspective of someone who aspires to be a scientist one day. Biases in society, be they sexism, racism, homophobia, or any other, will lead to biases in research, which can only impede the progress of science.

Currently there is almost no psychological research in the areas of gay attraction, relationships or civil partnerships; most research into love and attraction has been conducted with monogamous heterosexual couples in mind because, until very recently, being gay was simply not socially acceptable. This means that a huge aspect of human relationships has barely been studied at all. Another bias in established psychological research is that, from psychology’s beginnings until about the 1970s or 80s, the prototypical research participant was the typical student of the time: white, male, well educated and either upper- or middle-class. The same goes for the majority of researchers. The data gathered from these participants were considered the norm and became the standard against which all other groups and populations were judged, and the methods used to test these participants became the standard way of testing those phenomena in all groups. Obviously these kinds of research biases led to erroneous conclusions. In 1976 Kohlberg claimed that, compared to men, women tended to be less morally developed. Gilligan argued against this in 1982 by pointing out that the moral dilemmas on Kohlberg’s rating scale focused almost exclusively on a ‘justice orientation’, which, because males and females are socialised in different ways, was more likely to favour the males. When females were tested on moral dilemmas with a ‘care orientation’, they scored just as highly for moral development as the males, just in a different way.

I won’t say that my Battlefield 3 example means that society is in some kind of misogynistic downward spiral but, since it is clear that norms and biases in society can affect us not only personally but also in the scientific domain, perhaps we as potential researchers should think more carefully about such instances, as they may indicate a growing social bias that could one day affect our own research.

(It does interest me that whenever I go on the SONA website and someone is doing a videogame study, they only want male participants. How come the girls can’t get course credit for gaming?)

 

Sources and Things of Possible Interest:

http://plato.stanford.edu/entries/feminism-epistemology/

http://www.oii.uwa.edu.au/__data/assets/pdf_file/0005/1855490/Epistemology-and-methodology.pdf

http://www.cddc.vt.edu/feminism/sci.html

http://0-psycnet.apa.org.unicat.bangor.ac.uk/journals/dev/18/6/862/

http://books.google.co.uk/books?hl=en&lr=&id=0ckWCCRlU-MC&oi=fnd&pg=PR7&dq=Eichler,+M.+and+Lapointe,+J.+On+the+Treatment+of+the+Sexes+in+Research.+1985.&ots=4NJi5JVnr4&sig=7EHwHNEVyfL7nik7UbHlfg_BAzU#v=onepage&q=Eichler%2C%20M.%20and%20Lapointe%2C%20J.%20On%20the%20Treatment%20of%20the%20Sexes%20in%20Research.%201985.&f=false

http://en.wikipedia.org/wiki/The_Second_Sex

http://www.techsoc.com/hasfem.htm

www.experiment-resources.com/research-bias.html

http://www.bungie.net/Forums/posts.aspx?postID=67018993&postRepeater1-p=1#67019164


Something Quite Interesting?

In this week’s episode of QI they talk briefly about the difference between what scientists think of as a theory and what people in general think of as a theory; the old “evolution is JUST a theory” chestnut. It was indeed quite interesting and also quite relevant to some blogs, so I thought I’d post it here just for some stats/research methods funsies.

http://www.bbc.co.uk/iplayer/episode/b016ytsk/QI_Series_I_Illness/

The relevant bit starts at about 11 minutes 20 seconds in (and Ben Goldacre, author of ‘Bad Science’, is in this episode too!)

Homework for my TA Week 5

http://prpij.wordpress.com/2011/10/14/do-ethics-limit-research/#comment-13

http://tinastakeon.wordpress.com/2011/10/14/statistics-can-help-to-keep-us-safe/#comment-20

http://paintingwithnumbers.wordpress.com/2011/10/14/%e2%80%9cjust-because-it%e2%80%99s-significant-doesn%e2%80%99t-mean-it%e2%80%99s-significant%e2%80%a6-discuss-%e2%80%9d/#comment-17

http://notwilliam.wordpress.com/2011/10/14/ethicandanimals/#comment-46

“Does the empirical method of scientific exploration disagree with the rational method?”

In my opinion the two methods do not disagree with each other at all; in fact I think that you cannot have one without the other. Despite being apparently opposing methods of exploring a scientific question, they actually complement each other very well. By using the rational method we can come up with hypotheses that make logical sense and by using the empirical method we can test these hypotheses scientifically.

The rational method mainly consists of using logic to posit a possible explanation for a phenomenon, whereas the empirical method is about gathering data to investigate that explanation. It is obvious that the scientific method requires empirical evidence to support or disprove a hypothesis, and so empiricism has become almost synonymous with science. However, rationalism is also required for the scientific method. By thinking rationally about a subject to be investigated, we can conceive logical theories about that subject and, from these, design scientifically testable hypotheses. This is extremely important for science as it allows us to explore logical possibilities and, through this more abstract theorising, come up with new avenues for research. The empirical method allows us to test hypotheses and the rational method allows us to conceive of new ones. And thus the cycle of science continues.

 

http://www.simplypsychology.org/science-psychology.html

http://www.dummies.com/how-to/content/philosophical-battles-empiricism-versus-rationalis.html

http://personal.stevens.edu/~ysakamot/730/basic/

http://www.experiment-resources.com/empirical-research.html

Homework for my TA: Week 3

Here you go (try not to get too excited):

http://vickygoodwin.wordpress.com/2011/10/06/do-you-need-statistics-to-understand-data/#comments

http://psychblogld.wordpress.com/2011/10/06/do-you-need-statistics-to-understand-your-data/#comment-14

http://psuce1.wordpress.com/2011/10/07/do-you-need-statistics-to-understand-your-data/#comment-8

http://psuc5d.wordpress.com/2011/10/07/do-you-need-statistics-to-understand-your-data/#comment-13

An extremely brief overview of Ethics and their history

Ethics: a necessity or just another limiting factor for our research projects? I’m sure we’d all love to be able to conduct any research we like without having to fill in all those tedious forms for the ethics committee. Then we really could poke people with a massive stick and see what happens. Some of the most famous experiments (Zimbardo’s prison study, or Milgram’s research into the power of authority figures, to name but two) would never make it past an ethics committee these days. But why are ethics so important? However much we might complain about not being able to get our participants wildly drunk before an experiment to see how it affects their behaviour, I think we can all see how harmful that could be to our participants. And preventing harm is exactly what research ethics were developed for, and why they have evolved and grown over the years.

The history of modern research ethics has its roots in the Nuremberg Trials of 1946–48, when the various atrocities committed by the Nazi regime came to international public attention. A number of medical experiments were examined during these trials, including many where the only aim seemed to be finding the most efficient way of killing people by deliberately exposing them to disease, toxins, or extreme conditions (investigators at the trials said that these doctors had not been doing medical research but merely researching thanatology). The doctors and scientists involved claimed that what they had done could not be deemed illegal, as the laws governing experimentation at the time were either sketchy or nonexistent. To prevent further such experimentation the Nuremberg Code was devised: a code of ethics for human experimentation that is the basis for all modern ethical research guidelines. The most significant were the guidelines preventing harm to participants and those stating that participants must give informed consent, as, prior to the Second World War, experiments without consent and with the possibility of harm were not uncommon in the scientific community across the world.

The Helsinki Declaration of 1964 was the first attempt by the medical community itself to regulate experimental procedure; in other words, to set an ethical standard for human research which, although not a law, should be a moral and professional benchmark for all scientists engaged in human research. This means that the ethical guidelines laid out in the Declaration of Helsinki protect not only participants but also the integrity of research and science. Researchers are encouraged to work to the highest possible ethical standard, which engenders public trust in science. So not only do ethics provide us with moral guidance, they also help to prevent unethical work from bringing science into disrepute.

Almost every discipline has its own set of ethical guidelines (http://www.businessweek.com/careers/content/jan2007/ca20070111_219724.htm), for the most part based on ideas found in the Nuremberg Code and the Declaration of Helsinki, and psychology is no different. The APA gives us five ethical principles to abide by:

  • Beneficence and Nonmaleficence
  • Fidelity and Responsibility
  • Integrity
  • Justice
  • Respect for People’s Rights and Dignity

(http://www.apa.org/ethics/code/index.aspx?item=3#)

And of course, being the APA, and therefore generous and not at all overly prescriptive (especially about citations and references), they also give us five principles to help us avoid ethical quandaries, arguments and other mix-ups:

  • Discuss intellectual property frankly
  • Be conscious of multiple roles
  • Follow informed-consent rules
  • Respect confidentiality and privacy
  • Tap into ethics resources

(http://www.apa.org/monitor/jan03/principles.aspx)

I’ve always felt that the importance of ethics was obvious: basically, don’t do something that could hurt someone else, and don’t force someone to do something they don’t want to do. These ideas are so ingrained in my conscience that I’m still surprised they feel the need to teach us these things in lectures. (Although Neil’s interpretation of ethics in The Inbetweeners movie this summer did make me wonder…) However, it is clear from events in history that we need ethical guidelines to protect participants, and that these guidelines should be reviewed often to keep pace with new areas and techniques in research. Otherwise we risk becoming so caught up with making scientific progress that we end up damaging people and damaging our field.

 

Really, there was too much history and too much content for me to possibly include it all so here are some extra links that you might find interesting:

http://books.google.co.uk/books?hl=en&lr=&id=bv8IAqVh8EAC&oi=fnd&pg=PR7&dq=medical+experiments+nazi&ots=-DmOtkrXqr&sig=_GvSOYaQ6CPCsBmPOsCCbG-k3ig#v=onepage&q=medical%20experiments%20nazi&f=false

http://psycnet.apa.org/journals/rev/51/4/229/

http://books.google.co.uk/books?hl=en&lr=&id=_VH-7oeT4lEC&oi=fnd&pg=PR15&dq=nazi+experiments+psychology&ots=7QExWtA3bG&sig=EKyAfS0RQdJAP6QrdsUbzlNHiY8#v=onepage&q&f=false

http://en.wikipedia.org/wiki/Nuremberg_Trials

http://ohsr.od.nih.gov/guidelines/nuremberg.html

http://en.wikipedia.org/wiki/Declaration_of_Helsinki

http://listverse.com/2008/09/07/top-10-unethical-psychological-experiments/

http://www.youtube.com/watch?v=e5I6d_vq-Cc (related to no. 2 on the above listverse entry)

http://www.youtube.com/watch?v=UST26RjBXvo (some light relief after all this ethics stuff)

 

Do you need statistics to understand your data?

In almost every instance, the sheer volume of raw data collected in an experiment means that it would be nigh-on impossible to extract any meaning from it without using statistics. Last year, when we got our first big table of data to run through SPSS, I remember staring blankly at the mass of numbers for some time in the hope that it would magically tell me what to do or what it meant. In the 1999 film ‘The Matrix’ they can interpret ten screens of falling green code at once just by looking at them; compared to that, how hard can a few hundred memory scores be, right? Well, the answer is: very hard! I’ve yet to come across anyone who could find the kind of information we need for research without using statistical tools.
What we are looking for in research are patterns or connections between variables, and to find these we need statistics. How do we tell if there is a difference between variables? We could draw a graph or chart. How do we know whether the difference is significant and not due to chance? We can calculate some lovely p-values. Of course, there is always the danger of mistaking the need for statistics for the need for a statistical program like SPSS. All SPSS really gives us is another long list of numbers, which on their own can be almost as unintelligible as the raw data itself. Without an understanding of the research in question and a good knowledge of the data gathered, statistical values are pretty much meaningless. There is no use in drawing those graphs or calculating those p-values if we don’t know how they fit in with the rest of the data and what they mean for the research in general. So, while we need statistics to understand our data, we also need to understand the data itself.
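To make the p-value idea concrete, here is a minimal sketch (in Python, with entirely made-up memory scores) of a permutation test: shuffle the group labels thousands of times and ask how often chance alone produces a difference in means as big as the one observed. This is not what SPSS does under the hood, just an illustration of what “not due to chance” means.

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=42):
    """Estimate a two-tailed p-value for the difference in group means
    by repeatedly shuffling the group labels."""
    rng = random.Random(seed)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / len(group_b))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical memory scores for two conditions (made-up numbers)
control = [12, 9, 11, 10, 13, 8, 11, 10]
caffeine = [15, 14, 12, 16, 13, 15, 14, 17]
p = permutation_p_value(control, caffeine)
print(f"p = {p:.4f}")  # a small p means the gap is unlikely to be chance alone
```

The smaller p is, the rarer it is for a random relabelling of the scores to produce a gap as large as the real one.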
Imagine the level of statistics needed for the Human Genome Project, which mapped the sequence of human DNA: estimates put the number of nucleotide base-pairs in the human genome at over 3 billion, and some centres aiding the project were processing up to 100,000 draft sequences every day for nearly a decade. Without statistics it would have been impossible for them to account for the overlaps and the recurring sequences that make up more than 50% of human DNA. But, equally, without the scientists’ understanding and careful interpretation of the data, the sequencing could have been a shambles. There can be no doubt that, while statistics is vital for research, you cannot fully understand your data with statistics alone.
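As a vastly simplified illustration of the kind of computation that accounting for overlaps involves, here is a toy Python sketch that merges two hypothetical sequence reads (the sequences are invented) by finding the longest suffix–prefix overlap; real assembly algorithms are far more sophisticated and statistical than this.

```python
def longest_overlap(left, right, min_length=3):
    """Length of the longest suffix of `left` matching a prefix of `right`."""
    max_k = min(len(left), len(right))
    for k in range(max_k, min_length - 1, -1):
        if left[-k:] == right[:k]:
            return k
    return 0

def merge_fragments(left, right, min_length=3):
    """Merge two reads into one sequence if they overlap; a toy assembly step."""
    k = longest_overlap(left, right, min_length)
    return left + right[k:] if k else None

# Two hypothetical overlapping reads (made-up sequences)
print(merge_fragments("ATTAGACCTG", "CCTGCCGGAA"))  # prints ATTAGACCTGCCGGAA
```

Even this toy version hints at why human judgement matters: with billions of base-pairs and more than half the genome made of repeats, many spurious overlaps will look just as good as real ones.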

(www.nature.com/nature/journal/v409/n6822/pdf/409860a0.pdf)
(http://en.wikipedia.org/wiki/Human_Genome_Project)

Are there benefits to gaining a strong statistical background?

There are many people in the world today who mistrust statistics. They argue that statistics can be hard to understand and are often used to mislead the public. Such mistrust has its roots in the overuse of statistics by advertisers and the media, where statistical values are often presented misleadingly or grossly out of context. Gaining a good understanding of statistical theory can help us differentiate between strong and weak statistical evidence, which in turn can help us make day-to-day decisions (such as which product to buy or which company to invest in).

A strong background in statistics is not only beneficial to our lives in general but also vital to scientific research. Statistics allow scientists to interpret large quantities of data with relative ease. Correlations and interactions between variables cannot be identified by looking at the raw data alone, but the appropriate, correctly interpreted statistical test can find such relationships and examine whether or not they are significant. A comprehensive statistical examination of data can also reveal patterns or anomalies the investigator may not have anticipated, providing new avenues for research in that area; or it can bring together data from several congruent investigations to look for an overall, definitive pattern.
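As a small example of a correlation that the raw numbers alone would not make obvious, here is a Pearson correlation coefficient computed from first principles in Python; the variable names and data (revision hours versus exam scores) are invented purely for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: hours of revision vs exam score (made-up numbers)
hours = [1, 2, 3, 4, 5, 6]
scores = [52, 55, 61, 64, 70, 75]
r = pearson_r(hours, scores)
print(f"r = {r:.3f}")  # values near +1 indicate a strong positive relationship
```

An r near +1 or −1 signals a strong linear relationship, while an r near 0 signals none; but, as argued above, the number still needs interpreting in the context of the research.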

Nowadays, with advances in statistical tools such as SPSS, there is very little excuse for weak statistics in an investigation; any research, no matter how groundbreaking or carefully conducted, is unlikely to get any recognition unless it contains a strong statistical analysis of the data. In other words, gaining a strong background in statistics will benefit you in day-to-day life and in your scientific research. If applied properly, it allows you to analyse your data, strengthen your argument, identify patterns, find points for future research and (hopefully!) get your research accepted.