For my TA – comments due by Friday week 9

Qualitative vs Quantitative

This week’s blog is about qualitative and quantitative research methods.  There are obvious pros and cons to each, which I will go through before eventually trying to work out which method is best.

The first thing to explain is that a qualitative research method involves data in the form of words, pictures or objects, whereas a quantitative research method involves data in the form of numbers and statistics.  Straight away, some people will gravitate towards one or the other, depending on their hatred (or love) of stats.  Bearing this in mind, let’s look more deeply at both…

Seeing as quantitative research is all to do with numbers, it is black and white: an answer is either right or wrong, with no room for negotiation.  That is a good thing, as it is easy to determine whether a result is correct.  The qualitative method, however, is a lot more detailed and in-depth than the quantitative one.  Rather than answers being simply right or wrong, there is room for debate, and everyone’s opinions and views are just as valid as everyone else’s; the findings are open to interpretation.  In research terms, the more detailed and in-depth a method, the richer its findings, but those findings are very hard to condense.  If you have a straight right-or-wrong answer, it’s simple: yes or no.  But because the qualitative method is so detailed, its findings are less able to be generalised and a lot harder to categorise.  This also means that in a qualitative study the researcher starts off only roughly knowing what to look for, and the focus develops as the study progresses, whereas in a quantitative study the researcher knows clearly from the outset what they’re looking for.

Another perhaps obvious disadvantage of the qualitative method is how time-consuming it is.  Although this provides depth to the data, it is not always realistic or appropriate to use.  On the flip side, quantitative research is quicker and more efficient, so some people prefer it.  However, because of this, and by its very definition, using a quantitative method may make a researcher miss contextual detail.  If you are only looking at numbers, it becomes harder to see the different circumstances and why something happened the way it did.
The last point I’d like to make is about the researchers themselves.  In a quantitative study, the researcher uses tools such as a questionnaire or some form of equipment to collect data, whereas in a qualitative study, the researcher is themselves the data-gathering instrument.  Again, as with my first point, some people will immediately gravitate towards one method or the other based simply on these descriptions, depending on their personality and how comfortable they are interacting with others.

So to summarise, if you dozed off anywhere in there, this is what I’m saying: the two research methods, qualitative and quantitative, have different pros and cons, some down to personal preference and some down to common sense.  Different research situations will require different methods, so it is not always appropriate to just pick one method to use for every study.
So… which method do you prefer….?

For my TA – comments due by Friday week 5

It took me a bit of time to work out how to do this so it might say I posted this on the Saturday?  But all the comments were made before the due time of midnight on Friday 28th October 🙂

Is it possible to prove a research hypothesis?

We’ve all been taught that it is very dangerous to say ‘this research study has been proven’, because if one little case is found which goes against the findings of the study, the hypothesis would be disproven.

When I say ‘one little case’, I mean any example at any point in time.  Some have believed that there is no possible life on any other planet, but recent findings suggest water on one with climate conditions similar to Earth’s, and on Earth, wherever there is water there is some form of life, even if only a minute bacterial kind, but still, it’s life.  Technically, if there were life, even bacterial, in this water, that would be aliens! spooky O.o (pretty sure I’m not making this up, by the way!  I think my parents told me….)  Anyway, if this is true then it would disprove the long-held belief that there is no other life in space apart from on Earth.  If one tiny bacterial example is found, it could disprove years and years of theories and hypotheses.

Ages ago it was believed that the Earth was flat, and explorers and sailors sailed away into the distance to see if this was true, to try and prove it.  When they never came back, people believed that they had accidentally sailed over the edge, so ‘proving’ their theories.  When in actual fact it was more likely that they’d been captured and eaten by cannibals or something…  And if they did return, it was believed that they had simply not reached the edge of the world before they decided to return home.  We of course now know that the Earth is round (…sorry if I just gave the game away for some of you there…).  Okay, for you technical people, round-ish.

I would definitely say that it is not possible to prove a research hypothesis.  There is a reason why our lecturers tell us never to put it in essays or assignments: one counter-example and it’s disproven, and we would look stupid.  Yes, it’s true that disproving theories and hypotheses starts a paradigm shift and adds strength, detail and further knowledge to new theories and hypotheses.  So I’m not saying that disproving is a bad thing at all.  I’m saying that, in my view, it is not possible to prove a research hypothesis for absolute definite, as there is always a chance that it could be disproven.

Just because it’s significant, doesn’t mean it’s significant….

Let’s make it clearer…. ‘Just because it’s statistically significant, doesn’t mean it’s significant in real life….’  I don’t know about you guys, but I completely agree with this!  And especially the other way around: just because it’s statistically not significant, doesn’t mean it’s not significant in real life.  I mean, don’t get me wrong, numbers do show us a lot of things, but to me it’s silly to argue for a strict yes or no in real life if the maths shows that one decimal place made the decision.

Sure, this may bring you to ask the question ‘Well what would you do then, Suz??’, and my answer is, I don’t have a monkey’s.  Of course, it completely makes sense that some clever clogs invented the critical value (the point where we change our minds between accepting or rejecting the null hypothesis), because we as mathematicians, psychologists, and everyone-in-the-world need to be able to agree on the same standard for whether or not something is statistically significant.  There has to be a metaphorical line drawn somewhere to say anything this side of it is significant, anything the other side isn’t.  And any numbers that are one hundredth of a decimal place either side?  Well, tough, the rules are set and that’s the way it goes!  This makes sense logically.  But logic like this doesn’t always make sense when applied to real life…

Let’s say, for example, a pattern appears to emerge where people who like horror films also like action films.  Is there an association here?  Let’s carry out a study…  After doing the mathsy bit, chi-squared comes out at 6.62.  But the critical value is 6.635…. uh oh, it’s technically not statistically significant…. i.e. we conclude that enjoyment of the two types of film is independent.  You may look at that and say, yes, technically not significant, but honestly, looking at the real world, people who like horror films do tend to like action films.  Do we ignore our findings and say nope, no point to this study?  All because of 0.015?
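To show where a chi-squared value like that comes from, here is a minimal sketch in Python of Pearson’s chi-squared test on a 2×2 table of film preferences.  All the counts are made up for illustration (the post doesn’t give the raw data), and 6.635 is the standard critical value for 1 degree of freedom at the 0.01 level.

```python
# Hypothetical 2x2 contingency table (counts are invented):
# rows = likes horror (yes/no), columns = likes action (yes/no).
observed = [[39, 16],
            [21, 24]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Pearson's chi-squared: sum over cells of (O - E)^2 / E,
# where E is the expected count if the two preferences were independent.
chi_sq = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (observed[i][j] - expected) ** 2 / expected

critical_value = 6.635  # chi-squared critical value, df = 1, alpha = 0.01

print(round(chi_sq, 2))  # 6.06
print("significant" if chi_sq > critical_value else "not significant")
```

With these made-up counts the statistic lands just below the critical value, which is exactly the borderline situation the post is describing: the apparent association is there in the table, but the test says ‘not significant’.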

There are bound to be more serious studies where the findings would have had very different consequences for the scientific world had one decimal place in the statistical analysis not ruined it all.  When this happens, many researchers do not report or publish their studies because, technically, technically, they weren’t significant… and, unfortunately, this is actually happening (Jennions & Moller, 2002).  Surely, after spending your time on a study and putting your hard work into it, it’s a shame not to publish it to the world just because it came out statistically insignificant?  Usually, for a study to be thought up and carried out, some bright spark spies a pattern somewhere and thinks ‘Oooh, are those things connected somehow?’ and carries out a study.  Surely if they spied that something in the first place, it’s worth letting people know, even if it eventually turns out to be statistically insignificant?  That’s my thinking anyways, it’d be nice to hear your opinion 🙂

Do you need statistics to understand your data?

Lucky us… another blog about statistics… righteo then!

Again, for me one thing stands out straight away about this question: what type of data are they talking about?  Data can be defined as “Information in raw or unorganized form (such as alphabets, numbers, or symbols) that refer to, or represent, conditions, ideas, or objects.”  If it’s data in the form of words, then statistics clearly wouldn’t help.  But if it’s in the form of numbers, say the results from a study you’ve conducted, then it seems obvious to me that yes, you do need statistics to understand it.  Well actually, understand?  I would say ‘interpret’…  There may be a subtle difference there…  Understanding implies you know what it’s all on about, whereas interpreting would, to me, mean you can make it mean something in words rather than just having a bunch of figures.  For those who don’t have a clue what I’m jabbering on about, I’ll try and explain it, looking back to one of the research studies we did in first year.

I did this study with some pretty awesome peeps towards the end of last year ( not naming names…. but we rocked 🙂 ), trying to find out if different times of the day might affect people’s attention span and recall.  We thought this was super important because what if having 9am lectures was actually a reeeaaally bad idea… But I’m going off topic… So basically here’s the main point I’m trying to make! 

  • We have 31 participants. 
  • Each one is presented, at a randomly assigned different time of the day, with a list of 15 random words to try to remember and recall immediately.
  • Looking at our data, the participants who complete the study late at night have a mean recall score of 9.27.

Now, we can understand this data in that ‘oh, the numbers look like they make sense, and we got a reasonable-looking mean out of it.’  But interpreting it, the average number of correctly recalled words for these participants was 9.27 out of a possible 15.  Which is pretty good considering they did it at night.  Which in turn makes you stop and think, maybe intellectual tasks are actually better performed at night…  My point: to me, understanding and interpreting are different things.  When looking at data in terms of numbers, statistics helps you understand it, which you can then in turn interpret into words.
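To make the understand-versus-interpret point concrete, here is a tiny Python sketch of the mean-recall calculation.  The individual scores are invented (the post only reports the group mean of 9.27), so this just illustrates the arithmetic, not the real data.

```python
# Hypothetical recall scores (out of 15) for the participants who
# completed the task late at night -- invented for illustration,
# chosen so the mean matches the 9.27 reported in the post.
night_scores = [9, 10, 8, 11, 9, 7, 12, 10, 9, 8, 9]
max_score = 15

mean_recall = sum(night_scores) / len(night_scores)

# 'Understanding': the raw statistic.
print(round(mean_recall, 2))  # 9.27

# 'Interpreting': turning it into something meaningful in words.
print(f"On average, {mean_recall / max_score:.0%} of the words were recalled.")
```

The last line is the interpretation step the post is talking about: 9.27 out of 15 only means something once you say it as ‘about 62% of the words were recalled, late at night’.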

So if you tuned out somewhere in all that, I’ll just briefly go over it again… What I’m saying is that firstly, it depends on what type of data you have.  Usually for us, it will be numbers.  When this is the case, we do need statistics to help us understand what those numbers represent, and in turn we can then interpret them into words and information that means something and can be explained to others.

Are there benefits to gaining a strong statistical background?

For anyone reading this so far, thanks for taking the time to 🙂 This is my first attempt at a blog…. ever…. let alone one for an assignment…! I’m not exactly the most technologically gifted person so here goes nothing…

The most straightforward and obvious answer to the question in the title of this blog is yes.  Having a strong statistical background not only allows you to back up your arguments in essays and important documents with statistical evidence that you can immediately understand, but it also helps in everyday life, for example with advertisements where companies make claims about their products.

Having a strong statistical background is a benefit most obviously because it makes life easier when dealing with maths and interpreting results from psychological studies, like we and many others have to do at university.  It takes less time to complete an assignment when you can actually understand what it’s on about!

Another benefit of having a strong statistical background is that you are not sucked in by cleverly worded statistics in advertisements.  The majority of people who don’t have a strong statistical background, on seeing a statement such as ‘60% of women thought this was a good idea’ (a fictitious statement I just made up), would think the idea was a good one and tend to go along with it.  However, in an extreme case, this statistic could mean that only 3 out of the 5 women who were asked thought it was a good idea.  To those with statistical knowledge, this is glaringly obvious as being nowhere near as convincing as if, say, 1,000,000 women were asked and 600,000 of them thought it was a good idea.  Technically, both examples show 60% agreement, but a strong statistical background makes it obvious how easy it is to be duped by such figures without looking at details such as sample size.
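One way to put a number on why ‘3 out of 5’ is so much weaker than ‘600,000 out of 1,000,000’ is the margin of error of a sample proportion.  This is a minimal sketch using the usual normal-approximation formula; note that the approximation isn’t really valid for a sample as tiny as 5, which only strengthens the point that such a claim is shaky.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p
    observed in a sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.60  # '60% of women thought this was a good idea'
for n in (5, 1_000_000):
    moe = margin_of_error(p, n)
    print(f"n = {n}: 60% plus or minus {moe:.1%}")
```

With n = 5 the margin of error is roughly 43 percentage points, so the ‘true’ figure could plausibly be anywhere from well under half to nearly everyone; with n = 1,000,000 it shrinks to about 0.1 points.  Same headline percentage, wildly different evidence.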

So yes, there are benefits for being a statistics geek 🙂 I think so anyways… but what do youuuu think?
