Dec 12 2012

My Qualitative Coders Keep Disagreeing

Dear Dr. Phoebe,

I have hired two supposed “expert” coders to read a sample of newspaper articles and code each paragraph according to a coding scheme that I have carefully developed as part of my project in media studies. But the intercoder reliability (reported by NVivo) is only .15. Should I be concerned?

Vernon Szarhazy, SW12


Dear Vernon,

I’m afraid you have a problem with the reliability of your coders, your coding scheme, or both. What NVivo has reported is a kappa score, ranging from 0 (agreement no better than chance) to 1.0 (perfect agreement). Kappa is similar to a correlation or percentage agreement, but it also takes chance agreement into account (which will be small anyway if you have many possible codes). Landis and Koch (1977, “The measurement of observer agreement for categorical data”, Biometrics 33:159–174) provide a much-overused table that would interpret 0.15 as “slight agreement”, but let’s just say that if it were your grade for MY429 Qualitative Content Analysis, you’d be resitting the course. You need to take a hard look at your coding scheme and ask whether it can be simplified. Training your coders better, or choosing more consistent coders, would also be a good idea.
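
If you would like to check NVivo’s figure by hand, here is a minimal sketch of the kappa calculation in Python. The coder lists and code labels are invented purely for illustration, and NVivo may aggregate its per-code kappas slightly differently, but the chance-correction idea is the same.

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' categorical codes."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: proportion of paragraphs coded identically.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected (chance) agreement from each coder's marginal code frequencies.
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two coders, ten paragraphs, three possible codes.
coder_1 = ["policy", "crime", "policy", "sport", "crime",
           "policy", "sport", "crime", "policy", "sport"]
coder_2 = ["policy", "policy", "crime", "sport", "crime",
           "sport", "sport", "crime", "crime", "sport"]
print(round(cohen_kappa(coder_1, coder_2), 2))  # 0.41: observed 0.6, chance 0.32
```

Notice how the toy coders agree on six paragraphs out of ten, yet kappa is only 0.41 once chance agreement is subtracted; that gap between raw agreement and kappa is exactly why a .15 should worry you.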

Sincerely,

Dr. Phoebe
