Exercise

Inter-rater reliability

In the last exercise, you calculated some basic statistics for expert agreement. Now let's take a look at Cohen's kappa, a statistic for measuring inter-rater agreement, or reliability, between two experts. Kappa is a more rigorous measure than a simple percent agreement calculation because it accounts for the possibility of agreement occurring by chance.
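For reference, the standard formulation of Cohen's kappa (not given in the exercise itself) is

    kappa = (p_o - p_e) / (1 - p_e)

where p_o is the observed proportion of agreement between the two raters and p_e is the proportion of agreement expected by chance.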

Let's continue analyzing the opinions of two SMEs stored in the sme data frame, this time using the cohen.kappa() function from the psych package, which you will load as part of the exercise.

Instructions 1/2
  • Load the psych package.
  • Find Cohen's kappa for the sme data frame.
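A minimal sketch of the solution, assuming sme is a data frame with one column of ratings per expert (the exact column names come from the earlier exercise):

    # Load the psych package, which provides cohen.kappa()
    library(psych)

    # Compute Cohen's kappa for the two experts' ratings in sme
    cohen.kappa(sme)

The printed output reports the estimated unweighted and weighted kappa along with confidence bounds, so you can judge both the level of agreement and its uncertainty.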