# How to calculate for significant difference between Cohen's Kappa values?


### Answers (3)

Star Strider
on 6 Sep 2021

Edited: Star Strider
on 13 Sep 2021

I used Cohen’s κ many years ago. From my understanding, from reading Fleiss’s book (and corresponding with him), Cohen’s κ is approximately normally distributed. An excellent (in my opinion) and free resource is: Interrater reliability: the kappa statistic. There are others, although not all are free.

EDIT — (13 Sep 2021 at 10:58)

To get p-values and related statistics for normally-distributed variables, the ztest function would likely be appropriate.
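A minimal sketch of that comparison, assuming the two κ estimates and their standard errors have already been computed (the numbers below are hypothetical placeholders, not from the question). It applies the large-sample normal approximation directly via normcdf rather than ztest, since ztest expects raw observations rather than an estimate together with its standard error:

```matlab
% Compare two independent Cohen's kappa values with a normal (z)
% approximation.  The kappa estimates and standard errors below are
% hypothetical -- substitute the values computed from your own data.
kappa1 = 0.72;  se1 = 0.08;   % agreement of tests A & B (assumed values)
kappa2 = 0.55;  se2 = 0.10;   % agreement of tests A & C (assumed values)

% z statistic for the difference of two independent, approximately
% normally distributed estimates
z = (kappa1 - kappa2) / sqrt(se1^2 + se2^2);

% two-sided p-value from the standard normal distribution
p = 2 * (1 - normcdf(abs(z)));

fprintf('z = %.3f, p = %.4f\n', z, p)
```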



Jeff Miller
on 14 Sep 2021

As I understand it, the fundamental question is whether tests A & B agree better than tests A & C (or worse, depending on how tests B and C are labelled), by more than could be attributed to chance. The null hypothesis is that the agreement between A & B equals the agreement between A & C.

The most straightforward test for this case is the chi-square test for independence. Imagine the data summarized in a 2x2 table like this:

|             | Tests agree | Tests disagree |
|-------------|-------------|----------------|
| A & B group | 57          | 17             |
| A & C group | 35          | 8              |

with total N's of 74 in the first group and 43 in the second group. MATLAB's 'crosstab' command will compute that chi-square test for you. See this answer for an explanation of how to format the data and run the test.
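As a minimal sketch, here is one way that crosstab call could look using the counts from the table above (the variable names are illustrative). crosstab works on per-observation grouping vectors, so the 2x2 counts are first expanded with repelem:

```matlab
% Chi-square test of independence on the 2x2 agreement table.
% The counts (57, 17, 35, 8) come from the example table above.
pair  = [repelem(1, 57+17), repelem(2, 35+8)]';                            % 1 = A & B pair, 2 = A & C pair
agree = [repelem(1, 57), repelem(0, 17), repelem(1, 35), repelem(0, 8)]';  % 1 = agree, 0 = disagree

[tbl, chi2, p] = crosstab(pair, agree);   % tbl reproduces the 2x2 table
fprintf('chi-square = %.3f, p = %.4f\n', chi2, p)
```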

Cohen's Kappa is a useful numerical measure of the extent of agreement, but it isn't really optimal for deciding whether the levels of agreement are different for the two pairs of tests.

