Module stikpetP.other.thumb_cohen_kappa

import pandas as pd

def th_cohen_kappa(k, qual="landis"):
    '''
    Rule of thumb for Cohen's kappa
    -------------------------------
    
    Simple function to classify a Cohen's kappa effect size using a rule of thumb.
    
    Parameters
    ----------
    k : float
        the Cohen's kappa value
    qual : {'landis', 'altman', 'fleiss', 'banerjee'}, optional
        which set of rules of thumb to use. Default is 'landis'.
    
    Returns
    -------
    pandas.DataFrame
        A dataframe with the following columns:
    
        * *classification*, the qualification of the effect size
        * *reference*, a reference for the rule of thumb used
    
    Notes
    -----
    Landis and Koch (1977, p. 165):
    
    | k | Interpretation |
    |---|----------------|
    | -1.00 ≤ k < 0.00 | poor |
    | 0.00 ≤ k < 0.20 | slight |
    | 0.20 ≤ k < 0.40 | fair |
    | 0.40 ≤ k < 0.60 | moderate |
    | 0.60 ≤ k < 0.80 | substantial |
    | 0.80 ≤ k ≤ 1.00 | almost perfect |

    Altman (1991, p. 408):
    
    | k | Interpretation |
    |---|----------------|
    | -1.00 ≤ k < 0.00 | poor |
    | 0.00 ≤ k < 0.20 | slight |
    | 0.20 ≤ k < 0.40 | fair |
    | 0.40 ≤ k < 0.60 | moderate |
    | 0.60 ≤ k < 0.80 | good |
    | 0.80 ≤ k ≤ 1.00 | very good |

    Fleiss et al. (2003, p. 609) and Banerjee et al. (1999, p. 6):
    
    | k | Interpretation |
    |---|----------------|
    | -1.00 ≤ k < 0.40 | poor |
    | 0.40 ≤ k < 0.75 | fair to good |
    | 0.75 ≤ k ≤ 1.00 | excellent |
    
    Before, After and Alternatives
    ------------------------------
    Before using this function you need to obtain a Cohen's kappa value:
    * [es_cohen_kappa](../effect_sizes/eff_size_cohen_kappa.html#es_cohen_kappa), or use
    * [es_bin_bin](../effect_sizes/eff_size_bin_bin.html#es_bin_bin)
    
    References
    ----------
    Altman, D. G. (1991). *Practical statistics for medical research*. Chapman and Hall.
    
    Banerjee, M., Capozzoli, M., McSweeney, L., & Sinha, D. (1999). Beyond kappa: A review of interrater agreement measures. *Canadian Journal of Statistics, 27*(1), 3–23. https://doi.org/10.2307/3315487
    
    Fleiss, J. L., Levin, B., & Paik, M. C. (2003). *Statistical methods for rates & proportions* (3rd ed.). Wiley-Interscience.
    
    Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. *Biometrics, 33*(1), 159–174. https://doi.org/10.2307/2529310
    
    Author
    ------
    Made by P. Stikker
    
    Companion website: https://PeterStatistics.com  
    YouTube channel: https://www.youtube.com/stikpet  
    Donations: https://www.patreon.com/bePatron?u=19398076
    
    '''
    
    
    if qual == "landis":
        ref = "Landis and Koch (1977, p. 165)"
        if k < 0:
            qual = "poor"
        elif k < 0.2:
            qual = "slight"
        elif k < 0.4:
            qual = "fair"
        elif k < 0.6:
            qual = "moderate"
        elif k < 0.8:
            qual = "substantial"
        else:
            qual = "almost perfect"

    elif qual == "altman":
        ref = "Altman (1991, p. 408)"
        if k < 0:
            qual = "poor"
        elif k < 0.2:
            qual = "slight"
        elif k < 0.4:
            qual = "fair"
        elif k < 0.6:
            qual = "moderate"
        elif k < 0.8:
            qual = "good"
        else:
            qual = "very good"

    elif qual == "fleiss" or qual == "banerjee":
        if qual == "fleiss":
            ref = "Fleiss et al. (2003, p. 609)"
        else:
            ref = "Banerjee et al. (1999, p. 6)"
        if k < 0.4:
            qual = "poor"
        elif k < 0.75:
            qual = "fair to good"
        else:
            qual = "excellent"

    else:
        # guard against an unrecognised option; without this branch, ref would be
        # unbound and the DataFrame construction below would raise a NameError
        raise ValueError("qual must be one of 'landis', 'altman', 'fleiss', 'banerjee'")

    results = pd.DataFrame([[qual, ref]], columns=["classification", "reference"])
    
    return results
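The cascaded if/elif branches above can also be written table-driven, which keeps each rule-of-thumb set's thresholds, labels, and reference together and makes adding another set a data change rather than a code change. A minimal sketch, not part of stikpetP; the `RULES` dict and `classify` function are illustrative names, and the 'banerjee' entry would mirror 'fleiss' with its own reference:

```python
import bisect

# Each entry: (reference, upper bounds of the intervals, labels).
# A kappa below the i-th bound (and not below the previous one) gets labels[i];
# a kappa at or above the last bound gets the final label.
RULES = {
    "landis": ("Landis and Koch (1977, p. 165)",
               [0.0, 0.2, 0.4, 0.6, 0.8],
               ["poor", "slight", "fair", "moderate",
                "substantial", "almost perfect"]),
    "altman": ("Altman (1991, p. 408)",
               [0.0, 0.2, 0.4, 0.6, 0.8],
               ["poor", "slight", "fair", "moderate", "good", "very good"]),
    "fleiss": ("Fleiss et al. (2003, p. 609)",
               [0.4, 0.75],
               ["poor", "fair to good", "excellent"]),
}

def classify(k, qual="landis"):
    ref, bounds, labels = RULES[qual]
    # bisect_right finds how many bounds are <= k, which is exactly
    # the index of the matching label under the half-open intervals above
    return labels[bisect.bisect_right(bounds, k)], ref
```

For instance, `classify(0.62)` yields `('substantial', 'Landis and Koch (1977, p. 165)')`, matching the Landis and Koch table above. `bisect_right` (rather than `bisect_left`) is what makes the interval boundaries behave like the `k < bound` comparisons in the function itself.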
