Module stikpetP.other.thumb_cle
import pandas as pd
from ..other.thumb_rank_biserial import th_rank_biserial
from ..other.thumb_cohen_d import th_cohen_d
from ..effect_sizes.convert_es import es_convert
def th_cle(cle, qual="vd", convert="no"):
'''
Rules of Thumb for Common Language Effect Size
----------------------------------------------
This function will give a qualification (classification) for a Common Language Effect Size (/ Vargha-Delaney A / Probability of Superiority)
The measure is also described at [PeterStatistics.com](https://peterstatistics.com/Terms/EffectSizes/CommonLanguageEffectSize.html)
Parameters
----------
cle : float
the common language effect size
qual : {"vd", others via conversion}, optional
rules-of-thumb to use, currently only 'vd' for Vargha-Delaney, otherwise a converted measure.
convert : {"no", "rb", "cohen_d"}, optional string
in case to use a rule-of-thumb from a converted measure. Either "no", "rb" for rank-biserial, or "cohen_d" for Cohen d.
Returns
-------
results : a pandas dataframe with the following columns
* *classification*, the qualification of the effect size
* *reference*, a reference for the rule of thumb used
Notes
-----
"vd" => Vargha and Delaney (2000, p. 106)
|\\|0.5 - CLE\\|| Interpretation|
|---------------|---------------|
|0.00 < 0.06 | negligible |
|0.06 < 0.14 | small |
|0.14 < 0.21 | medium |
|0.21 or more | large |
The CLE can be converted to a Rank Biserial Coefficient using:
$$r_b = 2\\times CLE - 1$$
Rules of thumb from the **th_rank_biserial()** function could then be used by setting *convert="rb"*, where *qual* can be any of the options in th_rank_biserial().
This in turn can be converted to Cohen's d using (Marfo & Okyere, 2019, p. 4):
$$d = 2\\times \\phi^{-1}\\left(-\\frac{1}{r_b - 2}\\right)$$
Rules of thumb from the **th_cohen_d()** function could then be used by setting *convert="cohen_d"*, where *qual* can be any of the options in th_cohen_d().
Before, After and Alternatives
------------------------------
Before this you might want to obtain the measure:
* [es_common_language_os](../effect_sizes/eff_size_common_language_os.html#es_common_language_os) for the Common Language Effect Size for one-sample
* [es_common_language_is](../effect_sizes/eff_size_common_language_is.html#es_common_language_is) for the Common Language Effect Size for independent samples
The function uses the convert function and corresponding rules of thumb:
* [es_convert](../effect_sizes/convert_es.html#es_convert) for the conversions
* [th_rank_biserial](../other/thumb_rank_biserial.html#th_rank_biserial) for options for rules of thumb when converting to Rank Biserial
* [th_cohen_d](../other/thumb_cohen_d.html#th_cohen_d) for options for rules of thumb when converting to Cohen d.
References
----------
Marfo, P., & Okyere, G. A. (2019). The accuracy of effect-size estimates under normals and contaminated normals in meta-analysis. *Heliyon, 5*(6), e01838. doi:10.1016/j.heliyon.2019.e01838
Vargha, A., & Delaney, H. D. (2000). A critique and improvement of the CL common language effect size statistics of McGraw and Wong. *Journal of Educational and Behavioral Statistics, 25*(2), 101–132. doi:10.3102/10769986025002101
Author
------
Made by P. Stikker
Companion website: https://PeterStatistics.com
YouTube channel: https://www.youtube.com/stikpet
Donations: https://www.patreon.com/bePatron?u=19398076
Examples
--------
Example 1: Using Vargha and Delaney rules:
>>> cle = 0.23
>>> th_cle(cle)
classification reference
0 large Vargha and Delaney (2000, p. 106)
Example 2: Convert to rank-biserial and use Sawilowsky rules:
>>> cle = 0.23
>>> th_cle(cle, qual="sawilowsky", convert="rb")
classification reference
0 medium Sawilowsky (2009, p. 599)
'''
if convert=="no":
es = abs(0.5 - cle)
#Vargha and Delaney (2000, p. 106)
if (qual=="vd"):
src = "Vargha and Delaney (2000, p. 106)"
if (es < 0.06): qual = "negligible"
elif (es < 0.14): qual = "small"
elif (es < 0.21): qual = "medium"
else: qual = "large"
results = pd.DataFrame([[qual, src]], columns=["classification", "reference"])
elif convert=="rb":
rb = es_convert(cle ,fr="cle", to="rb")
results = th_rank_biserial(rb, qual=qual)
elif convert=='cohen_d':
rb = es_convert(cle ,fr="cle", to="rb")
d = es_convert(rb ,fr="rb", to="cohend")
results = th_cohen_d(d, qual=qual)
return(results)
Functions
def th_cle(cle, qual='vd', convert='no')
Rules of Thumb for Common Language Effect Size
This function will give a qualification (classification) for a Common Language Effect Size (/ Vargha-Delaney A / Probability of Superiority)
The measure is also described at PeterStatistics.com
Parameters
cle : float
    the common language effect size
qual : {"vd", others via conversion}, optional
    rule-of-thumb to use; currently only 'vd' for Vargha-Delaney, otherwise an option for the converted measure.
convert : {"no", "rb", "cohen_d"}, optional
    measure to convert to before applying a rule of thumb: "no" for no conversion, "rb" for rank-biserial, or "cohen_d" for Cohen d.
Returns
results : a pandas dataframe with the following columns
- classification, the qualification of the effect size
- reference, a reference for the rule of thumb used
Notes
"vd" => Vargha and Delaney (2000, p. 106)
| \|0.5 - CLE\| | Interpretation |
|----------------|----------------|
| 0.00 < 0.06    | negligible     |
| 0.06 < 0.14    | small          |
| 0.14 < 0.21    | medium         |
| 0.21 or more   | large          |
The CLE can be converted to a Rank Biserial Coefficient using:
$$r_b = 2\times CLE - 1$$
Rules of thumb from the th_rank_biserial() function could then be used by setting convert="rb", where qual can be any of the options in th_rank_biserial().
This in turn can be converted to Cohen's d using (Marfo & Okyere, 2019, p. 4):
$$d = 2\times \phi^{-1}\left(-\frac{1}{r_b - 2}\right)$$
Rules of thumb from the th_cohen_d() function could then be used by setting convert="cohen_d", where qual can be any of the options in th_cohen_d(). A short numerical sketch of these two conversions is given below.
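As a quick numerical illustration of the two conversions above, the sketch below applies the formulas directly for CLE = 0.23 (the value used in the examples). It assumes that φ⁻¹ denotes the quantile function of the standard normal distribution (scipy.stats.norm.ppf) and that scipy is installed; within th_cle itself the conversions are delegated to es_convert.

# Sketch of the two conversions from the Notes, assuming phi^{-1} is the
# standard normal quantile function (scipy.stats.norm.ppf).
from scipy.stats import norm

cle = 0.23

# CLE to rank-biserial: r_b = 2*CLE - 1
rb = 2*cle - 1                     # -0.54

# rank-biserial to Cohen d (Marfo & Okyere, 2019, p. 4): d = 2*phi^{-1}(-1/(r_b - 2))
d = 2*norm.ppf(-1/(rb - 2))        # approximately -0.54

print(rb, d)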
Before, After and Alternatives
Before this you might want to obtain the measure:
- es_common_language_os for the Common Language Effect Size for one-sample
- es_common_language_is for the Common Language Effect Size for independent samples
The function uses the convert function and the corresponding rules of thumb (a sketch of this two-step route is shown below):
- es_convert for the conversions
- th_rank_biserial for options for rules of thumb when converting to Rank Biserial
- th_cohen_d for options for rules of thumb when converting to Cohen d.
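As the source code above shows, setting convert="rb" simply chains these two helpers: the CLE is passed through es_convert and the result is handed to th_rank_biserial. A minimal sketch of that two-step route, assuming the submodules can be imported under the package paths implied by the relative imports at the top of the module:

# Two-step route that th_cle(cle, qual="sawilowsky", convert="rb") takes internally;
# the import paths are assumed from the module's relative imports.
from stikpetP.effect_sizes.convert_es import es_convert
from stikpetP.other.thumb_rank_biserial import th_rank_biserial

cle = 0.23
rb = es_convert(cle, fr="cle", to="rb")            # rank-biserial coefficient
results = th_rank_biserial(rb, qual="sawilowsky")  # same rules of thumb as Example 2
print(results)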
References
Marfo, P., & Okyere, G. A. (2019). The accuracy of effect-size estimates under normals and contaminated normals in meta-analysis. Heliyon, 5(6), e01838. doi:10.1016/j.heliyon.2019.e01838
Vargha, A., & Delaney, H. D. (2000). A critique and improvement of the CL common language effect size statistics of McGraw and Wong. Journal of Educational and Behavioral Statistics, 25(2), 101–132. doi:10.3102/10769986025002101
Author
Made by P. Stikker
Companion website: https://PeterStatistics.com
YouTube channel: https://www.youtube.com/stikpet
Donations: https://www.patreon.com/bePatron?u=19398076
Examples
Example 1: Using Vargha and Delaney rules:
>>> cle = 0.23
>>> th_cle(cle)
classification reference
0 large Vargha and Delaney (2000, p. 106)
Example 2: Convert to rank-biserial and use Sawilowsky rules:
>>> cle = 0.23
>>> th_cle(cle, qual="sawilowsky", convert="rb")
classification reference
0 medium Sawilowsky (2009, p. 599)
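To make the Vargha-Delaney rule of thumb from Example 1 concrete, the standalone sketch below reproduces the classification using only the thresholds from the table in the Notes; the helper name classify_cle_vd is purely illustrative and not part of the package.

# Standalone sketch of the Vargha and Delaney (2000, p. 106) thresholds,
# mirroring what th_cle(cle, qual="vd", convert="no") does internally.
def classify_cle_vd(cle):
    es = abs(0.5 - cle)            # distance from the no-effect value of 0.5
    if es < 0.06:
        return "negligible"
    elif es < 0.14:
        return "small"
    elif es < 0.21:
        return "medium"
    else:
        return "large"

print(classify_cle_vd(0.23))       # |0.5 - 0.23| = 0.27, so "large"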