Analysing a binary vs. ordinal variable
2a: Test (Mann-Whitney U test)
The cross table and the multiple-compound bar chart from the example showed that males and females appear to think differently about how much material was available. Since “5.3 The amount of…” is an ordinal variable, we might be tempted to do something with the median of each group. The median is the score in the middle, i.e. 50% of all cases score equal to or higher than the median (for more information about the median see chapter 3.3.1). However, if you look closely at Figure 1 from the previous section, you might notice that the median is the same for each group. Instead of testing if the medians are equal, a check is therefore often done to see if the two groups might have the same distribution.
The Mann-Whitney U test (H. B. Mann & Whitney, 1947) can do this by comparing the mean ranks of each group. Ranks are simply determined by first sorting all the scores on the ordinal variable; the lowest score then gets rank 1, the next one rank 2, etc. The highest rank possible is therefore the total number of cases. If two or more scores are the same, the average of the ranks they would have gotten is used. So, if for example the fourth score is a 9, the fifth is a 9, and the sixth is a 9, then the rank for scores four, five, and six will each be (4 + 5 + 6) / 3 = 5.
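The ranking rule described above can be sketched in a few lines of Python. This is a hypothetical helper written for illustration, not part of any statistics library:

```python
def ranks(scores):
    """Rank scores from low to high; tied scores all get the
    average of the ranks they would have occupied."""
    sorted_scores = sorted(scores)
    avg_rank = {}
    for value in set(scores):
        # 1-based positions this value occupies in the sorted list
        positions = [i + 1 for i, s in enumerate(sorted_scores) if s == value]
        avg_rank[value] = sum(positions) / len(positions)
    return [avg_rank[v] for v in scores]

# The example from the text: the fourth, fifth, and sixth scores are
# all 9, so each gets rank (4 + 5 + 6) / 3 = 5
print(ranks([3, 5, 7, 9, 9, 9]))  # [1.0, 2.0, 3.0, 5.0, 5.0, 5.0]
```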
The exact p-value of the Mann-Whitney U test in the example is .008. This indicates that there is a .008 chance of getting a Mann-Whitney U score as in the sample, or an even more extreme one, if the two groups would have the same distribution in the population. Instead of ‘having a Mann-Whitney U score’ you can also read ‘having differences in mean ranks’.
If this chance is very low (usually below .05 is considered low), then most likely the assumption about the population is not true, and the two groups have different distributions. If the chance is high (usually .05 or above), we conclude that we do not have enough evidence to reject the assumption about the population.
In the example we already noticed from the visualisation that the females thought the amount of activities was sufficient or (far) too little. However, we can also see this from the mean ranks. The mean rank for the females was 14.05, and for the males 25.90. Since ‘5.3 The amount of …’ was coded as 1 = far too little to 5 = far too many, a higher mean rank indicates that that group tended more towards the higher end of the coding of the variable. In this case the higher mean rank for the males suggests that they tended more towards ‘far too many’ than the females.
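The mean-rank comparison described above can be computed directly. The sketch below uses made-up responses (the actual data of the example are not reproduced here); the coding 1 = far too little to 5 = far too many follows the text:

```python
def mean_ranks(group1, group2):
    """Mean rank of each group, with ties getting average ranks."""
    combined = list(group1) + list(group2)
    sorted_c = sorted(combined)
    avg = {}
    for value in set(combined):
        # 1-based positions this value occupies in the sorted list
        positions = [i + 1 for i, s in enumerate(sorted_c) if s == value]
        avg[value] = sum(positions) / len(positions)
    r = [avg[v] for v in combined]
    n1 = len(group1)
    return sum(r[:n1]) / n1, sum(r[n1:]) / len(group2)

# hypothetical responses, coded 1 = far too little … 5 = far too many
females = [1, 2, 2, 3, 3]
males = [3, 4, 5, 5]
print(mean_ranks(females, males))  # (3.2, 7.25)
```

The higher mean rank of the second group shows it tended more towards the upper end of the coding, exactly the interpretation used in the text.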
We can add the results of the Mann-Whitney U test to our report as for example:
An exact Mann-Whitney U test indicated that the mean ranks for male and female were significantly different, U(n1 = 11, n2 = 34) = 285.5, p = .008.
If you do not have an exact p-value, use the approximated one. In that case the test statistic Z is actually used. The report would then go something like:
A Mann-Whitney U test indicated that the mean ranks for male and female were significantly different, Z(n1 = 11, n2 = 34) = 2.845, p = .004.
The Mann-Whitney U test can be performed with SPSS, R (Studio), Excel, Python, or manually.

Manually (Formulas and Example)
The formula for the U statistic is:

$U_i = n_1 n_2 + \frac{n_i(n_i+1)}{2} - R_i$

In this formula $n_i$ is the number of scores in category i, and $R_i$ the sum of the ranks from category i.
Often, however, there are ties, and we then need to adjust for those. We then need the z-statistic:

$z = \frac{U - \frac{n_1 n_2}{2}}{SE}$
The formula for SE (standard error) is:

$SE = \sqrt{\frac{n_1 n_2}{12}\left((N+1) - \frac{\sum T_i}{N(N-1)}\right)}$

The N is the total number of scores (i.e. $n_1 + n_2$) and $T_i$ the tie correction for tie i. For each unique rank the $T_i$ is determined by:

$T_i = t_i^3 - t_i$

Where $t_i$ is the number of scores tied for unique rank i.
Note: different example than the one used in the rest of this section.
We are given the scores of one group of people:
And another group:
Note that the number of scores in the first group is five, and in the second four, so:

$n_1 = 5, n_2 = 4$
If we combine both groups and sort the scores we get:

1, 2, 2, 2, 3, 4, 5, 5, 5
The lowest score is a 1, so this gets a rank of 1. Then there are three 2's, so these get ranks 2, 3, and 4, or on average 3. There is only one 3, so this gets rank 5, only one 4 which gets rank 6, and there are three 5's, so these get ranks 7, 8, and 9, or on average 8. Replacing the original scores with the ranks (average ones), and summing them up we get for the first group:
And for the second:
The U statistic of the first group is:
And for the second group:
We had three 2's, and also three 5's. So for the frequencies of ties we get the sequence:

3, 3

(Ties with frequency 1 can be ignored, since they give $T_i = 1^3 - 1 = 0$.)
Now calculate the adjustment for each frequency of ties:

$T_1 = 3^3 - 3 = 24$, $T_2 = 3^3 - 3 = 24$, so $\sum T_i = 48$
Then the standard error:

$SE = \sqrt{\frac{5 \times 4}{12}\left((9+1) - \frac{48}{9 \times 8}\right)} = \sqrt{\frac{140}{9}} \approx 3.94$
Finally the Z-score. If we use U1:
If we use U2:
For the two-tailed significance we can then use the standard normal distribution. Usually this is found either by using a table or some software, but if you must know, the formula would be:

$p = 2\left(1 - \Phi(|z|)\right)$

where $\Phi$ is the cumulative distribution function of the standard normal distribution.
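The full manual procedure (ranks, U, tie-corrected standard error, z, and the two-tailed p-value) can be put together in Python. The two groups below are hypothetical: the original scores of this example are not listed here, so two made-up groups of five and four scores are used instead:

```python
import math

def mann_whitney(group1, group2):
    """Mann-Whitney U with the tie-corrected normal approximation."""
    n1, n2 = len(group1), len(group2)
    N = n1 + n2
    combined = list(group1) + list(group2)

    # average ranks: tied scores share the mean of the ranks they span
    sorted_c = sorted(combined)
    avg = {}
    for value in set(combined):
        positions = [i + 1 for i, s in enumerate(sorted_c) if s == value]
        avg[value] = sum(positions) / len(positions)
    r = [avg[v] for v in combined]

    R1 = sum(r[:n1])                          # rank sum of group 1
    U1 = n1 * n2 + n1 * (n1 + 1) / 2 - R1     # U statistic of group 1

    # tie correction: T_i = t_i^3 - t_i per unique score
    tie_sum = sum(combined.count(v) ** 3 - combined.count(v)
                  for v in set(combined))
    SE = math.sqrt(n1 * n2 / 12 * ((N + 1) - tie_sum / (N * (N - 1))))

    z = (U1 - n1 * n2 / 2) / SE
    # two-tailed p from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return U1, SE, z, p

# hypothetical groups of five and four scores
U1, SE, z, p = mann_whitney([1, 2, 2, 3, 5], [2, 4, 5, 5])
print(U1, SE, z, p)
```

For this made-up data the rank sum of the first group is 20, giving U1 = 15, SE ≈ 3.94, z ≈ 1.27, and a two-tailed p of about .20, so here we would not reject the assumption of equal distributions.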
The term ‘exact’ might give you the impression that you should always use it, since ‘exact’ sounds better than ‘approximate’. Some might indeed argue for this (for example Berger (2017)), but the ‘exact’ test often requires a lot more computational power (even for computers today), and in some cases there are those who actually claim the approximate test is preferred (see for example Agresti and Coull (1998)).
There are a few other tests that could be performed instead of a Mann-Whitney U test. For example a Fligner-Policello test, independent-samples Mood Median test, Wald-Wolfowitz runs test, and Kolmogorov–Smirnov test.
As a final note, the Mann-Whitney U test is sometimes also called the Wilcoxon-Mann-Whitney test or the Mann-Whitney-Wilcoxon test, because Mann and Whitney expanded on an idea from Wilcoxon.
We are almost done with our analysis, but there is one more thing to do. Besides testing for differences, we also need to indicate how big the differences are. This is done using a so-called effect size, which is the topic of the next section.