The Chi-Square Test of Independence is used to test whether two categorical (nominal) variables are associated with each other. This test assumes that the observations are independent and that the expected frequency for each category is at least 1 (NOTE: no more than 20% of the categories should have expected frequencies less than 5).
Note that this is a non-parametric test. There is no parametric version of a Chi-Square Test of Independence.
Running the above steps will generate the following output: a crosstab table between the variables you selected (i.e., showing how many observations fall into each combination of categories in your data), a Chi-Square Tests table that tells you whether your categorical variables are independent (p > .05) or associated (p < .05), and a Symmetric Measures table that tells you the effect size of the test.
For the Chi-Square Tests table, we generally read the “Pearson Chi-Square” row. The “Value” column tells you your Chi-square (X2) value, and the “Asymptotic Significance (2-sided)” column tells you your p-value (p < .05 is generally considered statistically significant, which would indicate that the variables are associated).
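This guide walks through the SPSS procedure, but if you work in Python instead, SciPy offers an equivalent test via scipy.stats.chi2_contingency. Below is a minimal sketch; the counts are made up for illustration, and the Cramér's V calculation is our own addition, analogous to the effect size reported in the Symmetric Measures table.

```python
# Minimal sketch of a Chi-Square Test of Independence in Python.
# The 2x2 table of observed counts below is made up for illustration.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[12, 18],    # rows = categories of variable 1
                     [20, 10]])   # columns = categories of variable 2

chi2, p, dof, expected = chi2_contingency(observed)
print(f"X2({dof}) = {chi2:.2f}, p = {p:.3f}")

# Check the expected-frequency assumption (all >= 1; no more than
# 20% of cells below 5).
print(expected)

# Cramer's V as an effect size (analogous to SPSS's Symmetric Measures).
n = observed.sum()
v = np.sqrt(chi2 / (n * (min(observed.shape) - 1)))
print(f"Cramer's V = {v:.2f}")
```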
Spearman’s rank-order correlation is used to determine the strength and direction of the relationship between the rankings of two variables. The variables can be ordinal or continuous. This test does not assume the variables are normally distributed. However, the relationship between the ranked values should be monotonic (i.e., a consistently increasing OR decreasing relationship; not increasing AND decreasing).
Note that this is a non-parametric test; you could / should use a Spearman’s rank-order correlation if the normality assumption has been violated for your Pearson correlation (i.e., the parametric equivalent). You can also use this test if you wish to conduct a correlation on ordinal data (note: Pearson’s would not be appropriate here).
Running the above steps will generate the following output: a Correlations table that indicates the Spearman correlation (rho) between the variables, the significance value (p), and the number of observations (n).
Spearman’s rho can range from -1 (perfect negative) to +1 (perfect positive), and indicates the strength and direction of the relationship of the rankings of the two variables; p indicates statistical significance, with < .05 generally considered statistically significant (i.e., indicating a significant correlation between the rankings of the two variables). Here, we see a non-significant weak positive correlation between the two continuous variables.
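If you prefer Python to SPSS, scipy.stats.spearmanr runs the same correlation. A minimal sketch, with hypothetical ratings invented for illustration:

```python
# Minimal sketch of a Spearman's rank-order correlation in Python.
from scipy.stats import spearmanr

# Hypothetical data: ordinal satisfaction ratings and hours slept.
satisfaction = [3, 1, 4, 2, 5, 3, 4, 2, 5, 1]
hours_slept = [6.5, 5.0, 7.0, 6.0, 8.0, 6.0, 7.5, 5.5, 8.5, 5.0]

rho, p = spearmanr(satisfaction, hours_slept)
print(f"rho = {rho:.2f}, p = {p:.3f}, n = {len(satisfaction)}")
```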
The Wilcoxon signed-rank test is used to determine whether the median of a single continuous variable differs from a specified constant (similar to a one-sample t-test) AND / OR whether the medians of two continuous variables measured on the same group of participants differ (similar to a paired-samples t-test). Neither version of this test assumes that the data are normally distributed.
Note that this is a non-parametric test; you could / should use the Wilcoxon signed-rank test if the normality assumption has been violated for your one-sample t-test or a paired-samples t-test (i.e., the parametric equivalents).
Running the above steps will generate the following output: a Hypothesis Test Summary table and a One-Sample Wilcoxon Signed Rank Test Summary table that indicate the results of the test (p < .05 is generally considered statistically significant, which would indicate that the variable median differs from the test value), and a One-Sample Wilcoxon Signed Rank Test histogram that shows the frequency values of the selected column of data with the observed median overlaid on top.
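For Python users, scipy.stats.wilcoxon runs the same test. SciPy does not take a test value directly, so the usual approach is to subtract the hypothesized median from each score first. A minimal sketch with made-up scores:

```python
# Minimal sketch of a one-sample Wilcoxon signed-rank test in Python.
from scipy.stats import wilcoxon

scores = [42, 55, 48, 61, 39, 52, 46, 58, 44, 49]  # hypothetical data
test_value = 50  # hypothesized median

# Subtract the test value so the test asks whether the differences
# are centred on zero (i.e., whether the median equals test_value).
stat, p = wilcoxon([x - test_value for x in scores])
print(f"W = {stat:.1f}, p = {p:.3f}")
```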
Running the above steps will generate the following output: a Hypothesis Test Summary table and a Related-Samples Wilcoxon Signed Rank Test Summary table that indicate the results of the test (p < .05 is generally considered statistically significant, which would indicate that the medians of the two samples of the single group differed), and a Related-Samples Wilcoxon Signed Rank Test histogram that shows the frequency of the rankings (displayed as difference scores between the two samples of the single group). In this example, there are only positive difference scores / rankings because the data were created so that one column of data had higher values than the other column.
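The related-samples version uses the same SciPy function with two paired columns of data. A minimal sketch; as in the example above, the made-up "post" scores are deliberately higher than the "pre" scores:

```python
# Minimal sketch of a related-samples Wilcoxon signed-rank test in Python.
from scipy.stats import wilcoxon

# Hypothetical paired data from the same participants.
pre = [10, 12, 9, 14, 11, 13, 10, 12]
post = [13, 15, 12, 16, 14, 17, 12, 15]

stat, p = wilcoxon(pre, post)
print(f"W = {stat:.1f}, p = {p:.3f}")
```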
The Mann-Whitney U test is used to determine whether two groups’ medians on the same continuous variable differ (similar to an independent samples t-test). This test does not assume that the data are normally distributed, but it does assume that the distributions are the same shape.
Note that this is a non-parametric test; you could / should use the Mann-Whitney U test if the normality assumption has been violated for your independent samples t-test (i.e., the parametric equivalent).
Running the above steps will generate the following output: a Hypothesis Test Summary table and an Independent-Samples Mann-Whitney U Test Summary table that indicate the results of the test (p < .05 is generally considered statistically significant, which would indicate that the medians of the two groups differ), and an Independent-Samples Mann-Whitney U Test histogram that shows the observed frequencies in the fake data (here, the histogram for females is on the left and the histogram for males is on the right).
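Outside SPSS, scipy.stats.mannwhitneyu provides the equivalent test in Python. A minimal sketch with hypothetical scores from two independent groups:

```python
# Minimal sketch of a Mann-Whitney U test in Python.
from scipy.stats import mannwhitneyu

# Hypothetical scores from two independent groups.
group_a = [23, 27, 21, 30, 25, 28, 24]
group_b = [31, 35, 29, 38, 33, 36, 32]

u, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```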
The Kruskal-Wallis H test is used to determine whether three or more groups’ medians on the same continuous variable differ (similar to a one-way ANOVA, with independent groups). This test does not assume that the data are normally distributed, but it does assume the distributions are the same shape.
Note that this is a non-parametric test; you could / should use the Kruskal-Wallis H test if the normality assumption has been violated for your one-way ANOVA with independent groups (i.e., the parametric equivalent).
Running the above steps will generate the following output: a Hypothesis Test Summary table and an Independent-Samples Kruskal-Wallis Test Summary table that indicate the results of the test (p < .05 is generally considered statistically significant, which would indicate that the medians of the k groups differ but does NOT indicate where this difference is), an Independent-Samples Kruskal-Wallis Test boxplot of the different categorical groups' values on the continuous variable, and a Pairwise Comparisons table that indicates which (if any) of the groups are different from one another (if p < .05, the two groups are statistically significantly different).
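In Python, scipy.stats.kruskal runs the same omnibus test. A minimal sketch with made-up data for three groups; note that, unlike SPSS, SciPy does not produce the pairwise comparisons automatically:

```python
# Minimal sketch of a Kruskal-Wallis H test in Python.
from scipy.stats import kruskal

# Hypothetical scores from three independent groups.
group_1 = [12, 15, 11, 14, 13]
group_2 = [18, 21, 19, 22, 20]
group_3 = [25, 28, 26, 29, 27]

h, p = kruskal(group_1, group_2, group_3)
print(f"H = {h:.2f}, p = {p:.3f}")

# A significant H would need follow-up pairwise tests (e.g.,
# Mann-Whitney U tests with a multiple-comparison correction).
```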
The Friedman test is used to determine whether one group’s rankings on three or more continuous or ordinal variables differ (similar to a repeated measures one-way ANOVA). This test does not assume that the data are normally distributed, but it does assume the distributions are the same shape.
Note that this is a non-parametric test; you could / should use the Friedman test if the normality assumption has been violated for your repeated measures one-way ANOVA (i.e., the parametric equivalent).
Running the above steps will generate the following output: a Hypothesis Test Summary table and a Related-Samples Friedman’s Two-Way Analysis of Variance by Ranks Summary table that indicate the results of the test (p < .05 is generally considered statistically significant, which would indicate that the rankings between the three or more conditions differed but does NOT indicate where this difference is), a Related-Samples Friedman’s Two-Way Analysis of Variance by Ranks graph that shows the frequency of the rankings in each condition (in this example, the ranks are all “1”, “2”, and “3” because the data were created so that the different columns of data were distinct, i.e., did not overlap at all), and a Pairwise Comparisons table that indicates which (if any) of the conditions are different from one another (if p < .05, the two conditions are statistically significantly different).
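The Python equivalent is scipy.stats.friedmanchisquare, which takes one argument per condition. A minimal sketch; as in the example above, the made-up columns are distinct and do not overlap at all:

```python
# Minimal sketch of a Friedman test in Python.
from scipy.stats import friedmanchisquare

# Hypothetical data: one group measured under three conditions;
# each position corresponds to the same participant.
cond_1 = [1.2, 1.5, 1.1, 1.4, 1.3, 1.6]
cond_2 = [2.2, 2.5, 2.1, 2.4, 2.3, 2.6]
cond_3 = [3.2, 3.5, 3.1, 3.4, 3.3, 3.6]

chi2, p = friedmanchisquare(cond_1, cond_2, cond_3)
print(f"X2 = {chi2:.2f}, p = {p:.3f}")
```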
Please note that there is no non-parametric alternative to a factorial ANOVA. If your factorial ANOVA does not meet the assumptions, you could try transforming your dependent variable and then re-checking all of the assumptions.
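As a quick illustration of one common transformation, here is a sketch of a log transform in Python; whether this (or, say, a square-root or reciprocal transform) is appropriate depends on the shape of your data:

```python
# Minimal sketch of a log transform of a positively skewed DV.
import numpy as np

dv = np.array([1.2, 3.5, 2.1, 15.8, 4.4, 22.9, 2.7, 8.3])  # made up
dv_log = np.log(dv)  # requires all values to be positive
print(dv_log.round(2))
```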
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.