Group Differences
This section covers statistical tests for comparing groups. Select the test based on the structure of the data.
Continuous (interval or ratio) and Ordinal Outcomes
- Compare a continuous dependent variable between the two levels of a binomial independent variable with an independent samples t-test (5.1 Independent Samples t-Test). Fall back to the nonparametric Wilcoxon rank sum test (5.2 Wilcoxon Rank Sum Test) if the t-test assumptions fail (first sketch after this list).
- A special case arises when the samples are paired. A paired comparison is effectively a one-sample test on the within-pair differences. Use the paired samples t-test (6 Paired Samples t-Test) or the nonparametric Wilcoxon signed-rank test (6.1 Wilcoxon Signed-Rank Test) (second sketch after this list).
- If the independent categorical variable is multinomial, conduct a one-way ANOVA (7.1 One-way ANOVA) or the nonparametric Kruskal-Wallis test (7.2 Kruskal–Wallis Test) (third sketch after this list).
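A minimal sketch of the two-group comparison, assuming Python with NumPy and SciPy; the vectors `group_a` and `group_b` are hypothetical outcome values for the two levels of the binomial factor, not data from these notes.

```python
# Sketch: independent samples t-test with a Wilcoxon rank sum (Mann-Whitney) fallback.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)   # outcome, factor level A (hypothetical)
group_b = rng.normal(loc=11.0, scale=2.0, size=30)   # outcome, factor level B (hypothetical)

# Welch's t-test (does not assume equal variances).
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=False)

# Nonparametric fallback if the t-test assumptions are doubtful.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test:        t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"rank sum test: U = {u_stat:.1f}, p = {u_p:.3f}")
```

The paired case can be sketched the same way, again under the assumption of SciPy and hypothetical before/after measurements on the same subjects.

```python
# Sketch: paired samples t-test and Wilcoxon signed-rank test on hypothetical
# before/after measurements from the same subjects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
before = rng.normal(loc=100.0, scale=10.0, size=25)
after = before + rng.normal(loc=2.0, scale=5.0, size=25)  # paired follow-up values

# The paired t-test is a one-sample t-test on the within-pair differences.
t_stat, t_p = stats.ttest_rel(after, before)

# Nonparametric alternative on the same differences.
w_stat, w_p = stats.wilcoxon(after, before)

print(f"paired t-test:    t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"signed-rank test: W = {w_stat:.1f}, p = {w_p:.3f}")
```

For a multinomial factor, a sketch of the one-way ANOVA and its rank-based alternative, with three hypothetical groups:

```python
# Sketch: one-way ANOVA and Kruskal-Wallis test across three hypothetical factor levels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
g1 = rng.normal(10.0, 2.0, 30)
g2 = rng.normal(11.0, 2.0, 30)
g3 = rng.normal(12.0, 2.0, 30)

f_stat, f_p = stats.f_oneway(g1, g2, g3)   # parametric one-way ANOVA
h_stat, h_p = stats.kruskal(g1, g2, g3)    # rank-based alternative

print(f"one-way ANOVA:  F = {f_stat:.2f}, p = {f_p:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {h_p:.3f}")
```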
Discrete (count) Outcomes
- Compare the proportions of a binomial outcome between two levels of a nominal independent variable with a two-proportion z-test (8.1 Two Proportion Z-Test), the chi-square test of homogeneity (8.2 Chi-Square Test of Homogeneity), or Fisher's exact test (8.3 Fisher’s Exact Test) (first sketch after this list).
- The chi-square test of homogeneity (9.1 Chi-Square Test of Homogeneity) is the primary test for comparing a discrete dependent variable across the levels of a binomial or multinomial independent categorical variable. Fall back to the nonparametric Fisher's exact test (9.2 Fisher’s Exact Test) if the sample size is small. Handle the special case of paired samples with the pairwise proportion test (10.1 Pairwise Prop Test) or the nonparametric McNemar's test (10.2 McNemar’s Test) (second sketch after this list).
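A minimal sketch of the two-group proportion comparison, assuming Python with SciPy and statsmodels; the 2x2 counts are hypothetical and chosen only to make the three tests runnable side by side.

```python
# Sketch: comparing a binomial outcome between two groups three ways,
# on a hypothetical 2x2 table of successes and failures.
import numpy as np
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

successes = np.array([45, 30])   # successes in group 1 and group 2 (hypothetical)
totals = np.array([100, 100])    # sample sizes
table = np.array([[45, 55],
                  [30, 70]])     # rows = groups, columns = success / failure

z_stat, z_p = proportions_ztest(successes, totals)          # two-proportion z-test
chi2, chi_p, dof, _ = stats.chi2_contingency(table)         # chi-square test of homogeneity
odds, fisher_p = stats.fisher_exact(table)                  # Fisher's exact test (2x2)

print(f"two-proportion z: z = {z_stat:.2f}, p = {z_p:.3f}")
print(f"chi-square:       X2 = {chi2:.2f}, p = {chi_p:.3f}")
print(f"Fisher's exact:   OR = {odds:.2f}, p = {fisher_p:.3f}")
```

For the multi-group and paired cases, a sketch under the same assumptions; note that SciPy's `fisher_exact` only handles 2x2 tables, so the small-sample exact fallback for larger tables is not shown here.

```python
# Sketch: chi-square test of homogeneity across three hypothetical groups,
# plus McNemar's test for a paired binary outcome.
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

# Rows = groups, columns = outcome categories (hypothetical counts).
table = np.array([[30, 20],
                  [25, 25],
                  [15, 35]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"homogeneity: X2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Paired binary outcomes: 2x2 table of before/after agreement and disagreement.
paired = np.array([[40, 5],
                   [15, 40]])
res = mcnemar(paired, exact=True)
print(f"McNemar: statistic = {res.statistic:.1f}, p = {res.pvalue:.3f}")
```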