# post hoc comparisons multcompare

Arianna Menardi on 5 Jan 2021
Edited: Scott MacKenzie on 19 Oct 2021
Dear Experts,
I'm using the multcompare function to carry out post hoc comparisons in MATLAB R2017b. The output I get includes an n-by-7 matrix, where n is the number of comparisons: the first two columns are the groups being compared, the 3rd column is the difference in means, the 4th column is the standard error, the 5th column is the associated p-value, and the 6th and 7th columns are the upper and lower bounds of the 95% confidence interval. However, I'm missing the t-value for each comparison. Does anyone know how to get them?

Scott MacKenzie on 31 Jul 2021
Edited: Scott MacKenzie on 31 Jul 2021
There are no t-values in the multiple pairwise comparisons tests performed by multcompare. These tests -- Tukey-Kramer, Scheffe, Bonferroni, LSD, or Dunn-Sidak -- always compare more than two sets of data. They yield the sorts of statistics cited in your question, but not a t-statistic. A t-statistic is produced by a t-test, which performs a single comparison between two sets of data.
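For contrast, here is where a t-statistic does come from: a plain two-sample t-test. A minimal sketch (assuming the Statistics and Machine Learning Toolbox's ttest2; the data are conditions 1 and 2 from the example further down):

```matlab
% Two-sample t-test: compares exactly two sets of data and yields a t-statistic.
x = [8 10 9 10 9];   % condition 1 from the example data below
y = [7 8 5 8 5];     % condition 2
[h, p, ci, st] = ttest2(x, y);   % st.tstat is the t-statistic, st.df the degrees of freedom
fprintf('t(%d) = %.3f, p = %.4f\n', st.df, st.tstat, p);
```

Note that this uncorrected test is not a substitute for a proper multiple-comparisons procedure; it only illustrates the single-comparison setting in which a t-statistic arises.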
##### 2 CommentsShowHide 1 older comment
Scott MacKenzie on 19 Oct 2021
I understand the confusion. There is something called the Bonferroni or Bonferroni-Dunn test, which is a pairwise comparisons test. There is also something called the Bonferroni correction, which is an adjustment to alpha, as you note.
The Bonferroni pairwise comparisons test uses the Bonferroni correction in computing a t-statistic which in turn is used to compute a critical difference (cd) for the comparisons. I've never seen this t-statistic reported in a research paper; it's just used internally in computing the cd. Pairs are considered to differ significantly if the difference in their means exceeds the cd.
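The computation described above can be written out directly. This is only an illustrative sketch of the traditional calculation, not my actual script; the variable names (MSE, dfError) are mine, and the values are those for the example data given below:

```matlab
% Bonferroni critical difference (cd) for a one-way design with equal group sizes.
alpha   = 0.05;   % familywise alpha
k       = 3;      % number of pairwise comparisons
n       = 5;      % observations per group
MSE     = 1.9;    % mean square error from the one-way ANOVA on the example data
dfError = 12;     % error degrees of freedom

tCrit = tinv(1 - alpha/(2*k), dfError);  % Bonferroni-corrected two-tailed t-statistic
cd = tCrit * sqrt(2*MSE/n);              % critical difference for a pair of means
```

With these numbers cd comes out near 2.42; the 2.438 in the output below reflects a slightly different rounding or table-lookup convention, but the significance decisions are the same.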
What I describe above is the traditional way of doing multiple comparisons tests. I use my own MATLAB script (which does not use multcompare) for these sorts of analyses. As an example, for the following data
% 8 7 4
% 10 8 8
% 9 5 7
% 10 8 5
% 9 5 7
the output of a Bonferroni pairwise comparisons test typically looks something like this:
% ------------------------------------------------------------
% --------- Pairwise Comparisons (Bonferroni-Dunn) -----------
% ------------------------------------------------------------
% Pair 1:2 --> 2.600 > 2.438 ? * (significant)
% Pair 1:3 --> 3.000 > 2.438 ? * (significant)
% Pair 2:3 --> 0.400 > 2.438 ? -
% ------------------------------------------------------------
The critical difference above is 2.438. The difference between the means for the pair 1:2 comparison is 2.600. Since 2.600 > 2.438, conditions 1 and 2 are considered to differ significantly. Every stats package I've used generates output more-or-less like this for a pairwise comparisons test.
In a research paper, usually you just state that such-and-such a pairwise comparisons test was used and so-and-so pairs were found to differ significantly (p < .05) -- or something like that. A test statistic is not reported.
MATLAB's multcompare function is different. Instead of reporting a critical difference, a p-value is computed for each comparison. A significant difference is concluded typically when p < .05. Here are the data and the MATLAB code using multcompare for the same analysis as above:
M = [8, 7, 4; 10, 8, 8; 9, 5, 7; 10, 8, 5; 9, 5, 7];
[~, ~, stats] = anova1(M, {'c1', 'c2', 'c3'}, 'off');
c = multcompare(stats, 'display', 'off', 'ctype', 'bonferroni')
c = 3×6
1.0000    2.0000    0.1769    2.6000    5.0231    0.0343
1.0000    3.0000    0.5769    3.0000    5.4231    0.0146
2.0000    3.0000   -2.0231    0.4000    2.8231    1.0000
The pairs are identified in the first two columns. The difference in the means is in the 4th column and is bracketed by the 95% CI in columns 3 and 5. The p-value is in column 6. No test statistic is provided. As you can see, the result is the same: significant differences for pair 1:2 and pair 1:3.
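That said, if you really want a per-comparison t-statistic, one can be recovered from the same quantities, since each difference in means divided by its standard error is a t-value. A hedged sketch continuing from the code above (it assumes equal group sizes and uses stats.s, the root mean square error returned by anova1):

```matlab
% Recover per-comparison t-statistics from the anova1/multcompare results.
M = [8, 7, 4; 10, 8, 8; 9, 5, 7; 10, 8, 5; 9, 5, 7];
[~, ~, stats] = anova1(M, {'c1', 'c2', 'c3'}, 'off');
c = multcompare(stats, 'display', 'off', 'ctype', 'bonferroni');

n  = 5;                     % observations per group (equal-n design)
se = stats.s * sqrt(2/n);   % standard error of each pairwise difference
t  = c(:,4) ./ se;          % t = difference in means / standard error
```

For these data t comes out to roughly 2.98, 3.44, and 0.46 for the three pairs, consistent with the p-values in column 6.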
Hope this helps.