Answers to Practice Questions for Final:

(annotated with side comments in places)

Part IV: ADVANCED PROCEDURES

1. With a large sample, even very small differences can be statistically significant, because you have high power. But significant is not the same as important. For example, if the difference in aggressiveness between meat-eaters and vegetarians were something like 0.2 on a 10-point scale, this might translate into a hardly detectable difference in actual behavior. The difference would be real, but not particularly important.

[To give another example: suppose I come up with a new coaching strategy for the GRE. I coach 500 students, and their scores are 2 points higher (significantly different) than those of students who receive no coaching. A 2-point difference on the GRE is hardly worth the time and effort--it won't make any appreciable difference for getting into graduate school.]
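A minimal numerical sketch of the same point, using simulated scores -- the means, standard deviation, and sample size below are made up for illustration only:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    coached = rng.normal(loc=152, scale=8, size=500)    # assumed GRE-like scores
    uncoached = rng.normal(loc=150, scale=8, size=500)  # true difference is only 2 points

    t, p = stats.ttest_ind(coached, uncoached)
    d = (coached.mean() - uncoached.mean()) / 8         # rough effect size in SD units
    print(f"p = {p:.3g}, d = {d:.2f}")                  # p will almost always be "significant"; d stays near 0.25

With 500 students per group, even this 2-point difference is flagged as statistically significant in nearly every run, yet it amounts to only about a quarter of a standard deviation.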

2. The F distribution is strongly positively skewed--it only has one "tail." When the variance estimated from the spread of the means is greater than the variance estimated from the scores within groups, F will be larger than 1. Significant Fs are all larger than 1, in the same "positive" right-hand tail. The test is not directional because it doesn't matter WHICH means are higher or lower; the between-groups variance is simply a measure of how spread out those means are.
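A quick illustration of why the test has no direction, using made-up numbers (scipy's one-way ANOVA is used here just for the arithmetic):

    from scipy import stats

    group_a = [4, 5, 6, 5, 4]
    group_b = [8, 9, 10, 9, 8]

    F1, p1 = stats.f_oneway(group_a, group_b)   # A lower, B higher
    F2, p2 = stats.f_oneway(group_b, group_a)   # groups listed in the other order
    print(F1, F2)                               # identical F -- no "direction"

The F ratio only registers how far apart the group means are relative to the spread within groups, not which group came out on top.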

3. Because our test statistic, F, is a ratio of variances. The variance "within" is calculated by pooling the estimated variance from each cell. But if we can't assume that all the cells are estimating the "same" population variance, then it's hard to know what an average of these estimated variances would mean.
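To make "pooling" concrete, here is a small sketch with made-up cell sizes and variances -- the within-groups variance is just a weighted average of the cell variances, so it only makes sense if the cells are all estimating roughly the same thing:

    def pooled_variance(ns, cell_variances):
        """Average the cell variances, weighted by each cell's degrees of freedom."""
        numerator = sum((n - 1) * v for n, v in zip(ns, cell_variances))
        denominator = sum(n - 1 for n in ns)
        return numerator / denominator

    # Similar cell variances: the pooled value (4.5) describes every cell reasonably well.
    print(pooled_variance([10, 10, 10], [4.0, 5.0, 4.5]))
    # Wildly unequal cell variances: the "average" (about 14.3) describes no cell well.
    print(pooled_variance([10, 10, 10], [1.0, 2.0, 40.0]))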

4.1 f Cronbach's alpha is a commonly reported measure of reliability
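For the curious, a bare-bones sketch of how alpha is computed (the item scores below are made up): it compares the sum of the individual item variances with the variance of the total score.

    import numpy as np

    def cronbach_alpha(items):
        """items: rows = respondents, columns = scale items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        sum_item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - sum_item_vars / total_var)

    scores = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 2]]  # hypothetical 3-item scale
    print(cronbach_alpha(scores))  # high here, because the made-up items track each other closely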

4.2 d stepwise, because the researcher doesn't decide which variable is most important, which enters first, etc. -- the computer program does

4.3 g A clue here is the large number of items (no independent or dependent variables), which the researcher is trying to reduce to a small number of "factors"

4.4 c hierarchical because here, the researcher determines the "hierarchy" -- which variable "enters" the equation first, second, and so on

4.5 i Key indicator here is TWO dependent variables. So we have a multivariate procedure. The M in MANOVA means multivariate--multiple dependent variables.

4.6 a Data transformation is not a test--instead it is a procedure that people perform to get their data "ready" for an inferential test

4.7 b Key clue here is "rank" -- Suzanne has ORDINAL data -- first, second, third, instead of quantitative data. Students ranked 1 & 2 in the class may be very close in GPA, while the student ranked 3 is very different. Ranks don't capture the actual distance between the scores on which the ranks are based.

4.8 e "if the two might be related" There are two procedures that "control" or "covary out" other variables -- partial correlation and ANCOVA. But this is specifically a study looking at association (related), not difference, so it's partial correlation, not ANCOVA

4.9 h This is a 2 X 2 design: boys/girls and China/U.S. The addition of a "control" variable -- the covariate of reading ability -- makes this an ANCOVA instead of an ANOVA.