Notes from Week 9, Wed 25th
lecture on the group projects.
II. Participation points
For the group projects: Instead
of turning in an appendix of
participants with the final report,
please write out a list of
participants with full names and
your group number and give it to us
Monday or Tuesday of next week.
** Participant form in folders --
one for each project team. **
III. Dealing with the data from
the group project
RESULTS SECTION:
Visual: Distribution of scores
Numerical: central tendency &
variability
Reliability: Subjective scoring
EVALUATION OF INSTRUMENT
Validity: Are you measuring what
you intended to measure?
Visual: Distribution of scores
Create the graph either (a) by hand
or (b) with a program like
PowerPoint or Excel (only if you
already know how to use it -- by
hand is just fine as long as it's
neat).
X-axis: Numerical value of scores
Y-axis: Number of people who got
each score (or range of
scores)
Label graph so it's easy to read.
If you group scores on the X-axis
(e.g., 1-3, 4-6, 7-9), use 6-10
intervals, all equal.
A good test should yield a roughly
normal-looking distribution (bell
shape), since intelligence/
ability (what you are trying to
measure) is distributed this way.
If the test is too hard, scores
will clump at the bottom; if it's
too easy, they will clump at the top.
Note: Different types of graphs
might be appropriate depending on
your data. If you measured two
different abilities, two separate
graphs may be more appropriate
(check to see if distributions
look different). If you have
sorted people into categories
(e.g., visual thinker vs. verbal
thinker), then you won't have
scores on the X-axis; you'll have
the categories. The Y-axis, again,
will be the frequency (number) of
people in each category.
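For example, the score histogram described above could be made in Python roughly like this. This is only a sketch: the scores are made up, and it assumes the matplotlib package is installed. PowerPoint, Excel, or a neat hand-drawn graph are just as good.

import matplotlib.pyplot as plt

# Made-up scores for 15 people (replace with your own data)
scores = [4, 6, 7, 8, 9, 9, 10, 11, 11, 12, 12, 13, 14, 15, 17]

# Six equal intervals of three points each (0-2, 3-5, ..., 15-17)
bins = [0, 3, 6, 9, 12, 15, 18]

plt.hist(scores, bins=bins, edgecolor="black")
plt.xlabel("Score")                # X-axis: numerical value of scores
plt.ylabel("Number of people")     # Y-axis: how many people fell in each range
plt.title("Distribution of test scores")
plt.savefig("score_distribution.png")
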
Numerical: central tendency &
variability
Central tendency: Mean, median,
mode
Mean: add and divide by N --- ALL scores count
Median: find the middle score (or division point where half are above and half below)
Mode: look for the most frequent score
Symmetric distribution: Just give mean.
If seriously skewed (bunched up
at the top or bottom), also give
the mode OR the median.
Note: If your test is classifying
people into CATEGORIES (e.g.,
autocratic leader vs. democratic
leader) you won't be able to use
the mean -- mode is the only one
of the three that applies.
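A quick sketch of all three measures in Python, using the standard library's statistics module (the scores here are made up):

import statistics

scores = [1, 2, 2, 5, 8, 9]

mean = statistics.mean(scores)      # add them all up and divide by N
median = statistics.median(scores)  # middle score (here, midway between 2 and 5)
mode = statistics.mode(scores)      # most frequent score

print(mean, median, mode)           # 4.5 3.5 2

# For categorical results, only the mode applies:
statistics.mode(["autocratic", "democratic", "democratic"])   # 'democratic'
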
Variability: Range or standard
deviation
Range: Max - min + 1
Example: 1 2 5 8 9 [9-1+1
= 9]
Standard deviation:
**Use only if you know what it
means and how to calculate.**
Basically, the standard deviation
is the average distance of a
score from the mean.
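A sketch of both in Python (made-up scores; again, only report the standard deviation if you understand it):

import statistics

scores = [1, 2, 5, 8, 9]

# Range as defined above: max - min + 1
score_range = max(scores) - min(scores) + 1    # 9 - 1 + 1 = 9

# Sample standard deviation (roughly, how far scores sit from the mean)
sd = statistics.stdev(scores)

print(score_range, round(sd, 2))               # 9 3.54
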
Reliability: Subjective scoring
Multiple judges: If you had
subjective scoring (open-ended
question, diagram or drawing),
you should both score the
results. Report how well you
agreed on the scores -- within one
point? Two points? Don't panic
if agreement is poor -- just
report this. Use the average of
the two judges' scores as the
person's score.
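One possible way to check agreement and average the two judges' scores in Python (the ratings below are made up):

# Made-up ratings from the two judges for the same five people
judge_a = [4, 7, 5, 9, 6]
judge_b = [5, 7, 4, 7, 6]

differences = [abs(a - b) for a, b in zip(judge_a, judge_b)]
within_one = sum(d <= 1 for d in differences)
print(f"{within_one} of {len(differences)} scores agree within one point")

# Average of the two judges' scores = the person's final score
final_scores = [(a + b) / 2 for a, b in zip(judge_a, judge_b)]
print(final_scores)     # [4.5, 7.0, 4.5, 8.0, 6.0]
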
Multiple types of questions:
If you had two different types of
questions, or measured two
aspects of an ability, calculate
subscores for the different
sections. Did people who scored
high on one section tend to score
high on the other section? This
is what you'd expect if both
sections tap the same ability.
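A sketch of how to check this in Python: the correlation between the two subscores should come out clearly positive. It uses statistics.correlation, which needs Python 3.10 or newer, and the subscores are made up.

import statistics

# Made-up subscores for the same five people on two sections of the test
section_1 = [3, 5, 6, 8, 9]
section_2 = [4, 4, 7, 7, 10]

# Near +1 means people who did well on one section did well on the other
r = statistics.correlation(section_1, section_2)
print(round(r, 2))      # about 0.9 for these made-up numbers
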
Validity: Are you measuring what
you intended to measure?
Face validity: Item has obvious
connection to what you are
measuring. Look at participant
comments for items they thought
were strange or irrelevant.
Construct validity: Test actually
measures what you think it does
(and not ability to follow
instructions, or emotional
ability if you are trying to
measure visual ability, etc.)
Item analysis:
Divide people into top half and
bottom half by their scores. Now
look at how these two groups did
on each item.
Are there any questions that the
bottom half did better on than
the top half? If so, this might
be a bad item. Look at it
carefully.
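A sketch of this item analysis in Python, with made-up right/wrong (1/0) data for six people and four items:

# Each row is one person's item results (1 = correct, 0 = wrong); made-up data
people = [
    [1, 1, 0, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
]

# Sort by total score, then split into bottom half and top half
people.sort(key=sum)
half = len(people) // 2
bottom, top = people[:half], people[half:]

for item in range(len(people[0])):
    top_rate = sum(p[item] for p in top) / len(top)
    bottom_rate = sum(p[item] for p in bottom) / len(bottom)
    flag = "  <-- bottom half did better; check this item" if bottom_rate > top_rate else ""
    print(f"Item {item + 1}: top {top_rate:.2f}, bottom {bottom_rate:.2f}{flag}")
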
MISSING DATA:
If people didn't answer some of
the questions, you can (a)
exclude them from the results.
If you have lots of missing data,
an alternative is to (b) just
calculate the total score using
the items that everyone answered.
Note what you did in your results
section.
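A sketch of option (b) in Python, with made-up answers where None marks an unanswered item:

# Each row is one person's item scores; None marks an unanswered item
answers = [
    [2, 3, None, 1],
    [1, 2, 4, 2],
    [3, 3, 2, None],
]

# Keep only the items (columns) that every person answered
complete_items = [i for i in range(len(answers[0]))
                  if all(person[i] is not None for person in answers)]

# Total score over just those items
totals = [sum(person[i] for i in complete_items) for person in answers]
print(complete_items, totals)     # [0, 1] [5, 3, 6]
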
NOTE: Not all of these procedures will be appropriate -- it depends on the kind of data you collected and what you are illustrating with the graphs. Some people may report multiple means; some only one. What matters is that your visual and numerical elements summarize your data in a way that is easy for the reader to understand and appropriate for your data.