Unfortunately, we only had 18 people take both the first and the second surveys, and of those, only 14 were usable for the test-retest because four were incomplete. Typically, 30+ participants are considered necessary for parametric statistical analyses to be appropriate. What you really need, though, is a normally distributed sample (whether there are 30 or 3,000); below 30 you run a greater risk of either not getting that normal distribution or of error creeping into the statistics because of outliers (people who scored items particularly high or particularly low). Unfortunately, many folks who review articles submitted to journals just stick with that magic number of 30+ without really understanding why, which means I'll have to bark up another tree re: validation procedures.
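For anyone curious how you might actually check that normality assumption with a small sample like mine, here's a rough sketch using a Shapiro-Wilk test. The scores below are invented for illustration; they are not my actual survey data:

```python
# Rough sketch: checking whether a small sample looks normally distributed.
# The scores here are made up for illustration, not the real survey responses.
from scipy import stats

retest_scores = [3.2, 3.8, 2.9, 4.1, 3.5, 3.0, 4.4,
                 2.6, 3.7, 3.3, 3.9, 2.8, 3.6, 4.0]  # n = 14

# Shapiro-Wilk is a common normality test for small samples.
w_stat, p_value = stats.shapiro(retest_scores)
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 gives no evidence of departure from normality,
# which is about the best a sample of 14 can tell you.
```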
In any event, the two-way correlation of subscale scores on the test and retest was 0.74 at the 0.001 significance level. That's a strong positive correlation with normally distributed samples, which means it really looks pretty good. Many thanks to those who helped with this!
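If you want to see how a correlation like that gets computed, a Pearson correlation on the paired test and retest subscale scores does the job. Again, the numbers below are placeholders, not the real responses:

```python
# Sketch of a test-retest correlation; the paired scores are invented placeholders.
from scipy import stats

test_scores   = [3.1, 3.9, 2.8, 4.2, 3.4, 3.0, 4.5,
                 2.7, 3.8, 3.2, 4.0, 2.9, 3.6, 4.1]
retest_scores = [3.3, 3.7, 3.0, 4.0, 3.6, 2.9, 4.3,
                 2.6, 3.9, 3.1, 3.8, 3.0, 3.5, 4.2]

r, p = stats.pearsonr(test_scores, retest_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
# A result like r = 0.74 at p < 0.001 indicates strong, statistically
# significant agreement between the two administrations.
```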
The instrument itself revealed a very interesting finding. People who answered generally positive statements about online learning positively tended to answer generally positive statements about face-to-face learning negatively, and vice versa. I didn't design the instrument this way, but two subscales emerged around the two sets of items (10 online and 10 face-to-face), and the responses on those subscales had a strong negative relationship. In fact, they were so strongly correlated that we can predict up to 40% of the variance in one subscale's score just by looking at the other! This may seem obvious to those of you who feel that you are definitely a "face-to-face person" or an "online person," but it's not the current thinking in the ed tech world.
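The 40% figure comes from squaring the correlation coefficient, assuming that "40%" is variance explained (r-squared). Working backward, that would put the subscale correlation somewhere around -0.63 (my rounding, not an exact figure from the data). A quick back-of-the-envelope check:

```python
# Back-of-the-envelope: what correlation corresponds to 40% shared variance?
# Assumes the "40%" in the post is r-squared (variance explained).
import math

variance_explained = 0.40
r = math.sqrt(variance_explained)
print(f"|r| = {r:.2f}")      # about 0.63
print(f"r^2 = {r**2:.2f}")   # back to 0.40
# So a subscale correlation around -0.63 (negative, since the two scales move
# in opposite directions) would let one subscale account for ~40% of the
# variance in the other.
```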
Now what should I do with all of this...?