I started messing with stats on the survey. There aren't enough retests yet to look at test-retest reliability (best to have at least 25, preferably 30), but I ran some other statistical analyses instead. Cronbach's alpha was .150 for the initial reliability analysis of the scale as a whole (we want to get up to .6 or above if possible), and even factor analysis didn't identify any underlying dimensions.
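In case the mechanics are helpful: Cronbach's alpha is just k/(k-1) times (1 minus the sum of the item variances divided by the variance of the total score). Here's a minimal Python sketch of that formula, assuming the responses sit in a pandas DataFrame with one column per item; the item names and numbers below are made up for illustration, not the actual survey data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-type items (rows = respondents)."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents x 4 items on a 1-5 scale
responses = pd.DataFrame({
    "q1": [4, 5, 2, 4, 3],
    "q2": [5, 4, 1, 4, 3],
    "q3": [4, 5, 2, 5, 2],
    "q4": [3, 4, 2, 4, 3],
})
print(round(cronbach_alpha(responses), 3))
```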
I was puzzled.
So, I looked at the instrument and picked apart the items that were generally positive statements about online learning, then ran a reliability analysis on those: 0.78! Pretty good. The same test on the other items (those that were generally positive statements about face-to-face learning) yielded a 0.82. Wow. So there are essentially two scales (two separate instruments, really) combined into one. But why didn't factor analysis pick these up? That's what factor analysis is for: determining which, if any, items on a test tend to hang together in terms of responses, or in other words, seeing which items people tend to answer similarly.
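If it helps to see what that step looks like in code, here's a rough sketch using scikit-learn's FactorAnalysis on made-up response data (the item names, the simulated responses, and the two-factor choice are all assumptions for illustration, not the actual survey or the software I used); items with large loadings on the same factor are the ones that "hang together."

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical responses: first three items worded positively about online
# learning, last three worded positively about face-to-face learning.
rng = np.random.default_rng(0)
pro_online = rng.integers(1, 6, size=(50, 3))
pro_f2f = 6 - pro_online + rng.integers(-1, 2, size=(50, 3))  # roughly inverse of the online items
items = pd.DataFrame(
    np.hstack([pro_online, pro_f2f]),
    columns=["ol1", "ol2", "ol3", "f2f1", "f2f2", "f2f3"],
)

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=["factor1", "factor2"])
print(loadings.round(2))  # items with large loadings on the same factor answer together
```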
Well, evidently, these two have a dichotomous relationship. Respondents who score the generally positive online statements favorably tend to answer the generally positive face-to-face statements negatively. Now, believe it or not, that's not in line with the current thinking about online and face-to-face learning. We do not tend to think about people as "online people" and "face-to-face people." Interesting, isn't it? I'll have to start looking at demographics at some point and see if there are any key hinges upon which responses swing one way or the other.
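A quick way to check that kind of pattern, again just a sketch with made-up numbers rather than the actual survey data, is to sum each respondent's subscale items and correlate the two totals; a strongly negative coefficient is consistent with one bipolar dimension rather than two independent ones.

```python
import numpy as np

# Hypothetical subscale totals per respondent (sums of the two item groups)
online_total = np.array([22, 18, 9, 20, 12, 25, 8, 15])
f2f_total    = np.array([10, 13, 23, 11, 21, 7, 24, 17])

r = np.corrcoef(online_total, f2f_total)[0, 1]
print(f"correlation between subscale totals: {r:.2f}")  # near -1 suggests a bipolar scale
```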
What are your thoughts thus far?
Wednesday, July 8, 2009
Can you trust the conclusions you got from this survey? What I mean is, aren't the "strange stats" a result of the population you are surveying? After all, this population ARE your online students. Won't you get reliable results once you test the real population? Maybe not. Just thinking... RC
Good thinking, Rosane, but a Cronbach's alpha of 0.8 is pretty strong, and the scales are actually somewhat inverse of one another. It seems that face-to-face and online learning are two sides of the same coin in this case, which is why the stats ~seemed~ strange at first. Of course, they aren't really strange; it just took me a little thinking to figure out how to look at them. This is not uncommon, but it's not a "textbook" approach.
The sample that these are based upon includes about 135 UNT education students as well as around 25 RNs from this class and an RN-BS course, but it's very clear that there is some underlying construct that we're measuring. The next step is to figure out exactly what it is.
Something that struck me about this process is that anyone simply following textbook survey design and analysis would probably have given up after seeing the first two sets of analyses. I really worry about the way we teach research ("we" being nursing), because it's almost never as neat and linear as most courses make it out to be.