Wednesday, August 12, 2009

Test-retest revisited

Unfortunately, we only had 18 people take both the first and the second surveys, and of those, only 14 were usable for the test-retest because four were incomplete. Typically, 30+ participants are considered necessary for parametric statistical analyses to be appropriate. What you really need, though, is a normally distributed sample (whether there are 30 or 3,000); below 30, there is a greater chance of either not getting that normal distribution or of error creeping into the statistics through the effects of outliers (people who scored items particularly high or particularly low). Unfortunately, many folks who review articles submitted to journals just stick with that magic number of 30+ without really understanding why, which means that I have to bark up another tree re: validation procedures.

In any event, the two-way correlation of subscale scores between test and retest was 0.74 at the 0.001 significance level. That's a strong positive correlation for normally distributed samples, which means it really looks pretty good. Many thanks to those who helped with this!
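
For anyone who wants to see the mechanics, here is a minimal sketch in Python of how a test-retest correlation like this is computed. The scores below are invented for illustration; they are not the actual survey data.

    from scipy.stats import pearsonr

    # Hypothetical subscale scores for 14 participants: one score per
    # person from the first administration (test) and one from the
    # second (retest).
    test_scores   = [34, 41, 28, 45, 38, 30, 42, 36, 29, 40, 33, 37, 44, 31]
    retest_scores = [32, 43, 30, 44, 36, 31, 40, 38, 27, 41, 35, 36, 42, 33]

    r, p = pearsonr(test_scores, retest_scores)
    print(f"test-retest r = {r:.2f}, p = {p:.4f}")

A stable instrument should produce scores that rise and fall together across the two administrations, which is exactly what the correlation captures.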

The instrument itself revealed a very interesting finding. People who answered generally positive statements about online learning positively tended to answer generally positive statements about face-to-face learning negatively, and vice versa. I didn't design it this way, but two subscales emerged on the instrument around the two sets of items (10 online and 10 face-to-face), and the responses on those subscales had a strong negative relationship. In fact, they were so strongly correlated that we can account for up to 40% of the variance in one subscale just by looking at the other! This may seem obvious to those of you who feel that you are definitely a "face-to-face person" or an "online person," but it's not the current thinking in the ed tech world.
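
To make the 40% figure concrete: the proportion of variance in one subscale explained by the other is the squared correlation, so r² ≈ 0.40 corresponds to a subscale correlation of roughly -0.63. Here is a quick sketch with invented scores (not the real responses) showing the computation:

    import numpy as np

    # Hypothetical subscale totals for ten respondents (not the real data).
    online       = np.array([42, 35, 48, 30, 44, 38, 25, 40, 33, 46])
    face_to_face = np.array([28, 36, 24, 41, 27, 33, 45, 30, 38, 25])

    r = np.corrcoef(online, face_to_face)[0, 1]
    print(f"r = {r:.2f}, variance explained (r^2) = {r**2:.0%}")

    # Predicting one subscale from the other with a least-squares line:
    slope, intercept = np.polyfit(online, face_to_face, 1)
    predicted_f2f = slope * online + intercept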

Now what should I do with all of this...?

Tuesday, August 4, 2009

Finished grading M2!

Wow - that was a whoooole lot of reading and grading, but I'm finally finished! (for now)

The GIDPs that you are putting together are impressive! Over the past few days I have seen many innovative ideas presented on lovely websites - all of which (as far as I can tell) are completely free. This means that nobody here will have to rely on institutionally purchased software packages such as Blackboard to conduct distance learning. Not only are these services expensive for universities, but they also typically do not provide the robust features of a more eclectic choice of software. Kudos to all for such wonderful use of available technology!

The only thing I found with a number of projects that might need some attention is a lack of clarity on how students will be interacting with instructors. Student-student and student-content interaction are quite well represented, but in many projects there was no clear design for student-teacher interaction (STI). STI can take many forms, of course, from discussion to lecture to email and so forth, but it's important for the interaction to go both ways, so a lecture that does not include opportunities for questions (e.g., a voice-over PPT posted online) does not satisfy this requirement. I would encourage everyone to take a look at their projects and clearly identify the ways that STI occurs. You may need to make some minor edits to beef up this component of the GIDP.

Again, overall these were great, and the class is doing a great job! I look forward to seeing the final products emerge over the next week or so. Speaking of, we are in the home stretch now, so everyone hang on for a little longer and we'll be finished!

It's good to be back home. :D

Monday, August 3, 2009

Google Wave looks very cool

Check it out. I am looking forward to this one. Consider all of the educational possibilities for online courses. Perhaps I should include it in the Fall 5263 course...

Side note: I badly wanted to invest in Google stock when it was $160 earlier this year (presently $452), but did not do so because of a potential conflict of interest in requiring my students to use the services when I might be benefiting from their traffic (trivially, perhaps, but you know how the US legal system is).

Thursday, July 16, 2009

Email expectations

Every time I go to conferences or open the Chronicle of Higher Education, I see something about how students have unreasonable expectations for faculty responding to emails. And in pre-semester workshops, I keep hearing that "students these days expect an immediate response." Personally, I have not experienced this, but enough people are talking about it that there must be some element of truth to the perception.

Actually, my students have always been quite respectful of my time. If anything, I wish students would email me more frequently. I wonder what makes my situation different from that of other instructors and their students... could it be the age of the students involved? I have some younger students, but by and large they are well into their 20s at the very youngest. It could also be the fact that I teach primarily graduate classes. Or maybe my responses are so lengthy that students are afraid to get me started!

What do you think? Do instructors have a responsibility to answer all emails immediately? What is an appropriate amount of time to allow to pass before re-emailing? How will you handle students who are attached to the Internet 12 hours/day and expect you to be "always on" and hyper-responsive to their messages?

Sunday, July 12, 2009

Twitter

So I started using Twitter last week. It's pretty interesting, but I usually forget to update and only remember once in a while. Hopefully, I'll get better. One cool thing is that you can text your messages into Twitter. You can follow what I'm doing if you like and see what it's like to be an overworked nurse educator.

Wednesday, July 8, 2009

Strange Stats

I started messing with stats on the survey. There aren't enough retests yet to look at test-retest reliability (best to have at least 25, preferably 30), but I ran some other statistical analyses instead. Cronbach's alpha was .150 for the initial reliability analysis of the scale as a whole (we want to get up to .6 or above if possible), and even factor analysis didn't identify any underlying dimensions.
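
For those following along at home, Cronbach's alpha is straightforward to compute from a respondents-by-items matrix. A minimal sketch, using a small invented response matrix (rows are respondents, columns are items):

    import numpy as np

    def cronbach_alpha(data):
        """alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)"""
        k = data.shape[1]                           # number of items
        item_vars = data.var(axis=0, ddof=1).sum()  # sum of per-item variances
        total_var = data.sum(axis=1).var(ddof=1)    # variance of respondents' totals
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Hypothetical Likert responses: 4 respondents x 4 items.
    responses = np.array([[4, 5, 4, 2],
                          [2, 1, 2, 4],
                          [5, 4, 5, 1],
                          [3, 3, 4, 2]])
    print(f"alpha = {cronbach_alpha(responses):.2f}")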

I was puzzled.

So, I looked at the instrument and picked apart the items that were generally positive statements about online learning, then ran a reliability analysis on those: 0.78! Pretty good. The same test on the other items (those that were generally positive statements about face-to-face learning) yielded a 0.82. Wow. So there are essentially two scales (two instruments, really) combined into one. But why didn't factor analysis pick these up? That's what factor analysis is for: determining which, if any, items on a test tend to hang together in terms of responses, or in other words, seeing which items people tend to answer similarly.
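
Here's a sketch of the kind of exploratory factor analysis I'm describing, using simulated responses rather than the actual survey data. If a single latent "preference" drives the online items up and the face-to-face items down, the loadings show the two item sets hanging together with opposite signs:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    preference = rng.normal(size=(100, 1))  # simulated latent trait per respondent

    # 10 online items load positively on the trait; 10 face-to-face items negatively.
    online_items = preference + rng.normal(scale=0.5, size=(100, 10))
    f2f_items    = -preference + rng.normal(scale=0.5, size=(100, 10))

    responses = np.hstack([online_items, f2f_items])
    loadings = FactorAnalysis(n_components=2).fit(responses).components_
    print(np.round(loadings, 2))  # rows are factors, columns are the 20 items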

Well, evidently, these two subscales have an inverse relationship. Respondents who score the generally positive online statements favorably tend to answer the generally positive face-to-face statements negatively. Now, believe it or not, that's not in line with the current thinking about online and face-to-face learning. We do not tend to think of people as "online people" and "face-to-face people." Interesting, isn't it? I'll have to start looking at demographics at some point and see if there are any key hinges upon which responses swing one way or the other.

What are your thoughts thus far?

Monday, July 6, 2009

Test-Retest Reliability, Round Two

Ready for Round Two? Here is the link to the retest: Survey - Part Two.

While you're taking this second survey, do not actively try to remember the answers you selected on the first survey; that would just artificially inflate the reliability estimate. The idea here is simply to answer the questions on the survey as you feel is appropriate. Then, I'll compare answers from the first survey to the second to determine whether the instrument measures consistently over time.

If you missed the first one but would still like to participate, you can find and complete the original survey here. The original survey will now automatically link to the retest when you complete it.

Thanks again for the help!