Thursday, July 16, 2009
Every time I go to a conference or open the Chronicle of Higher Education, I see something about students having unreasonable expectations for how quickly faculty should respond to email. And in pre-semester workshops, I keep hearing that "students these days expect an immediate response." Personally, I have not experienced this, but enough people are talking about it that there must be some element of truth to the perception.
Actually, my students have always been quite respectful of my time. If anything, I wish they would email me more frequently. I wonder what the difference is between my students and other instructors' students... could it be the age of the students involved? I have some younger students, but by and large they are well into their 20s at the very youngest. It could also be the fact that I teach primarily graduate classes. Or maybe my responses are so lengthy that students are afraid to get me started!
What do you think? Do instructors have a responsibility to answer all emails immediately? What is an appropriate amount of time to allow to pass before re-emailing? How will you handle students who are attached to the Internet 12 hours/day and expect you to be "always on" and hyper-responsive to their messages?
Sunday, July 12, 2009
So I started using Twitter last week. It's pretty interesting, but I tend to forget to update and only remember once in a while. Hopefully I'll get better. One cool thing is that you can post updates to Twitter by text message. You can follow me if you like and see what it's like to be an overworked nurse educator.
Wednesday, July 8, 2009
Strange Stats
I started messing with the stats on the survey. There aren't enough retests yet to look at test-retest reliability (it's best to have at least 25, preferably 30), but I ran some other statistical analyses instead. Cronbach's alpha was .15 for the initial reliability analysis of the scale as a whole (we want to get up to .6 or above if possible), and even factor analysis didn't identify any underlying dimensions.
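(For the curious: here's roughly how Cronbach's alpha is computed. This is just a sketch in Python with made-up responses; it's not my actual data, and the item names are invented.)

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Made-up Likert-style responses: rows are respondents, columns are items.
    responses = pd.DataFrame({
        "item1": [4, 5, 2, 1, 3, 4],
        "item2": [3, 4, 2, 2, 3, 5],
        "item3": [2, 5, 1, 3, 2, 4],
    })
    print(f"alpha = {cronbach_alpha(responses):.3f}")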
I was puzzled.
So I looked at the instrument, pulled out the items that were generally positive statements about online learning, and ran a reliability analysis on just those: 0.78! Pretty good. The same test on the remaining items, those that were generally positive statements about face-to-face learning, yielded 0.82. Wow. So there are essentially two separate scales combined into one instrument. But why didn't factor analysis pick these up? That's what factor analysis is for: determining which items on a test, if any, tend to hang together in terms of responses, or in other words, seeing which items people tend to answer similarly.
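(If you'd like to see what a factor analysis step looks like in code, here's a rough sketch in Python using scikit-learn; the simulated attitudes and items are purely illustrative, not the survey data. One plausible reason the factor analysis came up empty, by the way, is that when two sets of items are strongly inversely related, they tend to collapse onto a single bipolar factor rather than showing up as two separate dimensions.)

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n = 200

    # Simulate two latent attitudes that pull against each other,
    # loosely like the online vs. face-to-face pattern described here.
    online = rng.normal(size=n)
    f2f = -0.8 * online + 0.6 * rng.normal(size=n)

    # Two noisy items per attitude (columns = survey items).
    items = np.column_stack([
        online + 0.5 * rng.normal(size=n),
        online + 0.5 * rng.normal(size=n),
        f2f + 0.5 * rng.normal(size=n),
        f2f + 0.5 * rng.normal(size=n),
    ])

    fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
    print(fa.components_.round(2))  # loadings: rows = factors, columns = items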
Well, evidently these two have a dichotomous relationship: respondents who rate the generally positive online statements favorably tend to rate the generally positive face-to-face statements unfavorably. Believe it or not, that's not in line with current thinking about online and face-to-face learning; we do not tend to think of people as "online people" and "face-to-face people." Interesting, isn't it? I'll have to start looking at demographics at some point and see if there are any key hinges on which responses swing one way or the other.
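(A quick way to check that pattern directly is to correlate the two subscale totals and look for a negative coefficient. A minimal sketch, again with invented numbers rather than the real responses:)

    from scipy.stats import pearsonr

    # Invented subscale totals for six respondents (not the real data).
    online_scores = [18, 15, 9, 7, 12, 16]
    f2f_scores = [8, 10, 17, 18, 13, 9]

    r, p = pearsonr(online_scores, f2f_scores)
    print(f"r = {r:.2f}, p = {p:.3f}")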
What are your thoughts thus far?
Monday, July 6, 2009
Test-Retest Reliability, Round Two
Ready for Round Two? Here is the link to the retest: Survey - Part Two.
While you're taking this second survey, don't actively try to remember the answers you selected on the first survey; that would just artificially inflate the reliability estimate. The idea here is simply to answer the questions as you feel is appropriate. Then I'll compare answers from the first survey to the second to determine whether the instrument measures what it purports to measure consistently.
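(For the methods-minded: that comparison usually boils down to correlating matched scores from the two administrations. Here's a minimal sketch in Python, with invented totals standing in for the real responses:)

    from scipy.stats import pearsonr

    # Invented matched totals: one value per respondent who took both surveys.
    first_survey = [42, 37, 55, 48, 33, 50, 45]
    second_survey = [40, 39, 53, 50, 31, 49, 47]

    r, p = pearsonr(first_survey, second_survey)
    print(f"test-retest r = {r:.2f} (n = {len(first_survey)})")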
If you missed the first one but would still like to participate, you can find and complete the original survey here. The original survey will now automatically link to the retest when you complete it.
Thanks again for the help!