Babbie (2004) discusses response rates and follow-up mailings on pages 260-261. He states that a return rate of 50% is acceptable to analyze and publish, 60% is good, and 70% is very good. Of course, in our field of rhetoric & composition, where strict social science methods are not necessarily as highly valued as in the fields Babbie refers to, I've seen surveys analyzed and written about with response rates much lower than 50%. Presently, my response rate for population one, Phase One, is just over 10%. I've sent out a reminder but have not yet checked whether I received any additional responses. While a few contacts have provided excuses for why they cannot participate, such a low overall response rate should be a danger signal. As Babbie states, "a low response rate is a danger signal, because the nonrespondents are likely to differ from the respondents in ways other than just their willingness to participate in your survey. Rich and Bolstein (1991), for example, found that those who did not respond to a preelection political poll were less likely to vote than those who did participate" (p. 261).
I would imagine that since the survey's purpose was clearly stated as concerning digital composing and copyright, those who did not respond may have felt they were not qualified, did not think the topic was important, did not understand what I was talking about, and so on. I suspect that even where the stated reasons for not participating appeared valid, such as busyness, the real reasons might be more complicated. If your program is represented in the survey and gets all the copyright questions wrong, that won't look good for the program. However, I have no sure way of knowing who from which program answers in what way, since at the end of the survey individuals are asked to name their school; they can also choose "other" or "can't answer."
Babbie does concede that the literature varies widely on what counts as an acceptable response rate. Since explaining context is so important in our field, I will need to qualify any conclusions I draw from the survey results, regardless of my return rate. It may turn out that the survey's only real purpose is to provide a way of sorting students who might be interviewed.
Babbie has some interesting observations and hints regarding follow-up mailings: there should be two follow-ups, at two-week intervals. For Phase One, I've done the first follow-up; I will do one more the week after next, after excluding those who responded to my first reminder. Next week I will do Phase Four, the follow-up to those I have not heard from regarding Phase Two (recruitment of people from the second population).
Amazingly, after sending initial recruitment emails to both populations of 50, I had received exactly 6 responses from each group, for a total of 12. I used two different recruitment strategies. In the first instance, I sent a long, formal email of 800 words, with the faculty recruitment email cut and pasted at the bottom, and did not include a link to the survey. For the second phase, I cut the email down to 200 words, attached the faculty recruitment email, and included a link to the survey. In both cases, 6 people responded. The only difference was that in Phase Two, when I included the link to the survey, the responses came in faster.
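For reference, the response-rate arithmetic above can be sketched quickly. This is a minimal illustration, not part of the study design; the sample sizes (50 per population) and response counts (6 each) are the figures quoted above, and the benchmark percentages are Babbie's (2004).

```python
# Response rate = completed responses / total contacted, as a percentage.
# Sample sizes and response counts are taken from the recruitment
# numbers described above; thresholds follow Babbie (2004, pp. 260-261):
# 50% acceptable, 60% good, 70% very good.

def response_rate(responses, contacted):
    """Return the response rate as a percentage."""
    return 100 * responses / contacted

phase_one = response_rate(6, 50)    # population one: 12%
phase_two = response_rate(6, 50)    # population two: 12%
overall = response_rate(12, 100)    # combined: 12%

for label, rate in [("Phase One", phase_one),
                    ("Phase Two", phase_two),
                    ("Overall", overall)]:
    print(f"{label}: {rate:.0f}% (Babbie's minimum: 50%)")
```

Both phases land at 12%, "just over 10%" and well short of Babbie's 50% threshold, which is the gap the follow-up mailings are meant to close.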