Tuesday, October 2, 2007

Memo to Bill HD Re: Update on survey and request for advice

MEMO

To: Bill Hart-Davidson, Chair

From: Martine Courant Rife, Student

Date: October 1, 2007

Re: Update on survey and request for advice

Overview

The purpose of this memo is to update you and to seek your advice regarding next steps for the survey. The survey was supposed to be completed between 9/17 and 10/7; at this point, that window won't be enough time. The interviews were supposed to take place in November. I still hope to conduct them in the same semester that I am collecting survey data.

TABLE 1: SURVEY METHOD AND RESPONSE
Total Census = 200; POP 1, N = 52; POP 2, N = 50

First Recruitment Email (pop1, long version; pop2, short version)
  POP 1: Monday, 9/17 pm; 6 responses
  POP 2: Monday, 9/24 pm; 9 responses (no students)

Follow-up Email with survey link to Named Contacts
  POP 1: unknown
  POP 2: N/A; survey link sent with first recruitment email

*First Reminder Email to those not heard from at all
  POP 1: Thursday, 9/27; +4 = 10 responses (one student, no contact info)
  POP 2: 10/8

First Reminder Email to named contacts
  POP 1: 10/8
  POP 2: 10/15

Second Reminder Email to those not heard from at all
  POP 1: 10/8
  POP 2: 10/15

Second Reminder Email to named contacts warning of survey close date
  POP 1: 10/15
  POP 2: 10/22

TOTAL RESPONSES
  POP 1: 10
  POP 2: 9
  Overall: 19

*For each reminder, the population has been funneled. For example, some program directors sent me contact names after the first recruitment email; for population one, those individuals have already received a follow-up email. A few individuals stated they did not want to participate, and their names were removed from the list. So the first reminder email went only to those I had not heard from at all. Since I have now heard from some of them, the second reminder email will go to those remaining whom I have not heard from at all.

Also, regarding the N's: the response figures are estimates, because I am going by how many people were contacted and how many people take the survey. I am not looking at how many programs responded, because while respondents can select their program on the survey, they don't have to. The idea behind the population is that program directors represent programs, but not for purposes of the N's.
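Working from the table above, the rough response rates so far are 10/52 (about 19%) for population one and 9/50 (18%) for population two, or 19/102 (about 19%) across both.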

Discussion

At this point I have no students to interview, because only one student responded, and that student didn't leave contact information. That is a serious problem, because my study depends on having students to interview.

The end date for the first two population surveys is planned for October 30, 2007. This date matters for planning the reminders. I chose not to tell participants the close date unless they ask, because if they know the survey will be open for a period of time, they are less likely to take it right away and may forget to do so entirely.

One of my main goals is to sort writer-participants for purposes of selecting interviewees, and then to analyze how one's overt knowledge and confidence maps onto one's rhetorical inventive strategies when composing, since those strategies depend on tacit knowledge.

Another of my goals was to take a snapshot of the current state of knowledge and certainty about fair use/copyright within the TPW community for purposes of curriculum planning. I wanted to know how important this community thought fair use/copyright was, and whether their speech was chilled. (It appears to be nonexistent rather than chilled.) If gathering this data is not possible, then this goal will remain unattainable, though I can of course theorize about what caused the failed attempt.

When reviewing the members of the population (TPW programs), I visited almost every program's webpage and learned a great deal by doing so. Some TPW programs and/or writing majors have no digital writing component in their curriculum. Perhaps my research questions should have been: Do you teach digital writing in your program? Do you want to teach digital writing? Do you think there's any value to teaching digital writing? But that is a different research project. I can easily make the argument that the teaching of digital writing in TPW and in writing major programs is inevitable, and that at some point fair use/copyright will have to be acknowledged and addressed in curriculum. Perhaps the field is just not there yet. There are also issues with my "social science" methods, and documented debates within the field about the use of more traditional social science research methods. Objections to such methods are legitimate and have legitimate backing, but that shouldn't mean such methods are shunned.

I learned about doing surveys in a quantitative research class at Michigan State University, taught by an individual in Communications, a very quantitatively focused discipline at MSU. We read example survey studies of professional populations. In one study I read (done in 2000), a paper survey was sent to 506 environmental journalists, with a response rate of 50.3% (Babbie says you need 50% to publish, but this is not true in language and learning disciplines, where lower response rates are published). I expected that if journalists would respond at this rate, surely program directors in TPW, a discipline as focused on writing as journalism, would respond at an even higher rate. But the critical difference is that in journalism research, the survey method is tried and true and accepted within the field. Such research is written up and depended on by journalists. This kind of research generates funding and financial support for initiatives like the Knight Foundation. I'd guess that environmental journalists know this: they know that their existence depends on research.

In contrast, TPW is just beginning to seek grant funding; programs situated in humanities/English departments are less likely to do so. So there is no perceived benefit to supporting programmatic research. (With all the scholarly discussion on professionalizing and legitimizing the field, there certainly is benefit, but no one is attending to this.) The environmental reporting study determined that working conditions and funding for environmental reporting were lacking. Showing something like that makes an argument to philanthropists and others that money is needed to support this kind of activity. These kinds of dots are not connected in TPW. The funding tail is not yet wagging the disciplinary dog such that individuals in the field see how programmatic research will benefit them. Improving curriculum is apparently not a good motivator. (If we showed that there was massive ignorance but a lot of interest regarding copyright/fair use, that would provide an argument for MacArthur and other funding agencies, even organizations of copyright holders, to fund educational interfaces, more research, etc.)

I found out I don't really like doing this type of national survey because I feel too disconnected from the participants. I feel guilty because I am imposing. Yet if someone sent me this survey in this context, I would complete it immediately, and I would have my students do it. But that's because I'm also trying to teach them about research and help them understand their position in the academy. I also think it's important to support others' research. I always complete surveys when people send them around, because I like to read them and see how they are constructed.

As for the downside of administering this survey, I don't feel connected to the participants. I want to see their faces and talk to them, because I don't completely trust the survey data. I don't like the way humans are removed from the scene. I'd feel the same way about an experiment, because in an experiment the humans are usually not really present in the data; you don't talk to them, you just do things to them and see what happens. It was much better when I did the survey at MSU and knew the people and the program. That seemed more "real." I'm really worried about the time going by, because if I have to revise the IRB approval, that will probably take me into the next semester. I know that delays happen in research.

Possibilities for Continuing the Study

The options as I see them are as follows (prioritized by my preference):

  1. I pulled a third population of N=47 using the exact same methods as for the first two populations, and I have documented this very carefully. MSU *did* come up in this third population. With the first two populations I have already contacted just over 50% of the census (102 of 200 programs); if I send the survey to this third population, I will have contacted about 75% (149 of 200). (If you want to see the lists of the first two populations, I've attached them.) One alternative is to send the survey to this third population. Danielle DeVoss is the contact for MSU; please advise if there's anything special I should do, otherwise I will send her the same email as everyone else. I'd send Danielle an email ahead of time telling her I'm doing it, or perhaps asking, unless you can/should do it. That proved effective. Time is an issue, but there's not much I can do about that. If I have to collect data in the spring, I will start writing some of the chapters now. This survey is really time consuming; if I were doing interviews, I'd be done. With this option the IRB approval doesn't need to be revised, because it's still a random selection, permitting anonymity in the survey and following the protocol as approved. I could also revise the IRB approval to allow classroom visits like I did in the pilot. The fact that I am not recruiting students in person, and am not contacting people directly by email, is problematic. I seriously don't know how I will ever get anyone to agree to be interviewed if I don't ask them upfront, or have someone else advocating for me. [That woman I met at XXX's party, from the XXXX, sent a survey to all students at XXX, got release of their names, and bought one iPod that was given away in a contest to participants. She got a really high N and attributes it to that "prize." I took her survey last spring. However, I did not tell her it wasn't because of the iPod; it was because I, and every other student I know, am very dissatisfied with the XXXX. {edited some stuff out here} You see, if copyright law is not problematic, people have no reason to care about it.]
  2. Revise the IRB approval and focus on the MSU program; don't contact the other parties in the third population. The MSU location solves a lot of problems because it gives me proximity to my interviewees. I will, of course, discuss the initial survey attempt in my dissertation. I can still accomplish all my other research goals, but I will not be able to make statements about TPW in general. This would change my study design more than option 1.
  3. Revise the IRB approval and focus on MSU-LCC; this provides contrast. I'd need to do classroom recruiting. This would not be TPW, because I'd have to go to digital arts at LCC; our writing students don't do digital writing on the WWW other than Facebook or what they do on their own. This would push my data collection into the spring.
  4. Contact the entire population regarding the survey, all 200 programs. This wouldn't solve much, because I'm sure I'd still get an overall low response. It doesn't necessarily allow me to generalize more, because participants are self-selecting anyway, and a larger N doesn't change that if I survey the entire population and still get only 10%. This option also prolongs the discomfort of administering this survey remotely.
  5. Revise the IRB approval and get permission to send the survey out to listservs like ATTW, WPA, TechRhet, CCCC-IP, the MSU RW, PW, and MA listservs, and the copyright lawyer listserv I'm on. My main concern here is that it makes me look like a "loose" researcher, and I don't want that. My other concern is that I then have no control over my population; it is no longer the TPW community. I could create a filter question, but still, my "frame" is very fuzzy. I personally would have a hard time generalizing findings, especially because when I actually attempted to administer the survey to the community I'm studying, it didn't work out too well.
  6. I could do both #4 and #5, but then it turns into a free-for-all.
  7. I don't know what 7 is. Any ideas? I just want some data. I still like my study design and I wish it would work. But the fact that it is rather unusual to do a survey in rhetoric & composition should have been a warning to me that I'd experience resistance on multiple levels, even to the extent that I am prevented from collecting data. I am also aware that I personally lack audience awareness; people may not know what I'm talking about. I helped a colleague log onto a digital interface today. I had to sit with her; I mean, I had to help her create a user name and password. She needed help logging on, and she's been teaching composition longer than I have.
  8. The main thing is that I need a few people to take the survey so I can interview them. So the question is how to accomplish that most efficiently without undue burden on the participants. Think back to Haller's article, her study (Rhetorical Invention in Design). She went to one class and used teacher recommendations to locate her study participants; the teachers and TAs told her that this one group in the computing documentation class was the most advanced, so that was the group she studied. The survey I have is trying to accomplish the same thing: it's trying to tell me who to study. So if you think about it like that, I could just use the survey with a small population, a class or two, etc.
