Saturday, August 23, 2008
Chaim Perelman talking about how justice is the first goal, and adherence to legal technicalities is the second goal
"It is a common, and not necessarily regrettable, occurrence even for a magistrate who knows the law to formulate his judgment in two steps: the conclusions are first inspired by what conforms most closely with his sense of justice, the technical motivation being added later. Must we conclude in this case that the decision was made without any preceding deliberation? Not at all, as the pros and cons may have been weighed with the greatest care, though not within the frame of considerations based on legal technicalities. Strictly legal reasons are adduced only for the purpose of justifying the decision to another audience. They are not adduced, as Mill suggests in his example, for the purpose of making an expert formulation of the general maxims of which the governor had only a vague idea. Mill's scientism makes him think of everything in terms of a single audience, the universal audience, and prevents him from providing an adequate explanation for the phenomenon."
This passage reiterates what I found in my study: writers worked toward justice in their composing decisions without necessarily referencing legal technicalities. Rationales or understandings emerged later, in order to fit the end decision into whatever the law provides. The sense of justice comes first; the writer then fits an understanding of the law around it. So whatever the law actually is, if we were ever able to know that, is fairly irrelevant. What matters instead is how the law is enacted, and it is enacted backwards from what one might expect. Those who compose laws surely do not intend for individuals simply to ignore them. You can see how agency issues crop up here: the law as written has little agency of its own.
As for legal reasoning, the way Perelman describes the magistrate's two-step process also fits within our understandings of good lawyering, at least at the reactive stage rather than the planning stage. If a crime has been committed, the lawyer has to squeeze the law to fit the facts in the best way possible in order to argue for his client's innocence. That's his job, and it is what it means to be "just" in the US legal system. Lawyers who are advising clients *before* the possible crime, on the other hand, will read across legal precedents and then extract a course of action that hopefully avoids the crime in the first place.
Saturday, August 16, 2008
What percent of a population do I need to select in my random selection in order to have adequate representation of the population?
On the percentage issue: Juzwik et al. ("Writing Into the 21st Century: An Overview of Research on Writing, 1999 to 2004," Written Communication, 2006) discuss percentages for purposes of validating coding, using a 10% sample (assumed to represent the larger population).
"Inter-rater reliability on the exclusions was high at 97.5% based on a sample of 10% of the studies." (Juzwik et al., p. 460)
"A sample of 10% of the studies was taken for an exact inter-rater reliability on the coding of the studies that were included in our database. This reliability check determined that the initial coder and the reviewer agreed on 97.5% of the articles that were included in the study, on 96.0% of the age codes, and on 91.0% of the problem codes." (Juzwik et al., p. 463)
In "The ‘Doing boy/girl’ and global/local elements in 10–12 year olds’ drawings and written texts" (Qualitative Research, 2007), Pat O'Connor of the University of Limerick used about 10% of the population of texts, though she did not put it in terms of percentages:
"In this article, the focus is on a randomly selected sub-set (n = 341) from the total sample of 3464 texts written by those aged 10–12 years." (p. 234)
Another useful article on the topic is Collins et al., "A Mixed Methods Investigation of Mixed Methods Sampling Designs in Social and Health Science Research," Journal of Mixed Methods Research, 2007. Table 2 on page 273 lists typical sample sizes and some rationales.
My further explanation included the following with respect to my dissertation:
That percentage (I used 20%) in part reflects that I wanted to get over 400 participants in order to reach a margin of error of plus or minus 5%. I was estimating how many teachers and students I'd get from each program; I had about 250 programs. So if I started with 20% and 10 people from each program responded, that would give me 500 respondents. However, as summarized below, I ultimately had to sample about 60% of the population through the use of insurance samples.
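The arithmetic behind that 400-respondent target can be sketched with the standard sample-size formula for a proportion at 95% confidence (my gloss, not a calculation from the original post; the function name and defaults are my own):

```python
import math

def sample_size(moe=0.05, z=1.96, p=0.5):
    """Respondents needed for a proportion at a given margin of error.

    Uses the conservative p = 0.5, which maximizes variance, and
    z = 1.96 for 95% confidence.
    """
    return math.ceil((z ** 2) * p * (1 - p) / moe ** 2)

# For a margin of error of +/- 5% at 95% confidence:
n = sample_size()
print(n)  # 385
```

That result of roughly 385 is why "over 400" is the usual rule of thumb for a plus-or-minus 5% margin of error.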
That said, when I was working with texts at a research center, it was understood that you needed at least 10% to have a chance at a representative sample. But I've never found this rule in writing, because ultimately it's more complex than that. It depends on how much variability there is in the entire population: the more variability, the larger your sample should be, so that the variability has a greater chance of being captured. As always, whatever you do, you have to be able to contextualize your choices and findings in the data analysis.
The issue is how big the sample needs to be in order to be representative, that is, a sample that reflects the same characteristics as the entire population. In my study, the only characteristic I had available to check this was program type: PhD, Masters, Four Year, Two Year, and Certificate.
I was able to show that the final group of respondents fairly closely reflected the larger population's characteristics, except that PhD programs over-responded. You just have to address that in the data interpretation.
The larger the sample, the greater the chance you will have a representative sample. So if your entire population is 50% women and 50% men, and your sample ends up being the same, you know that at least in this one respect, it's representative.
I also had two additional "insurance" samples selected in case I got skewed responses on the first try, or in case I had a low level of response for some reason. Ultimately, I had to select three phases of 20% of the entire population because of the lack of response to my initial 20%. I believe I was close to 60% of the population and was concerned that I'd end up having to select the entire population rather than a randomly selected sample. Depending on your situation, I recommend having one or two insurance samples ready to be selected in addition to the initial sample.
The percentage you choose might also be checked against the number you want in the end (for your confidence level), based on the table in Lauer and Asher's book, page 58.
From Earl Babbie: "The larger the sample selected, the more accurate it is as an estimate of the population from which it was drawn." (p. 193, 10th edition). This is a book I'd recommend for any student working on this kind of research, because Babbie explains random selection and many other research methods in really easy-to-understand terms.
"The kind of sampling procedure used also affects sample size. As we have mentioned, for the same level of precision, stratified samples usually require fewer people than the simple random sample, and cluster samples usually require more" (Survey Research, 2nd edition, Backstrom and Hursh-César).
In my study, I did a stratified sample, taking 20% of each of the five categories, which was also 20% of the whole. But I ended up having to do this three times in order to get close to my desired 400.
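The stratified draw described above (the same fraction from each program type, which is then also that fraction of the whole) can be sketched like this. The program names and counts are invented for illustration; only the five category labels and the roughly 250-program total come from the post:

```python
import random

random.seed(1)  # for a reproducible illustration

# Hypothetical population of ~250 programs, keyed by program type
programs = {
    "PhD": [f"phd_{i}" for i in range(40)],
    "Masters": [f"ma_{i}" for i in range(60)],
    "Four Year": [f"fy_{i}" for i in range(80)],
    "Two Year": [f"ty_{i}" for i in range(50)],
    "Certificate": [f"cert_{i}" for i in range(20)],
}

def stratified_sample(strata, fraction=0.20):
    """Draw the same fraction from each stratum, so the sample
    mirrors the population's category proportions by construction."""
    return {name: random.sample(members, round(len(members) * fraction))
            for name, members in strata.items()}

sample = stratified_sample(programs)
total = sum(len(v) for v in sample.values())
print(total)  # 50, i.e., 20% of the 250 programs
```

Because each stratum contributes in proportion to its size, skew like the PhD over-response mentioned above can only come from differential response rates, not from the selection itself.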
Later I went on to say:
I've never seen a publication actually give a minimum percentage of the population that needs to be selected in order to have a representative sample. I would say you need between 10 and 20% of the entire population. But confidence level goes by the *number* in the sample rather than the *percentage* of the entire population. Still, the percentage of the population matters, because the larger the percentage, the more representative the sample, obviously. It can never be a simple solution like a single percentage, because it all depends on the context . . .
The area that might be informative is mass media or communication arts. I'm attaching the sampling chapter from Riffe, Lacy, and Fico's book. It might be helpful in explaining all the nuances of your question and why no one is willing to just say you are safe if you pick at least 10%. The language I underlined on page 105 might be relevant. They actually mention 20% as something of a magic number, in that if you sample 20% (or more) of the population, your confidence level goes up even though your actual number of cases is not that high. Lauer and Asher talk about this as well in their book when they discuss the "correction factor" (pp. 58-60). Riffe et al.'s book is also cited in _What Writing Does and How It Does It_, in case you need to tie any of this into our field.
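The "correction factor" idea, that sampling a large share of a finite population buys you precision beyond what the raw count suggests, can be sketched with the standard finite population correction (my gloss on the concept, not Lauer and Asher's exact table):

```python
import math

def fpc_adjusted_n(n0, population):
    """Apply the finite population correction to a required sample size.

    n0 is the sample size the infinite-population formula would demand;
    the required n shrinks as the sample becomes a large share of the
    population: n = n0 / (1 + (n0 - 1) / N).
    """
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# An infinite population needs ~385 cases for +/- 5% at 95% confidence;
# a population of only 1,000 needs far fewer:
print(fpc_adjusted_n(385, 1000))  # 279
```

This is why a 20%-or-more sample of a small population can carry a confidence level that its raw count alone would not justify.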
In a study I worked on previously, Stewart Whittemore and I took 10% of the entire population of texts in order to get inter-rater reliability. I remember being in the same quandary there with respect to how many texts I needed to select in order to have a reliable coding scheme. I could find nothing firm in writing. Bill Hart-Davidson just said 10% minimum, if I remember correctly, but those texts were very, very homogeneous because they'd been written based on a prompt. In part it was ultimately an issue of labor, time, and money, like a lot of these decisions. I don't think I've ever seen anything written up with less than 10%.
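The exact-agreement figures Juzwik et al. report (97.5%, 96.0%, 91.0%) are simple percent agreement between two coders over the 10% subsample. A minimal sketch, with hypothetical codes standing in for a real coding scheme:

```python
def percent_agreement(coder_a, coder_b):
    """Exact inter-rater agreement: the percentage of items to which
    both coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Hypothetical codes for a 10% subsample of five texts
a = ["narrative", "expository", "narrative", "persuasive", "narrative"]
b = ["narrative", "expository", "persuasive", "persuasive", "narrative"]
print(percent_agreement(a, b))  # 80.0
```

Note that raw percent agreement does not correct for chance; chance-corrected statistics such as Cohen's kappa are often reported alongside it.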
What is an account? It is typically a _text_, a small ream of paper a few millimeters thick that is darkened by a laser beam. It may contain 10,000 words and be read by very few people, often only a dozen or a few hundred if we are really fortunate. A 50,000 word thesis might be read by half a dozen people (if you are lucky, even your PhD advisor would have read parts of it!) and when I say ‘read’, it does not mean ‘understood’, ‘put to use’, ‘acknowledged’, but rather ‘perused’, ‘glanced at’, ‘alluded to’, ‘quoted’, ‘shelved somewhere in a pile’. At best, we add an account to all those which are simultaneously launched in the domain we have been studying. Of course, this study is never complete. We start in the middle of things, _in medias res_, pressed by our colleagues, pushed by fellowships, starved for money, strangled by deadlines. And most of the things we have been studying, we have ignored or misunderstood. Action had already started; it will continue when we will no longer be around. What we are doing in the field – conducting interviews, passing out questionnaires, taking notes and pictures, shooting films, leafing through the documentation, clumsily loafing around – is unclear to the people with whom we have shared no more than a fleeting moment. What the clients (research centers, state agencies, company boards, NGOs) who have sent us there expect from us remains cloaked in mystery, so circuitous was the road that led to the choice of this investigator, this topic, this method, this site. Even when we are in the midst of things, with our eyes and ears on the lookout, we miss most of what has happened. We are told the day after that crucial events have taken place, just next door, just a minute before, just when we had left exhausted with our tape recorder mute because of some battery failure. 
Even if we work diligently, things don’t get better because, after a few months, we are sunk in a flood of data, reports, transcripts, tables, statistics, and articles. How does one make sense of this mess as it piles up on our desks and fills countless disks with data? Sadly, it often remains to be written and is usually delayed. It rots there as advisors, sponsors, and clients are shouting at you and lovers, spouses, and kids are angry at you while you rummage about in this dark sludge of data to bring light to the world. And when you begin to write in earnest, finally pleased with yourself, you have to sacrifice vast amounts of data that cannot fit in the small number of pages allotted to you. How frustrating this whole business of studying is.
And yet, is this not the way of all flesh? No matter how grandiose the perspective, no matter how scientific the outlook, no matter how tough the requirements, no matter how astute the advisor, the result of the inquiry – in 99% of the cases – will be a report prepared under immense duress on a topic requested by some colleagues for reasons that will remain for the most part unexplained. And that is excellent because _there is no better way_.
From Reassembling the Social, pages 122-123
Thursday, August 14, 2008
The Michigan Court of Appeals just held, in an opinion that will be published (as in formally published in a court reporter; opinions that aren't published are actually "published," but only informally, and although unpublished opinions are not supposed to be precedential, they are used all the time to make arguments), that a release parents signed on behalf of their child was not necessarily binding. The case involved a kid at a child's fifth birthday party who jumped off a slide and broke his leg, after properly using the slide five times. The facility stated it would have supervision and that the facilities were safe, yet it had parents sign a release. The trial court had held against the parent and dismissed the case, but the Michigan Court of Appeals reversed and remanded to the trial court.
OK, now I have to find a way to connect this to the theme of my blog. It's this: who gets to author the child? Who can bind the child? In this case, the Michigan court basically said that a parent has no authority, simply by virtue of the parental relation, to waive the child's claims. This is really interesting, and I have always kept it in the back of my mind when I sign the many, many releases I sign for school and sporting events. The releases might not be enforceable. The case also raises the issue of a possible violation of the Michigan Consumer Protection Act, because the party provider arguably misrepresented what it was selling. The Michigan Consumer Protection Act is really useful. I almost think I should teach it in FYW, because my students tell tales all the time of how they were ripped off, and I'm always seeing violations of the MCPA.
One of my students wasn't hired for a day care job because she wore hearing aids. Clearly this was a violation of the Elliott-Larsen Civil Rights Act.
My point, as stated in my dissertation, is that the law has questionable agency. The average citizen would really benefit from being fairly familiar with some of the consumer protection laws and the laws that protect civil rights. And if you're someone who drafts releases or contracts, well, there are ethical as well as legal issues to think about.
The liability case is here: http://www.michbar.org/opinions/appeals/2008/081208/40179.pdf
Wednesday, August 13, 2008
Timmer describes the case:
"The case was filed by a literary agent, Barbara Bauer, who apparently ran afoul of a small horde of Internet users . . . it seems likely her problems started when her name appeared on a list of the 20 Worst Literary Agents, hosted on the now-defunct site 20worstagents.com. According to accusations made there, Bauer was on the list because she'd inflated her credentials and never successfully closed a deal; she was also called a 'scam artist' and a 'con.'"
Other blogs picked up this discussion and exaggerated some of the terms used to describe Bauer: "Bauer quotes different blogs as referring to her as 'that lunatic.'" Eventually some of the statements about Bauer circulating on the web showed up in Wikipedia, which is how Wikipedia became a defendant in Bauer's lawsuit.
Section 230 of the Communication Decency Act says: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Tuesday, August 12, 2008
Ardia's posts are going to cover the material in the Citizen Media Legal Guide.
Friday, August 8, 2008
According to the Globe article: "The case is not unprecedented, but it is a reminder that anonymous postings on the freewheeling Internet can be traced, legal analysts say."
The article goes on to say: "The women say Ryan made sexually charged slurs about them on the Web, including a false claim that one of them had a sexually transmitted disease. The lawsuit also says Ryan encouraged further attacks on the other woman and used anti-Semitic language."
During the last few months I've read a couple of reports, as well as a book, that discuss problems with developing an appropriate sense of ethics in law students. This case is very revealing on that point. The alleged defamatory statements were made on AutoAdmit, "an Internet discussion board about colleges and law schools that draws 800,000 to 1 million visitors per month."
I won't go into the lurid details of what the posters said; you can read them in the Globe article if interested. I just think the situation might serve as an interesting case study for the writing classroom on "anonymity" on the web. This story also points to the fantastical view that women have achieved "equality" in the legal sphere. The studies that wonder why women lawyers often drop out of practice, after having spent so much time earning their degrees, might find a partial answer in this story. I don't know. There's certainly a potential dissertation that could be developed from this incident alone!
Wednesday, August 6, 2008