Saturday, February 21, 2009

Survey Says

No messing around.  Let's get to it: the Purpose/Subjects/Data/Generalizations heuristic meets surveys.

LAUER & ASHER
As usual, they set up the discussion and explain the heuristic at work.  They do so through three studies (referring to them as they do): a Texas study, Eblen's study, and Bamberg's study.

--purpose
Surveys qualify as descriptive research and make descriptions of large research populations "possible with a minimum of cost and effort" by working with a sample of the group.  Surveys work via broader synecdoche than case studies.

--subjects
The tables in this section have the power to stupefy.  Table 4-1 lists percentages of confidence limits based on projected sample size, regardless of population size.  There are adjustments on the back end (see Table 4-2), but still the confidence scores are spectacularly confident. I would very much appreciate it if, in class on Monday, someone would explain the equations at the bottom of page 58.
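In the meantime, a guess: if these tables rest on the standard confidence-limit formulas for a sample proportion, the math would look something like this (my reconstruction, not L & A's notation; p is the sample proportion, z the confidence coefficient, E the margin of error, N the population size):

```latex
E = z\sqrt{\frac{p(1-p)}{n}}
\qquad\Longrightarrow\qquad
n = \frac{z^{2}\,p(1-p)}{E^{2}},
\qquad
n' = \frac{n}{1 + \frac{n-1}{N}}
```

Plugging in z = 1.96 (95% confidence), the worst-case p = 0.5, and E of plus or minus 5% gives n of about 385 no matter how large the population, which would explain why Table 4-1 can ignore population size; the last formula would be the back-end adjustment of 4-2.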

Oh, and subjects must be randomly chosen.  See table 4-4 for The Incredible Wachowski Brothers' Mystifying Number Table of Randomly Drawn Wonders.
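For anyone who misplaces the Wachowskis' table, a modern stand-in, sketched in Python with invented details (the roster file, sample size, and seed are all hypothetical):

```python
import random

# Hypothetical roster of the survey population, one subject per line.
with open("roster.txt") as f:
    population = [line.strip() for line in f if line.strip()]

random.seed(42)  # fixed seed so the draw can be checked and reproduced

# Draw 100 subjects without replacement, each equally likely to be chosen,
# which is what a random number table accomplishes by hand.
sample = random.sample(population, k=100)
print(sample[:5])
```

The point either way is the same: the choosing must be left to chance, not to convenience.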

--data
(collection)
A decision must be made between multiple-choice and open-ended questions, the latter being fraught with ambiguity.  The logic here seems to run counter to my instincts as a teacher: A, B, C, or D (none of the above) makes the empirical world go round.

Whichever method is chosen, the composition of the questions themselves is the most interesting step in surveys, requiring knowledge of the field, anticipation of the audience, and a review of past trends in similar studies, not to mention critical thinking and writing skills.  My bias, of course, is showing: this is the 1/2 step that doesn't demand much counting.

(analysis)
Simply put, your n (sample size) should be larger than your K (number of variables).  If not, you're doing a case study.
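A sketch of the rule (my paraphrase of L & A, with numbers borrowed from the Wolfe study below):

```python
# n > K rule of thumb: more respondents than variables, or it isn't a survey.
n = 122  # Wolfe's sample: 122 composition students
K = 4    # variables analyzed: memory, attitude, process, written products

print("survey" if n > K else "case study")  # -> survey
```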

Then comes the classification of the type of data collected: nominal, interval, or rank order.  Sprinkle atop some mean, range, standard deviation, and variance, and you're ready for...
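To make the sprinkling concrete, a sketch of those four statistics using Python's standard library (the responses are invented interval data; nominal data would get frequency counts instead, and rank order, medians):

```python
import statistics

# Invented interval data: twelve respondents rating one survey item, 1-5.
responses = [3, 4, 4, 5, 2, 3, 4, 5, 5, 3, 4, 2]

mean = statistics.mean(responses)            # central tendency
spread = max(responses) - min(responses)     # range
sd = statistics.stdev(responses)             # sample standard deviation
var = statistics.variance(responses)         # sample variance (sd squared)

print(f"n={len(responses)} mean={mean:.2f} range={spread} "
      f"sd={sd:.2f} variance={var:.2f}")
```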

--generalizations
Cause-and-effect statements should be avoided, and representation should be extended broadly only if the sample was randomized (for help, see the IWBMNTRDW table, 4-4).

Stay tuned, as I revert to metonymy to explain the synecdoche of surveys. First, I need to listen to a conference.

Ok, a break in the conference.  As an example of Lauer and Asher at work, I'll try to run Wolfe's study through the various categories.

--purpose: to discern "which, if any, annotations will be useful to students" (301).

--subjects: 122 students, enrolled in composition courses, who were volunteered for the study by their instructors.

--data: collected by post-writing questionnaires to gauge recall, source text analysis to gauge mimicry, and student essay analysis to gauge writing quality.  An additional questionnaire was issued when the two test groups produced radically different results.
---The data analysis was directed at "the effects of annotations on memory, attitude [subdivided into local and global], process, and written products" (307).

--generalizations: here the study goes into the no-no zone of discussing cause and effect (predictions): "Continued exposure to a variety of readers' annotations might help students, over time, develop better models of how readers interact with texts to construct meaning" (323).  The problem seems to be that the "better" outweighs the "might."  In other words, if I'm following what these modes are supposed to do and not supposed to do, the descriptive study should limit itself to establishing that the variables exist, not necessarily what they do.  The study could argue that it is justified in this by the essay analysis portion of the study (which went beyond survey).  However, this portion of the study was crammed into a time period that the analysts themselves admit was too short.

I'm trying to find a problem here because, I think, that's the point.  Have I missed the mark (or, rather, have I missed the mark that missed the mark)?

Saturday, February 14, 2009

Case in Point

The first case study I remember reading is "Shut Those Thick Lips!" A Study of Slum School Failure by Gerry Rosenfeld.  The book was given to me by an anthropologist with whom I was team teaching.   A label marked it as from the "Case Studies in Education and Culture" series; otherwise, I might not have labeled it that myself.  It attempts to convey the disadvantage done to students in an impoverished, urban school, concluding that there is a perpetual exchange of condemnatory typing between teacher and student, student and teacher.  The teacher thinks the kids will never learn.  The students think the teacher is out to get them.  

Written in the seventies about the sixties, the book drew attention for its thesis.  It described the difference of these "slum" schools in a new way.  From reading Lauer and Asher, I can graft onto this the language of variables.  Rosenfeld's purpose was to describe them.  L & A would add that descriptions raise questions for future research.  While this more precise definition of case studies fits with Rosenfeld, I remember, also, the highly rhetorical nature of his questions.  He was clearly invested in them, in a different way than, say, Flower, Hayes, and Swarts, whose "provocative question" wonders how widely we should apply their "scenario principle" (a revision method that relies on a human-centered network) (56).  I don't mean to undermine the questions proposed in either study; rather, I too am attempting to describe the variable of their difference.

For Rosenfeld that seems to be his sense of advocacy.  His audience needs to take notice of his question.  Whereas Brandt's descriptions of Midwestern dairy farmers are rich with detail, Rosenfeld's descriptions are designed to evoke an emotional response.  One gets a sense of this strategy simply from his title.  He will reveal an injustice, an atrocity.  Part of his commitment emerges from his involvement as a teacher at the school being studied.  Brandt, Flower et al. have much more apparent critical distance; but, then again, perhaps not.  Flower is, likewise, a teacher of the subject she studies.

We have been asked to reflect on both the appropriate purposes of case studies and the kinds of generalizations possible.  Rosenfeld, paired with the readings we were assigned, has led me to these questions for future study:

-Is the kind of description that belongs to narrative—one which does not seek to submerge the signifiers of its rhetorical design—appropriate for case study?

-How much can a case study appeal to pathos before the study becomes something different, something we must seek another label for?  When does it disqualify itself?

-Is an author's complicity in the case being studied inversely proportionate to the level of generalization possible?

-To what extent can social justice be a part of one's purpose?  Does a case study become inappropriate with this as its starting point?

Brandt offers a perspective that seems important when approaching these questions.  Her close analysis, she claims, is not, like Flower and Hayes's, meant to "predict particular outcomes, but to understand better the struggles that economic transformations bring to the pursuit of literacy.  With this knowledge, educators might be in a better position to find ways to compensate for tears in the social fabrics that these transformations leave behind" (377).  The passage is remarkable for two reasons: first, it is a naked call for empathy; second, it would deploy this empathy as a restorative tool, as a means of achieving social justice.  If this is an acceptable purpose for case studies, perhaps, then, the questions that remain are just a matter of language.



Saturday, February 7, 2009

Avatar DNA

The IRB's testing module for internet-based research identified concerns that were similar to those in the unit on "Genetic Research in Human Populations."  The most significant challenges for each field emerged from the "problem" of information "that can be stored, transmitted, and analyzed with ease and power."  

Some common questions seem to be: Should information stored for one purpose be repurposed for another?  If the information in these samples holds stigmatizing data, how can that data be protected without blocking the flow of other critical but unthreatening data?  How can we identify what may be a stigmatizing indicator before it becomes stigmatizing?

It may be worth noting here that genetic testing and internet-based research are not mutually exclusive methodologies.  In fact, the former can more easily reconstitute itself into a field of study (genetics), whereas the latter is still in its nascent days of formal study.  Moreover, with increasing frequency, genetic testing and DNA services are going online and appealing to a mass market.  

Take, for instance, the image above from DNA Portraits, a company that offers its clients "the opportunity to enter the world of unique, personal art."  All one need do is request the company's "collection kit," send in a swabbing of cheek cells, then choose from "25 custom combinations" to generate one's very own DNA art piece.

If we admit that our presence online leaves behind a kind of DNA, not an equivalent, but certainly a strand of data that can be parsed for a variety of details about our makeup, how many of these strands do we inadvertently leave behind each day?  And if we have not sent away for a "collection kit," who, if anyone, has the right to collect them?

The IRB, in establishing its rules for the ethical treatment of human subjects, has established standards designed to fortify subjects' expectations of privacy, and, at a more fundamental level, their control over their own subjectivity (simply being present in a public space, physical or digital, should not automatically make you subject to federally sanctioned observation, although the "should not" here is continually being eroded).  But the more we swab ourselves, the more difficult it becomes to protect the integrity of our personal information.

Yet, to switch our sympathies from subjects to observers, there seems to be a more pressing concern for online research.  Unlike DNA, the analysis of which can boast 99% reliability, the cells left online are wonderfully prone to manipulation.  Web 2.0 is here, multiplying social exchanges.  But so is the age of the avatar.  What we exchange is not necessarily ourselves.  The traces we leave behind are modified in intricate and often contradictory ways, and this new manner of mediation seems to have developed a natural resistance to standardized testing.

The scientific communities, as well as those who govern them, face a slippery beast when data, subjects, and experiments migrate online.  If virtuality can indeed be regulated, I look forward to logging back in and taking the test.