Saturday, February 21, 2009

Survey Says

No messing around.  Let's get to it: the Purpose/Subjects/Data/Generalizations heuristic meets surveys.

As usual, they set up the discussion and explain the heuristic at work.  They do so through three studies (referring to them as they do): a Texas study, Eblen's study, and Bamberg's study.

Surveys qualify as descriptive research and make descriptions of large research populations "possible with a minimum of cost and effort" by working with a sample of the group.  Surveys work via a broader synecdoche than case studies.

The tables in this section have the power to stupefy.  Table 4-1 lists percentages of confidence limits based on projected sample size, regardless of population size.  There are adjustments on the back end (see 4-2), but still the confidence scores are spectacularly confident. I would very much appreciate it if, in class on Monday, someone would explain the equations at the bottom of page 58.
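For the curious before Monday: tables like 4-1 rest on the standard sample-size formula, which is easy to sketch. This is a minimal Python version using the usual z-score approach, not necessarily the book's exact equations; the default values (95% confidence, ±5% margin, p = 0.5) are conventional assumptions.

```python
import math

def sample_size(z=1.96, margin=0.05, p=0.5, population=None):
    """Estimate survey sample size for a given confidence level (z) and
    margin of error. Hypothetical helper, not Lauer and Asher's table.
    Standard formula: n = z^2 * p * (1 - p) / e^2, with an optional
    finite-population correction."""
    n = (z ** 2) * p * (1 - p) / (margin ** 2)
    if population is not None:
        # Finite-population correction: for large populations this barely
        # changes n, which is why a table can ignore population size at first.
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size())                 # 385 at 95% confidence, +/-5% margin
print(sample_size(population=500))   # a small population needs fewer subjects
```

The striking part, and the reason the table can shrug off population size, is that the required n levels off: surveying a million people demands scarcely more subjects than surveying ten thousand.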

Oh, and subjects must be randomly chosen.  See table 4-4 for The Incredible Wachowski Brothers' Mystifying Number Table of Randomly Drawn Wonders.
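A random number table like 4-4 just mechanizes chance: every member of the population gets an equal shot at selection. In code, the same draw looks like this (a minimal Python sketch with a hypothetical roster, not the book's table):

```python
import random

# Hypothetical roster; any list of population members works.
population = [f"student_{i}" for i in range(1, 501)]

random.seed(42)  # fixed seed only so the draw is reproducible here
sample = random.sample(population, k=30)  # 30 distinct subjects, no repeats
print(sample[:5])
```

`random.sample` draws without replacement, so no subject appears twice, which is the same guarantee the number table provides when you cross off repeated entries.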

A decision must be made between multiple-choice and open-ended questions, the latter being fraught with ambiguity.  The logic here seems to run opposite to my instincts as a teacher: A, B, C, or D (none of the above) makes the empirical world go round.

Whichever method is chosen, the composition of the questions themselves is the most interesting step with surveys, requiring knowledge of the field, anticipation of the audience, and a review of past trends in similar studies, not to mention critical thinking and writing skills.  My bias, of course, is showing: this is the 1/2 step that doesn't demand much counting.

Simply said, your n (sample size) should be larger than your K (variables).  If not, you're doing a case study.

Then comes the classification of the type of data collected: nominal, interval, or rank-order.  Sprinkle atop some mean, range, standard deviation, and variance, and you're ready for...
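That sprinkle of descriptive statistics takes only a few lines of Python; the scores below are made up, purely to show the four measures side by side:

```python
import statistics

# Hypothetical interval data: scores from a small survey sample.
scores = [72, 85, 90, 68, 77, 85, 94, 61]

mean = statistics.mean(scores)
spread = max(scores) - min(scores)      # range
stdev = statistics.stdev(scores)        # sample standard deviation
variance = statistics.variance(scores)  # sample variance (stdev squared)

print(f"mean={mean}, range={spread}, stdev={stdev:.2f}, variance={variance:.2f}")
```

Note that these measures only make sense for interval (and, loosely, rank-order) data; for nominal categories you're stuck counting frequencies.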

Cause-and-effect statements should be avoided, and findings should be extended broadly only if the sample was randomized (for help, see the IWBMNTRDW table, 4-4).

Stay tuned, as I revert to metonymy to explain the synecdoche of surveys. First, I need to listen to a conference.

OK, a break in the conference.  As an example of Lauer and Asher at work, I'll try to work Wolfe's study through the various categories.

--purpose: to discern "which, if any, annotations will be useful to students" (301).

--subjects: 122 students enrolled in composition courses, who were volunteered for the study by their instructors.

--data: collected via post-writing questionnaires to gauge recall, source-text analysis to gauge mimicry, and student-essay analysis to gauge writing quality.  An additional questionnaire was issued when the two test groups produced radically different results.
---The data analysis was directed at "the effects of annotations on memory, attitude [subdivided into local and global], process, and written products" (307).

--generalizations: here the study goes into the no-no zone of discussing cause and effect (predictions): "Continued exposure to a variety of readers' annotations might help students, over time, develop better models of how readers interact with texts to construct meaning" (323).  The problem here seems to be that the "better" outweighs the "might."  In other words, if I'm following what these modes are supposed to do and not supposed to do, the descriptive study should limit itself to proving that the variables exist, not necessarily what they do.  The study could argue that it is justified in this by the essay-analysis portion of the study (which went beyond survey).  However, this portion of the study was crammed into a time period that the analysts themselves admit was too short.

I'm trying to find a problem here because, I think, that's the point.  Have I missed the mark (or, rather, have I missed the mark that missed the mark)?
