Web-Based Data Collection
Pages 183-197

From page 183...
... I'm going to give a kind of fast review, in 25 minutes or so, of issues in Web-based data collection. I'll look mainly at measurement issues, and a lot of what I'm going to tell you is information I stole from Mick Couper.
From page 184...
... There are what might be called general invitation samples, volunteer panels, probability samples, and then, as a special case of probability samples, intercept samples, and I'm going to talk briefly about those four. The general invitation samples are simply Web sites where you can go and fill out a survey, if that's your cup of tea.
From page 185...
... There are some probability samples used for Web surveys. It is possible to get a list sample: for example, all the e-mail addresses of students at a single university.
From page 186...
... So one set of issues is: it's hard to sample in the absence of a frame, and, if you want a sample of the general population, there are severe coverage problems. A second set of problems with Web surveys is that the response rates tend to be low, and this slide, which I stole from Mick, shows the response rates in various mode comparison studies.
From page 187...
... The completion rate is the proportion of respondents with respect to the entire sample, while the response rate is with respect to the number of eligible participants. In this case, the completion rate denominator for group D would include those firms for whom Web access was not available.
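
To make the denominator distinction concrete, here is a minimal sketch in Python. The figures (sample size, number of firms without Web access, number of respondents) are invented purely for illustration and are not taken from the study being discussed.

    # Completion rate vs. response rate: the two differ only in the denominator.
    # All numbers are hypothetical, chosen to make the arithmetic easy to follow.
    sample_size = 1000      # every firm selected into the sample (e.g., group D)
    no_web_access = 150     # firms for whom Web access was not available
    eligible = sample_size - no_web_access   # firms that could actually respond on the Web
    respondents = 425       # firms that completed the Web questionnaire

    completion_rate = respondents / sample_size   # denominator: the entire sample
    response_rate = respondents / eligible        # denominator: eligible participants only

    print(f"Completion rate: {completion_rate:.1%}")   # 42.5%
    print(f"Response rate:   {response_rate:.1%}")     # 50.0%

With these made-up numbers the completion rate (42.5 percent) is lower than the response rate (50.0 percent), which is exactly the gap created by keeping ineligible firms in the denominator.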
From page 188...
... And, in part, that may reflect an absence of an empirical basis for knowing how to get a high response rate on the Web. This is a new mode of data collection; we don't know what the right schedule of follow-ups is, how to give advance notice, and so on; all that technology which is well established for mail surveys and telephone surveys isn't there yet for the Web.
From page 189...
... And that's what I call the "static format." And the other tradition that has emerged is that Web surveys are like computer-assisted surveys, so they ought to follow the conventions that CAPI surveys and CATI surveys follow. And I'll talk about that a little more.
From page 190...
... [See Figure II-25.] And so the idea is that a lot of Web surveys add various humanizing touches to make the survey more personal.
From page 191...
... People begin to react to these interfaces as though they were interacting with a person. So, for example, in the survey context, you may get social desirability bias effects just like you would in interacting with a human interviewer.
From page 192...
... Mick and I have done three Web experiments that compared a neutral interface with interfaces that featured human faces and various other interactive features. And we were looking at the various possible survey response effects that might emerge when you use this humanized interface, including people changing their answers to gender-related questions to appear more feminist if they had a female picture.
From page 193...
... So there is some evidence suggesting that adding humanizing cues to an interface can change things a little bit; we find little support for the social presence hypothesis, this idea that people react to an interface as they would to a person.48

48 The metrics referred to here are the Marlowe-Crowne Social Desirability Scale (Crowne and Marlowe, 1960) and the Balanced Inventory of Desirable Responding (Paulhus, 1984; also known as the Paulhus Deception Scales).
From page 194...
... So people attend more Little League baseball games than professional sporting events, but they're more likely to think of a professional sporting event when you ask, "Did you go to a sporting event last year?" You're not likely to think of the neighborhood Little League.
From page 195...
... Some of these design issues reflect that there are two separate and contradictory design traditions: one says that Web surveys are like usual computer-assisted surveys and the other says that, no, they're like typical mail surveys, like paper questionnaires. In addition, the visual character of Web surveys raises some interesting issues.
From page 196...
... And what we did is deliberately send out (this is to students at Michigan) a Borders gift certificate; Borders is headquartered in Ann Arbor. And when they completed the Web survey, they got an ID and could immediately claim a $10 gift certificate.
From page 197...
... For instance, we did a study (this is actually using interactive voice response, telephone Audio-CASI). Well, actually, we used the long-form questions from the census, the decennial.

