
guess, you may want thirty months for that kind of device to emerge. The Pocket PCs right now are capable of doing what we’re doing with the Newton, and more, and thirty months from now they’ll be much stronger.

I don’t think that that perform[ance] factor will go away or—if it does—it will be back up to the Windows environment and you can port your software backwards with more opportunity. I think it … I’ve been uncomfortable since day one with the Newton because of the fact that there was no manufacturer and all the software developers were finding something else to do. Apple has continued to maintain the hardware, which has helped us. And we do, we send four or five a month back to be repaired. And without that things would be getting even more nervous now.

So I wouldn’t recommend saying, “Well, let’s buy a lot of them and they’ll be good for five years.” We’ve done that, but I wouldn’t like to do it again.

CORK: If we could hold off on other questions for right now, let’s thank Jay. We’ll take about a fifteen minute break while we set up for the last discussion. Jay and Marty willing, we can move the handhelds over to that table for people to touch and feel, and as long as we don’t confuse the Apple Newton with the Chicken Tuscan sandwich we’ll be fine.

PANEL DISCUSSION: HOW CAN COMPUTER SCIENCE AND SURVEY METHODOLOGY BEST INTERACT IN THE FUTURE?

CORK: To put a capstone on this workshop, what we wanted to do was to assemble a variety of different opinions and different perspectives to reflect on what was discussed at the workshop, and to provide their own views on the themes that were raised. And, then, to open the floor to some more general discussion. The moderator for this panel—I’ll let him handle the sub-introductions from there—is Mick Couper from the University of Michigan.

COUPER: Thanks. Well, I guess you can hear me—I hope you can hear everybody on the panel as we talk. I’m simply going to follow instructions, which I’m usually not good at, but I’m going to do my best at doing so. And what we have is this. Each of the panelists will have just a few minutes—about five to seven—to give their particular perspective on the material of both yesterday and today, under the broad theme of, “Can we all get along?” The broad theme is: what can we as survey researchers—and all of the panelists here are really survey researchers—learn from computer science in the broad sense, particularly the material we’ve heard in the last day or two. We will probably range all over the map, and once we’ve spoken we’ll open things up for more discussion … “stump the chump,” or take comments or reactions from you—however you want to do it.

There’s probably no logical order to go through this so I’m going to do it alphabetically. And what I’m going to do is briefly introduce the panelists so you know who’s up here, then we’ll go in alphabetical order with remarks. So, first, if my alphabetical system works … to my right, Reg Baker from MS Interactive … That’s the B’s. Bill Kalsbeek from the University of North Carolina. Tony Manners from the Office for National Statistics in the U.K. And Susan Schechter from the Office of Management and Budget. So we will first hear from Reg Baker.

BAKER: Should I sit here?

COUPER: Sure … or if you want to stand …

BAKER: Do people have a preference as to whether we stand … does anyone care? [Audience rumblings] Stand on a chair? Everyone but Bill?

KALSBEEK: Tough group … [laughter]

BAKER: [moves to podium] Let’s see … Someone yesterday characterized this session as one group of people saying it’s rocket science and another group of people saying that it’s not. Which pretty much, sort of, I think typifies what happens when you get anybody together with computer science people, who are very solution-oriented people and who always enjoy looking at problems and saying, “Sure, we can solve that. It’s no big deal. It’s really very simple.” Whether or not that’s the case, we’ll talk about in a moment.

But I think, in my own case—because my self-image is that I’m kind of part survey geek and part gear-head—is that I just instinctively believe that sessions like this—people getting together and sharing perspectives—are a really good thing to do. And I hope that—you know, addressing the specific thing that the panel is supposed to talk about—that we can move down a road where maybe we make more progress in the future than we’ve been able to make in the past, in doing a better job of borrowing from what we might think of as the gear-head culture, the comp sci community. However you choose to describe it.

But first I think there are some issues that the survey folks need to address. Let’s call them readiness issues, if you will. Bob Groves talked the other day about having been in a number of sessions like this; I’ve been in a fair share myself. And, increasingly, I sort of am reminded of the old joke, “How many psychiatrists does it take to change a light bulb?” And the answer is, “Just one, but the light bulb has to really want to change.” So I think that, in this case, the survey group is the light bulb. And, you know, if you’re going to take this seriously, there’s a lot of change that’s going to have to go on. And, in particular, a couple of areas occur to me.

In the opening remarks that Pat Doyle gave the other day, she talked about how we used to have these “loosey-goosey” questions, and how now, with technology, we are trying to add more precision. But I think there are all these symptoms that we still have a pretty loosey-goosey process out there, trying to produce all this precision. And the first order of business is really to go to work on that process and try to clean it up. A lot of what I hear, in terms of the sorts of problems we have, are really problems of management and of management discipline—when I hear that there are tools people can use and that they choose not to use them. Or, we’re pretty good about doing something until there’s a lot of pressure and a deadline; then the wheels come off, and everyone stops doing what they’re supposed to be doing. So there are some pretty straightforward management challenges in there that I think people know how to address.

But, more importantly, I think—we tend to get together and look at our part of the elephant. And there’s this whole business of a survey, particularly in the federal statistical establishment, that’s much larger than that little piece that we see. And, in particular, I think it’s worth stepping back and realizing that not all of the problems we’re talking about here originate in Suitland … some of those problems originate in downtown D.C., some of those problems originate in Hyattsville. And there’s a need to somehow get all of these people into the process, and to create a perspective for the people working in it about what it is that we’re really trying to accomplish here. Because, at the end of the day, what we’re trying to do is produce usable, good-quality, reliable data. And I think that easily gets lost when people are down in the bowels of these large surveys. So, to use what used to be the appropriate metaphor in this town, “It’s the data, stupid.” So when you look at decisions that you’re trying to make, you look at them in terms of: is this particular change going to contribute to that goal of producing reliable and usable data?

The second thing, I think, in all this is to reduce complexity. I mean, I was very amused—as was everyone yesterday—at those McCabe graphs, which showed the complexity and really drove home the point that complexity almost invariably leads to instability in systems. So we need to think about reducing complexity. A good friend of mine, a consultant—not a survey researcher by trade—used to say that the thing he noticed in working with survey researchers is that they seem to derive great pride from managing complexity when they should be deriving pride from eliminating complexity, from simplification. A case in point here, I think, is looking at the whole “his”/“her”, “was”/“were”, “a”/“an” kind of problem. And again, “It’s the data, stupid.” What is that really adding? And how much time and energy should go into that kind of thing?
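
[To make the cost of such fills concrete: even one grammatical fill adds branching logic that must be specified and verified for every combination. A minimal illustrative sketch in Python; the question wording and field names are invented, not taken from any actual instrument:]

```python
# Illustrative only: a toy version of the "his"/"her", "was"/"were"
# fill logic under discussion. Field names and wording are invented.
def fills(sex: str, n_children: int) -> dict:
    return {
        "HIS_HER": {"M": "his", "F": "her"}.get(sex, "their"),
        "WAS_WERE": "was" if n_children == 1 else "were",
        "CHILD_REN": "child" if n_children == 1 else "children",
    }

def question(sex: str, n_children: int) -> str:
    f = fills(sex, n_children)
    return (f"{f['WAS_WERE'].capitalize()} {f['HIS_HER']} "
            f"{n_children} {f['CHILD_REN']} living at home last year?")

# Each fill multiplies the wording variants that must be specified,
# reviewed, and tested: 2 sexes x 2 number forms already gives 4 paths.
print(question("M", 1))  # Was his 1 child living at home last year?
print(question("F", 3))  # Were her 3 children living at home last year?
```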

And I was reminded—this is a little off-track—that about ten years ago I was talking to a group of folks at a research conference about the sorts of decisions we needed to make about what sorts of systems we were going to develop for interviewers: the difference between informating systems and automating systems, and how those decisions really come down to what your philosophy of your workforce is, and how you are trying to get them to use technology. Automating systems are sort of systems for the factory floor, blue-collar technologies—they’re all about controlling people’s behavior. Informating systems are designed more for professionals—they have more to do with empowering people, using technology to give people information so that they can do a better job at whatever it is we’ve chosen to do. It seems pretty clear that these systems have gone down the route of automating—of turning face-to-face interviewers into what we’ve done to CATI interviewers, which is the automaton route. And I wonder about the wisdom of that—it may be too late to turn back. But—and, by the way, as everyone knows, I’m certainly a person who’s in favor of eliminating interviewers altogether—but as long as we’re going to have them, let’s take advantage of them, and of the richness of what has traditionally been the relationship between interviewer and respondent. And let’s not build systems that get in the way and reduce people to simply reading things off the screen in an automatic kind of way.

As for the gearheads … what I urge on that front, for starters, is how about a little humility? I mean, let’s face it—it’s not an industry known for continually producing error-free, on-time, on-budget product. And so sitting there proselytizing, telling people that adopting our methods will solve all their problems, can be a bit of a stretch. But that’s not to say that there’s not a lot of value there.

So, I think, really, that there are two points to be made. Number one is to be good consultants, to take the time and learn the business. I really wondered, as I sat through the discussion yesterday, whether this analogy between the interview and systems development is really as close as we like to think it is, and therefore what that says about the applicability of the tools in the two different environments. I liked Mark Pierzchala’s talk last night, which—as he said—was to bring us down to earth. And it did a good job of that, I thought. But, in general, this problem of people working in a software system like Blaise or CASES, and the degree to which that’s the same as doing complex systems development in C—with developers and professional programmers rather than with people who may not be professional programmers, and with specification systems and everything else—we have to look really closely at the degree to which those two problems track and have the same kinds of solutions.

And then, secondly, I think we have to be careful not to portray methodologies as silver bullets. We are very good—“we” being in this case the gearhead side of me—as somebody pointed out yesterday, at evolving methodologies once the current methodology becomes unsupportable. And so, yesterday, we got the latest panacea: extreme programming, and all the problems that it solves. And it sits out there—when you look at the face of it—and says, gee, we’ve got all this chaos, but we don’t need to solve the chaos; we can work within the chaos and still produce good stuff. And I think that’s not really the point of extreme programming, any more than it was the point of rapid prototyping. Granted, it’s a way to come to grips with the fact that there’s pressure to produce enormous amounts of software, with functionality that mostly no one wants or needs, as rapidly as possible. [laughter] So you have people like, you know—not to pick on Microsoft, but it’s amazing that someone from Microsoft has been here and there’s not been a single snide remark about the “Evil Empire” … [laughter]

COUPER: Yet …

BAKER: You know, I think Mick’s probably going to solve that problem when he gets up here. [laughter] But, nonetheless, I think you have to be really careful about recommending that people look at techniques like this, because they require a lot of things that, well, we folks on the survey side don’t have: the infrastructure, the skill, the money to invest in the kinds of systems we’re talking about. And, most importantly, the kind of culture that you have to create in order for methodologies like that to work effectively—not to mention how long it takes to get them to the point where they do what it is they need to do.

So, overall, I think that this has been really a fascinating conversation. Obviously, I thought most about it last night and this morning. But then, sitting here this morning and listening to three people talk about new technologies, it just occurred to me that everybody’s probably getting very excited … Well, people can’t seem to get too excited over the Web; I don’t know why that is … [laughter] But I’m sure that everybody loved—I know that everybody loves those little handheld devices; they’re very cool, everybody likes that stuff. What I understood about—what to me will forever be—the “Never Lost” system for interviewers was also very interesting. But that’s been kind of the extent of the transfer we’ve seen as far as technologies are concerned—the devices, the capabilities, but very little on the side of how, then, you have to change your processes and how you have to manage development to really take advantage of what these technologies have to offer. That’s what this is and ought to be about, and it’s a direction in which I hope we can continue to go. So, thanks very much.

COUPER: Why don’t we hold any questions or comments for Reg. That means you have the burden of remembering what Reg said when you want to come up with a comment. But I want to be sure to give everybody a chance to speak first before we do this. And you will learn as we go along that we’re going to be all over the map. So, this will give you a flavor of the variety of issues that we covered. So, next up, is Bill Kalsbeek from the University of North Carolina. And Bill has actually got visual materials … contrary to instruction.

KALSBEEK: I’m going to take my prerogative and use my time to introduce an angle, if you will, that hasn’t really been a focus of this conference. There’s been a lot of discussion of the general design of surveys and of the utility of the automation process—of finding people, collecting data from people, moving data, analyzing data, and so forth. Not a lot has been said, however, about the very front end of surveys. I’m speaking specifically of the process of constructing the list of households for sampling at the final stage. In area sampling, which is used in most major household surveys in the U.S., the traditional approach at the final stage has been to identify a relatively local area—below a block group, typically—and then to train field workers and send them into the designated area to go through it in a very systematic way and construct a list, or sampling frame, of housing units for purposes of selecting the final sample of households.

I’m from North Carolina and, in our state, we are attempting to mount an effort to produce an annual, ongoing longitudinal survey of households. And, as usual in these hard economic times, we’re looking for ways to do this that are budget-friendly. So there’s been a lot of thought in the last 18 months as to how we might do this in a way that makes maximum use of resources, and it’s covered all phases of doing a study. As everybody knows, doing a face-to-face survey is very expensive. A significant part of the field work budget in a household survey that uses this sort of manual field listing is the listing operation itself. So we began to think about ways we might save on that facet of the operation. What we ended up doing is sort of an extension of what Sarah was talking about earlier, in the use of GIS technology. I have a very low-tech overhead that I wanted to share with you—it’s actually taken from a PowerPoint presentation.

I went through and began having some very fascinating conversations with a GIS shop on our campus at the Carolina Population Center, and my colleagues there—Steve McGregor and Steve Walsh—have introduced me to some very intriguing possibilities. The possibilities built from the general notion I have depicted here on this slide, actually a slide from one of Steve McGregor’s presentations. It depicts how the layering of different two-dimensional data, as well as more characteristic-type statistical data, can be interfaced and joined. And with all the different possibilities—some of which we heard in Sarah’s talk—we began to think of ways in which we might accomplish this household field listing task utilizing GIS technology. The idea, as it turned out, was basically this: if we were able to use data from the Census TIGER files and overlay on that the information available through county property tax offices, using property tax parcels, the possibility might exist for circumventing the need [to identify and canvass] an area (say, a block group). [Instead, we could] construct a frame by interfacing or overlaying the property tax parcels—which are now rapidly being converted into GIS format in local county property tax offices—and utilize that interface to construct the list.

And, so, in our talks with the GIS people we developed a little plan. And what we ended up trying out—and I don’t have time to go through the results of the field test that we did—I’ll call it a modest field test of this idea. What we in fact did—on this very busy-looking map, a little bit too small for you to see in back—in essence, what we have here is a green depiction of a block group, and overlaid on that are the property tax parcels for Orange County, North Carolina. This is for a community, Carrboro, which is adjacent to Chapel Hill. Now the things I wanted you to see in all of this are, first, the overlay of the property tax parcels on the TIGER maps, and then the link to some data that was available from the property tax offices—things like the number of buildings on the property, what the property value was, the type of structure on the property, and so forth. Information that the property tax office at the county level might have. The idea was that, utilizing the GIS software available to us, we found we could construct a frame using the block group—one of the block groups on this map is basically there. We were able to point-and-click to construct a listing of parcels—not of individual housing units but of parcels. So then a number of questions emerged. Could this list of parcels, for a selected area, serve as a plausible frame, or the basis for a frame, for household selection?
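
[The point-and-click overlay described here is, in modern GIS scripting, a short spatial-join operation. A minimal sketch in Python using the geopandas library; the file names, GEOID value, and attribute fields are assumptions for illustration, not the actual Orange County data:]

```python
# A minimal sketch of the parcel/block-group overlay, assuming the
# geopandas library; file names and fields are illustrative only.
import geopandas as gpd

# Census TIGER block-group boundaries and the county parcel layer
block_groups = gpd.read_file("tiger_block_groups.shp")
parcels = gpd.read_file("county_tax_parcels.shp").to_crs(block_groups.crs)

# Select one sampled block group by its identifier
bg = block_groups[block_groups["GEOID"] == "371350107011"]

# Spatial join: keep parcels whose geometry falls inside that block group
frame = gpd.sjoin(parcels, bg, predicate="within")

# The parcel list, with whatever tax-office attributes are carried on
# the layer, becomes the final-stage sampling frame
print(frame[["PARCEL_ID", "BLDG_VALUE", "NUM_BLDGS"]].head())
```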

So what we ended up doing was developing a field experiment, which we conducted in one of the counties adjacent to Chapel Hill, looking at a number of questions that we thought needed to be answered before this idea could be implemented. Questions such as: once you select a sample of parcels, are you able to send someone out, and are they able to readily identify the housing units that link to the parcel? This linkage, in frame construction and sampling, is the essence of the enterprise, and if it can’t be achieved this won’t work. So one of the things we wanted to find out was whether it worked, and what we found, basically, is that it did. We constructed an experiment based on a design where each task to be tested was done independently, twice; then we made a comparison, looked at the agreement, if you will, between the tasks, and adjudicated the results to see how effective it was. We were able to locate the housing units.

Another question that came to mind was: if you construct a frame like this, you’re going to have a number of parcels with nonresidential structures, parcels that are vacant, parcels that have businesses on them—are you able, in any sense, to delineate or isolate those that are more likely to have residential structures present? And the answer—with some uncertainty due to the variability in the amount of information that property tax offices collect—is a qualified yes. The reason is that the information we were able to link into from the property tax offices includes some things such as the valuation of a building; if the value is 0, then it’s a fairly safe assumption that there’s no building there. Or, in some instances, the number of buildings is actually recorded, and so you can identify a vacant lot. In some instances—and in the case of Orange County this was true—they provided a descriptor or classification of the parcel to identify whether it was residential or a public area or the like. So you can do some limited screening to narrow things down. So, again, to the question of whether you would be able to isolate the more residential-type parcels, the answer was a qualified yes.
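
[A rough sketch of this limited screening, in Python; the field names, land-use codes, and example records are hypothetical, since tax offices vary in what they record:]

```python
# Hypothetical screening rules: value 0 or zero buildings suggests a
# vacant lot; a land-use code, where present, flags nonresidential use.
def likely_residential(parcel: dict) -> bool:
    if parcel.get("BLDG_VALUE", 0) == 0:   # value 0: fairly safely vacant
        return False
    if parcel.get("NUM_BLDGS") == 0:       # recorded count of buildings
        return False
    land_use = parcel.get("LAND_USE")      # some counties classify parcels
    if land_use is not None and land_use not in ("RES", "MFR"):
        return False
    return True  # a qualified yes: keep the parcel in the frame

frame_records = [  # invented examples
    {"PARCEL_ID": 1, "BLDG_VALUE": 185000, "NUM_BLDGS": 1, "LAND_USE": "RES"},
    {"PARCEL_ID": 2, "BLDG_VALUE": 0},                          # vacant lot
    {"PARCEL_ID": 3, "BLDG_VALUE": 420000, "LAND_USE": "COM"},  # business
]
print([p["PARCEL_ID"] for p in frame_records if likely_residential(p)])  # [1]
```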

There were a number of other questions, but just so I don’t completely run out of my time allotment … The other essential question we thought we needed to answer was: given that you have a sample of these parcels, and you send people to the field, will they be able to locate the correct person? That was probably the key question for us to answer and, again, our data from this limited field test suggest that the answer is yes. And how did they do that? Well, somebody said it earlier—in rural areas, we found that the primary utility was in the GPS device that we provided. In addition to the property tax information, these GIS files provide things like the centroid of the parcel, so you can use a GPS unit to actually [place] it. And it was remarkably accurate in the instances where we tried it in the field. So, in the rural areas, the GPS was very useful; in urban areas, we found that maps are more useful for that exercise.
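
[Using the parcel centroid with a GPS receiver amounts to computing the distance from the interviewer's current fix to the sampled parcel. A small illustrative sketch in Python, with invented coordinates and the standard haversine great-circle formula:]

```python
# Coordinates are invented; the haversine formula itself is standard.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in meters between two (lat, lon) points."""
    R = 6_371_000  # mean Earth radius, meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + \
        cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

parcel_centroid = (35.9100, -79.0750)  # centroid from the parcel GIS file
gps_fix = (35.9088, -79.0761)          # interviewer's current GPS position
print(f"{haversine_m(*gps_fix, *parcel_centroid):.0f} m to parcel")
```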

So, that’s about all the time I have—and I do have some things I want to say about the speakers as well—but I wanted to throw out into the discussion this other possibility, dealing with the front end of the survey, as a potential application of the technology.

COUPER: Thank you, Bill. As I warned at the beginning, this panel is really going to be all over the map, literally. Next, we’re going to get a perspective from the U.K.; Tony Manners is going to add his comments.

MANNERS: Yes, literally, I’m from another part of the map altogether. And my brief here is to try to give a picture from another country, where the assumptions we work under are a bit different. Just to give you an idea about ONS, the Office for National Statistics in the U.K., and what scale it’s on, so you can get a picture: we have 1,200 interviewers. We do, I think, 600,000 household interviews a year—a million and a quarter adults interviewed, something of that order. We have about 20 projects during the year, of which about five are continuous surveys and the others are ad hoc surveys. And the variation that we have to deal with is … our Labour Force Survey, which is like your Current Population Survey, is a 40-minute interview; the clients can change 10 percent of the content every quarter. A more stable one is the General Household Survey; that changes about 30 percent of the content each year. Our Omnibus Survey changes something like 80 percent of the content every month. So, with that kind of range, we’ve always been looking for automated testing. Well, I should say we’ve been in the CAI field for more than a decade; we’ve always been Blaise users.

Looking for automated testing systems … we’ve never really found anything that didn’t take more effort to set up than the risk justified. So we’ve tried to concentrate on the other end; we’ve tried to squeeze places where errors occur, concentrate on that end. And that relates to some of what we heard yesterday. I’m going to talk about integrating survey processes, re-using code, standardizing, and—as Reg said earlier—“keep it simple.” Those are the areas that we’ve concentrated on.

The thing about integrating survey processes—I won’t talk at length about this because I’ve talked about this here, before. We don’t have a problem of miscommunication between researchers and programmers because the researchers do the programming. Our researchers are people who negotiate with the clients, write the instruments, organize or manage the project during the fieldwork, and analyze and write the reports. So, they’ve got an overall picture.

We do a lot of standardization. One of the things that strikes me in the U.S. is that you seem to build case management systems around particular projects. I mean, we just have a case management system that projects slot into. Maybe I’ve misunderstood … We do a lot of standardization of code. We have a big initiative in government—across all departments in government—to standardize all of the basic classificatory information. And, because Blaise is such a modular language, you can easily produce modules which people—these researchers—can literally assemble into questionnaires.

One of the things, actually, that has been remarked on is that we talk about complexity all the time. Actually, our software has gotten better in the sense that it enables you to do things more simply; you just have to be sure that your aim is simplicity. That’s been one of our uses of the better software: we don’t do more; we actually try to do less.

Something else that I think is worth mentioning, because I haven’t heard much about it, is how important it is to look at the whole process and to build the output structures into your instrument. You build things up front, but you’ve got to be able to deliver data to people. And, in our experience, one of the areas where that potentially goes most wrong is between the output of the field instrument and whatever databases your clients are using. So we devote a lot of effort to persuading our clients of the virtues of simplicity. On our Expenditure and Food Survey—which is a little like your Consumer Expenditure Survey, but it’s got nutrition as well—we produced basically 99 tables out of the instrument. It was literally 99, and it was a painful process. We persuaded them to move to 3, following the natural structure of the instrument.

Something else we do … we write deliberately inefficient code. Code, that is, which is inefficient in programming terms but is very efficient in terms of the overall process. We do things so that the whole organization can understand the code that’s been written. For those who use Blaise, for example, we don’t use parameters—because that’s too much like real programming.

Like I said, we re-use code. We have a list of standard modules. And in our ad hoc surveys, something like 80 percent of the survey will be completely new—but the structures aren’t new. So what we do is have people use templates, which use the structures they want, and again assemble those.
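
[A sketch of the module-and-template idea, rendered in Python rather than Blaise for illustration; the module names and contents are invented. In ONS practice these would be standard, pre-tested Blaise blocks that researchers assemble into an instrument:]

```python
# Invented standard modules; each stands in for a pre-tested block of
# questions that is re-used verbatim across surveys.
STANDARD_MODULES = {
    "household_roster": ["Name", "Sex", "DateOfBirth", "Relationship"],
    "classificatory":   ["MaritalStatus", "Ethnicity", "EconActivity"],
    "income":           ["EarnedIncome", "Benefits", "OtherIncome"],
}

def assemble(template: list, new_items: list) -> list:
    """Instrument = re-used standard modules plus survey-specific
    items slotted into a standard structure."""
    questions = []
    for module in template:
        questions.extend(STANDARD_MODULES[module])  # pre-tested content
    questions.extend(new_items)  # the ad hoc (new) content
    return questions

# An Omnibus-style month: mostly new content, standard structure
omnibus = assemble(["household_roster", "classificatory"],
                   ["NewTopicQ1", "NewTopicQ2"])
print(omnibus)
```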

Standards … our interviewers work across a range of surveys, as is the case with many of yours, and obviously they’ve got to be looking at the same kinds of screens all the time. So you have to have standards for things like that. But we also have standards for writing code and so on. We have a small group—one full-time equivalent, but it’s a number of people—called the Standards and Quality Assurance group, which keeps a check on whether people are actually following the standards.

I’ve spoken about “keeping it simple.” We do that literally, in terms of—if people want to do anything different from the standards, from the norm, they’ve got to make a business case for it. You have to be very clear that something you add adds value.

Coming back to the things that we heard yesterday … yes, I think Pat Doyle talked about enforcing specifications—or getting agreement on specifications. We spend a lot of time on that, and on trying to draw clients in so that the client understands the benefit they will get from very clear specifications. It sounds obvious, but it’s not always as simple as it seems.

Something Bob Groves said … I’d just like to note that my organization has made very significant savings from moving to CAI. I think that’s not normally the case, but the reason we’ve done it is that that was our initial motivation: we had to save money, and so we designed our systems to save money.

Thomas McCabe—I very much like his index of unstructuredness, and I plan to use that.

With Harry Robinson, I agree very much with his idea about trapping as many bugs as time and resources allow—that sounds like a real goal. I agreed very much with Robert Smith, and with a lot of what he said about prototyping, design restraint, modularity, transparency, and building quality in from the start. Larry Markosian, the same, in terms of iterative development and involvement of the stakeholders throughout—I think that’s a very important point. And with Mark, of course, I agree about coming back to Earth, and I liked how he drew attention to the sort of “fuzzy” things that we have to test.

So, I haven’t said a great deal about our testing process, but I suspect it’s much like everyone else’s, with an awful lot of manual testing and people trying to do that in as intelligent and structured a way as possible. But, as I say, we’ve concentrated more on the places where errors might occur. So …

COUPER: Thanks, Tony. Next, we’ll hear from Susan Schechter from the Office of Management and Budget.

SCHECHTER: I’m going to spend just a few minutes talking about the OMB process, from an official perspective, and then I’m going to give you a couple of my opinions on how that integrates with survey automation. And then I’m going to mention briefly two initiatives that I think will continue to affect the submission of collections to OMB, from the agency perspective.

Most of you know—but I’m not sure that all of you know—that the OMB process basically requires every federal agency that wants to collect any amount of information from one or more persons in the public, whether a business or a school or a household respondent, to come to OMB to get permission to do so. And that’s called the clearance process. The main objective of the agency is to get an OMB clearance number and an expiration date. And the expiration date typically is three years, which means that once you get OMB clearance approval you don’t have to come back to OMB for three years.

This process, for agencies, is agonizing. I used to be in an agency, and I’ve worked that process, and I know how agonizing it is. Listening to Pat’s talk yesterday and to Jesse Poore’s talk, probably the biggest problem, to me, is that OMB comes in right at the end of the process. OMB has typically not been involved in the conceptual design of surveys or in the development of the questionnaire—there are exceptions, but typically not. Yet it’s OMB’s responsibility to review and approve the survey.

I want to mention—I don’t know how many of you know this—that of all the data collections OMB approves during the year, less than 5 percent are sample surveys. The vast majority are reporting forms. In fact, 80 percent of the entire burden that OMB approves during the year is IRS forms. So we’re not talking about a process that fits just the survey world; we’re really talking about a process that has to fit a tax report form, a birth form, any form of administrative reporting, a federal loan application, etc. The whole process has to accommodate all of these different types of forms.

What ends up happening, when the agency wants to finish its development work, is that the agency has to tell the public in a Federal Register notice that it’s getting ready to send something to OMB. And the public gets 60 days to comment on the agency’s notification of a proposed information collection. Then, after that 60 days closes, the agency comes again to OMB, this time with a Federal Register notice that says, “now we really mean it, and we really want to collect some information; you have 60 days to tell us if you have any objections or concerns about that.” And after that second 60 days is when OMB is supposed to act.

In the past, paper-and-pencil forms were most common, and in many cases they still are. But let’s talk about surveys for a second … OMB had a fair amount of discretion during those 60 days, because the form was still sitting at a forms design desk, unprinted, because it didn’t have a clearance number and an expiration date. When surveys transitioned to automation, that’s where it changed. And it changed dramatically—I’m not sure OMB yet realizes just how much it has changed. Because the truth is that, by the time a survey comes to OMB, there is very little flexibility for OMB to change something in the instrument. I would say that in most cases agencies are tremendously resistant to any change in wording, in response categories, in question order—to something very minor where a policy official in OMB may say, “why don’t you ask this question?” One question, on something like the CPS or the SIPP, would probably require that the agency have a year of advance notification in order to ask it. Because there’s this testing process that Pat illustrated; there’s the documentation process. There are all kinds of issues about field manuals and programming, and all the pieces of the puzzle that go into building a survey. So I would say that OMB does the best it can in its function to review instruments and data collection, but the focus has become less and less the quality of the survey instrument and more the utility of the survey, the design, the methodology. Yes?

MARKOSIAN: I didn’t understand, perhaps I missed at the beginning … what are the criteria that are used and why is your agency involved in this process?

SCHECHTER: Ah, yes … The Paperwork Reduction Act specifies that agencies should really try to stop burdening the public with all of this collection of information. And so, if you have to burden the public, you have to tell OMB why you need this information. You have to demonstrate the practical utility of the information, and you have to demonstrate that you’ve reduced the burden as much as possible. It’s a way to manage what is perceived as this huge burden on the public. In fact, a report just came out that said it’s something like 7 billion hours a year that the public suffers from—but much of that, 80 percent of it, is IRS forms. So that’s why … [laughter] So it’s a way to try to reduce agencies’ wholesale asking of people for information.

So when Pat showed you yesterday, for example, the variation on the children’s health insurance program question, the truth is that OMB rarely sees all those different versions of the question. They think they’re seeing the question that’s going to be asked—you don’t always see the question in all the different modes; you don’t always even understand all the different modes in which it’s being asked. The agencies certainly have a responsibility to document their methods to OMB, but a lot of that depends on the familiarity the desk officer has with survey research. Some of the desk officers work much more with forms, not with surveys—with administrative kinds of data collection—and they are trained in policy areas, in economics, etc. They are not survey methodologists, per se, although some are; most are not. So there’s a tremendous amount of variability in the desk officer role.

I would not say that OMB should move in a direction where we insist on more and more documentation from agencies. I wouldn’t really like to see that happen. On the other hand, I’m not sure what our responsibility is when we’re told that the survey may be completed on the Internet, for example, and that the agency may offer that as an option. Should the agency be sending us all the screens that the person is going to see? Should the agency be sending us a test demo, or a link, where we would see that? There are some desk officers who ask for that, but I would not say that it is a routine kind of request.

So my big conclusion, I guess, is that in terms of automation we are not consistently reviewing all of these submissions in the same way. There are some submissions that are reviewed much more carefully. The documentation we see—if it’s a CAPI or a CATI instrument—usually has coding that shows some skip patterns or some GOTO instructions; sometimes not. Sometimes you just get lists of questions. And we have to review the list of questions and assess whether that’s enough to approve it, or whether we have to go back to the agency and say, “could you give us more to help us understand the real content of this survey?”

I want to tell you about two initiatives that I think will affect the agencies in coming years; one has already passed. There’s an initiative called GPEA, which you may have heard of—the Government Paperwork Elimination Act. The essence of this act is that every data collection must offer respondents the option to reply in an automated fashion. It doesn’t mean you must get rid of paper-and-pencil surveys; it means that you must offer an option to report on CD-ROM, or on the Internet, or in some other kind of electronic mode. Not all surveys have come into line with that, and I don’t think we have a lot of pressure yet to force it, although we are certainly expected to be asking the question: if this is a manual survey, when are you moving to an automated option? I do think that in the future that’s going to become a bigger issue. And I think there will be difficulty for some surveys in doing that—for example, there is just a resource constraint. I’m not sure what the current status of the National Crime Victimization Survey is, but the last time we reviewed it, it was still paper-and-pencil because they didn’t have the money to pay for the conversion to CAPI. So if OMB is not going to provide that money, and yet there’s a legislative requirement to offer an automated reporting option, I guess at some point there’s going to have to be a meeting of those minds.

I’ve been told that a CAPI interview will qualify as automated reporting from a respondent. But my guess is that agencies will have to demonstrate that an interview is so complicated and so complex that you have to have an interviewer’s help—that you can’t just tell respondents to do it by telephone data entry, or just give them the Internet option. I don’t know how that is all going to play out.

The other interesting thing that’s happened this year—and I’m not sure that we’re going to really see the effects of it for awhile—is the Data Quality Initiative, which was put into the OMB appropriations language last year and ended up requiring OMB to put out data quality guidance and requiring that agencies put out data quality guidance. And I have to step back from that for a minute, and I will come back to it. When OMB approves a survey, OMB never sees the survey again. We approve a survey, and then the agency says that this is the questionnaire we’re going to use, this is the methodology we’re going to use, and this is the response rate we think we’re going to get. We don’t say, “now, come back and tell me next year how you did, and give me a copy of the report showing the data.” Sometimes we do, if we have a personal interest, but generally we do not. And unless the agency substantially changes the content or the sample design—something really substantial changes—they don’t have to come back to us. So we only approve up front. The Data Quality Initiative only cares about the end—the end analytic result—and whether or not the findings are based on high-quality data; it doesn’t really care about the process and the development of surveys and the data collection. But I think what’s going to happen is that agencies are going to find themselves doing more and more documenting, more and more publishing of what their technical notes are, what their concerns are, what their issues are about data quality. I think that archiving will end up becoming more of an issue, because people are going to be able to go back and say, “Well, you based some rule on a study that was done that was unpublished—I’d like to see some of that data. I’d like to be able to understand how you came to the conclusions that you came to.”

So I do think that the Government Paperwork Elimination Act and the Data Quality Initiative will impact the OMB process—I’m just not sure how. So I’ll leave you with those thoughts for today …

COUPER: Thank you, Susan.

Well, I just sat there—and we’re running out of time—so just to segue into the open discussion, let me make a couple of remarks. I know Harry from Microsoft is here, and some of the computer scientists, so I promise not to say anything nasty about Microsoft. Which will be difficult for me.

But just to kind of caper into the lion’s cage for a moment and then come back to what this discussion has been about over the past day and a half—what can we do to get together? What can we as survey researchers learn from the computer scientists? And you’ve heard a lot of different perspectives about things that we can do.

I would just make one assertion, just to get the discussion going, which is that the burden is on us. It’s going to be hard for computer scientists out there to say, “Gee, this is a fascinating problem. I want to devote my life, devote my resources, to that.” The burden needs to be on us, and I think that we do ourselves a disservice if we keep on doing this hand-wringing about how unique and how complex our problems are. And this isn’t aimed specifically at you, Pat—we all do it. If our problems are so unique, how are we ever going to get people in other disciplines interested enough in our problems to devote time and energy to finding out what those problems are? And I think that on that point we’ve heard several suggestions and several comments. Among the ideas that came up, particularly yesterday, are things like the complexity graph, things like model-based testing—using tools that are out there, using ideas that are out there. And we can take those ideas and apply them to our subjects, in small incremental steps. I think we also fail ourselves sometimes when we think of everything as a massively complex task, and it becomes unmanageable: “we can’t handle it because it’s too big,” and then we stop. Many of the problems and many of the solutions I heard yesterday are really on the order of one smart graduate student working over the summer to demonstrate how these kinds of tools could be applied to a module of a survey—enough to get people excited about the idea.

The other final comment I would make—and then I promise I will turn it over to the discussion—is that one of the things I learned yesterday is that the big surveys, despite being extremely complex, are not so complex that we cannot analyze them. People showed that we can analyze them, that we can generate graphs from these systems. So they might be complex, but they’re understandable. The IDOC system showed that we can extract systematic information from them to produce documentation. And if we think of automated survey systems in that way, they are certainly complex, but they can be made less complex and more manageable. There are systematic aspects of them that are amenable to writing scripts that will parse all the information and produce diagnostic tools. So I think we can make progress.
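
[As an illustration of the kind of diagnostic script suggested here, the following Python fragment turns a routing table into a flow graph and computes a McCabe-style cyclomatic complexity, V(G) = E - N + 2. The routing table is a made-up fragment; a real script would parse the instrument's specification or source instead:]

```python
# Made-up routing fragment: question -> possible next questions
routing = {
    "Q1": ["Q2", "Q3"],   # a skip: Q1 branches on its answer
    "Q2": ["Q4"],
    "Q3": ["Q4"],
    "Q4": ["END"],
    "END": [],
}

n_nodes = len(routing)
n_edges = sum(len(nxt) for nxt in routing.values())
complexity = n_edges - n_nodes + 2  # McCabe: V(G) = E - N + 2
print(f"nodes={n_nodes}, edges={n_edges}, V(G)={complexity}")  # V(G)=2
```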

I’m heartened by what I saw—a lot of ideas over the last day and a half. I think there’s a lot we can do to improve the process, and I think we ought to be taking the lead, and the next step, to do that.

So, with that, what I’m going to do is kind of moderate and manage the discussion. And we can slowly segue into lunch after a few minutes. So this is now open to the floor to add comments, questions, ultimatums, or whatever you would like.

ONETO: I’ll start with a question for Susan. I was intrigued by the comment you made about OMB getting into the scrutinizing of data quality. I just … I mean, you could have a survey that is just beautifully documented, that has very modest respondent burden, that doesn’t duplicate the information that any other survey collected—and it may still not have the data quality. So the question is: who is going to determine what “data quality” is? And how will OMB be judging something like that?

SCHECHTER: Right … It’s an excellent question; it’s something that has us all worried. I would say that the statistical agencies themselves are probably going to come up with a common framework for how they’re going to define and look at data quality. The essence of this particular statutory initiative was that rules that are very costly to implement are often based on studies that EPA or other agencies, HHS, may do, and then, based on the results of those studies, they implement rules that might require industry to spend a fair amount of money to do something. There were a lot of complaints in the last administration—remember the big ergonomics snafu, where we finally got this ergonomics rule passed and the new administration came in and pulled it back? Part of that was that the rule was based on data that was not of high enough quality to justify it. I’m not exactly sure what the outcome is going to be for survey data. The way the guidance is written, if the data are “influential” they are held to a higher standard; and “influential” is defined as being used for policy or decision making. Well, if you think about most of the data that comes out of the Census Bureau, most of the data that comes out of the Bureau of Economic Analysis, out of BLS, etc., the data are used for decision making. There are some who think that this will end up in the courts, and that the courts will eventually get involved in some disputes about how high the quality of the data was for use in decision making. I’m not sure where it’s going to go. I think, personally, that it’s going to become a new area that some people will work a lot of hours in.

DOYLE: Most people might predict that the issue I’m going to raise is one of documentation … [laughter] And I’m hoping that my documentation folks will back me up here. I am encouraged by the progress we’ve made in documenting instruments. I am not quite as optimistic as you are that we’ve really solved the problem yet, because neither of the two tools we have is really ready for prime time today—but we have needs today that need to be filled. My bigger concern is that, with automation, we lost the free good of the paper instrument, and we need a substitute. We don’t have an automated substitute, and we’re never going to get an automated substitute that can provide the context for each question, why we ask each question, etc., etc. So I think we need a change in behavior among our survey methodologists, among our survey managers, among the managers in the organizations—and it applies to testing as well. It’s a new behavior we need to adopt so we can continue to provide the same information we’ve always provided. So what I want to do is hear from those of you in the education business, particularly those of you involved in educating our survey methodologists—our future managers: will you consider adapting your curriculum to address some of these issues? They’re not theoretical survey-methods issues; they’re very practical management and design issues. It gets to the issues Tom [Piazza] was talking about, about choosing a simpler approach: something may appear to be precise but not really be so precise, and should be kept simpler. But I think the only way we can make a wholesale change in all of this is to educate our up-and-coming folks to move in that direction. So I’m wanting you guys to be enthusiastic about this, to take this task on, to expand your teachings.

PARTICIPANT: Any education people want to take that up?

PARTICIPANT: She glared at me when she said that … I think it’s not only a matter of capturing the documentation. One of the worst problems we’ve always had working with surveys, or any old data set, is that the documentation is not only incomplete but unstructured. And—as Tony Manners said—we need to think in terms of developing surveys within a structured system that captures those decision points, that captures the conceptual points as well as the survey development. A system that not only captures the information but does it in a structured way, so that, as information is turned in to OMB, they know what they have to look at. Within such a structure, you could begin to define some of those complexity measures, those tests for complexity. You want a structured system, because that’s when you can test computer programs and their structure and know what to expect; when your documentation isn’t structured in a uniform way, [that’s a difficulty]. So, I think it does get back to the documentation: it needs to be captured completely and in a structured manner, so that you can go through the process to the data that comes out at the end and—when doing analysis down the road—go back and see how you got to that point. I think that’s what needs to be worked into this whole process. Documentation and testing have traditionally been things done after the fact, and they need to be thought of [all along].

TUCKER: I want to go back to something Mick said … is this really as complex as we make it out to be? I don’t think that the survey process itself is that complex; I think that what’s complex is the organization built around it. It’s become so complex because we’re not organized to do it well. And so, as I asked yesterday—I think it’s a people problem, not a technology problem. I think, ultimately, we’ll have the technology—the question is whether we’ll know how to use it.

COUPER: Let me respond to that, and I think Reg touched on it in his remarks…. It’s about processes and about people, and it touches on the documentation, about skill base and knowledge, about being able to use the tools that are available. So I do think that it has to do a lot with structure and management and those sorts of things. So I agree with you. Reg, you want to jump in on that?

BAKER: Well, let me just say this, and it’s not going to be very popular. I think there’s … I’ll just spit it out. [laughter] There is a tendency, I think, in the federal statistical establishment to let standards run amok. And there are lots of examples of that. You know, the one I talked about a while back at CASIC has to do with security at BLS: well-meaning folks managed to pretty much kill any chance the CES had of being deployed on the Web in any kind of planned fashion by insisting that people download digital certificates. Now, who’s not in favor of security and confidentiality of data? We all are. But you still have to be practical about it. A lot of the things you heard Pat talk about yesterday are the same sort of thing: if you do a very careful, conscientious reading of the methods literature as you’re designing questionnaires, it’s very easy to fall into the trap of wanting to do all of these things that add a lot of complexity, which … you know, methods studies single out…. I was listening to Roger [Tourangeau] this morning and thought, “Damn it, studies like this are just going to make things more complicated.” So I think there’s that tendency …

TOURANGEAU: I’ll go sit outside … [laughter]

BAKER: But I think that there is that tendency, and the part that’s probably not popular to say is that it’s because a lot of this evolves in a non-competitive environment, where the need to be leaner, to be faster, simply doesn’t exist. And so we can in fact spend a lot of time on standards, and on insisting that standards be followed in a certain way—a luxury that someone in the private sector maybe doesn’t have; they need to look at those things from a practical perspective and make hard choices.

KALSBEEK: I’d like to comment on the issue of simplicity. I’m for simplicity; I’d like to go on record as saying that …

BAKER: Everyone always is … [laughter]

KALSBEEK: … The difficulty is, my field is public health. I work with researchers who ask difficult questions about important, difficult societal problems such as abuse in the home. So if I’m working with that researcher and my job is to help that person extract information from women who may be exposed to abuse of their children or abuse of their spouse, I’m touching on some very difficult ground. And it may be—and there’s plenty of evidence to suggest this—that if I’m that respondent, I won’t be fully forthright if you ask me the question in a conventional, simplistic way. So I may have to find some more complex alternative, such as preserving confidentiality through use of technology to remove identifying information—some states have legal implications in that regard, in terms of recording instances of abuse—and an interviewer can legally come into harm’s way if there isn’t some provision made for developing a gap or a fail-safe so that person won’t be subject to litigation. The conventional, simplistic solution won’t solve that problem, so I may have to build in a capacity for collecting information that will complicate matters.

My point here is that to extract and produce good data—as OMB wants us to do—we in some instances have to use extraordinary means. That introduces complexity, and that complexity in turn introduces complexity in the technology needed to get the job done.

PARTICIPANT: [inaudible until microphone is obtained] In the academic world of software engineering and computer science, there’s very little emphasis on things like maintenance, support, and documentation. And everybody who has gone from teaching computer science to being VP of engineering or something—which is my route—discovers this, and you kind of look back and say, “Gee, I wish I had had a course that taught about documentation.” And I don’t know how to get that back into play; it’s a general problem—we seem to have a mismatch on that—and we see it there, too.

PARTICIPANT: I think that the idea is to produce usable data, and “usable” is the thing that I keep coming back to; I see it in a slightly different way, coming from the archival community. And this relates to some things we’ve heard today. There are some standard ways of producing something that describes what you’ve done; there are emerging standards, and there are standards already in existence. I see another area, and that’s the area of cognition—one of the steps in helping people understand something that is relatively complex … This happened when working with Pat, when the SIPP data first came out: it was a very complex undertaking with a new analytical technique. But we also had to get together and ask, well, what do people need in order to figure out how to use this data? One of the things the conference organizers told me we would come up with is that, in terms of Web searching and such, it is necessary to sit down and think about what the products are and what people need—not only at the organizational level and the level of the files, but also at the cognitive level—so that we understand what is going on in the survey. [inaudible]

COUPER: Well, I think that we might be nearing the end … I see you all eyeing the lunch boxes. [laughter] And I can understand how exciting lunch might look now. So Dan’s going to tell us how to do this.

CORK: There’s not much more to say … lunch is deliberately informal today. We’d like you to stick around to chat, to converse. The formal part of the workshop is all done. I hope you enjoy the lunch. Thanks to all of you for coming, and thanks again to all of our speakers.
