Bailey (2011) discussed the Nielsen Life360 program, which uses a “digital ethnography” approach to measure attitudes, preferences, and behaviors of the targeted population using mobile phone surveys, photography, Internet-based journals, video cameras, and Web surveys. A specially equipped smartphone prompts respondents to complete a short survey on an hourly basis and to capture an image with the built-in camera, documenting their surroundings and activities in real time.
Summary of the Workshop
The panel found the workshop presentations to be highly informative and to provide important input into panel deliberations. A number of key points emerged from the prepared remarks that discussants delivered during the workshop, as amplified in subsequent discussion among panelists.
First, the international comparisons demonstrate that concerns about data quality and burden that have led to the need for a redesign of the CE are not unique to U.S. data collection efforts, although the size of and variability among the U.S. population present particular challenges. The alternate methods that the panel observed from other countries made clear that a bounding interview is not a universal method, and that it is plausible to rethink this aspect of CE administration. It was also clear to the panel that, although the methods and approaches from other nations have many strengths, they also have their own challenges, and simple wholesale adoption of those methods is unlikely to be a panacea for improving the CE.
Second, adding new modes of data collection needs to be done thoughtfully, attending carefully to whether adding new modes or providing respondents with a choice of mode increases data quality, reduces respondent burden, or reduces nonresponse sufficiently to be worth the design, operational, and analytic costs. As the Smyth presentation in session 2 illustrated, the scientific community does not yet fully understand why particular modes work for different respondents.
Third, while it is quite attractive to consider replacing or supplementing respondent-reported data with data from other sources (administrative records, data from other surveys) to reduce respondent burden and administrative costs, this is not as straightforward an enterprise as it might seem. The hurdles are notable enough—from mode and questionnaire differences to sampling and weighting incompatibilities, privacy and confidentiality concerns, linkage difficulties, increased agency effort, data-sharing difficulties, and uncertain costs—that it does not seem plausible to the panel that alternate sources could suffice in the short term. There is also considerable concern about whether external data would be consistently available over time.
Fourth, the panel was impressed by efforts in other U.S. surveys to