Position Papers Submitted by
Research On Information
Graduate School and University Center, City University of New York
A few comments about past impact research are appropriate at the outset, because there are some important issues that ought to be considered before moving on to new topics and research priorities. We made some intellectual mistakes in the past that we can avoid repeating in the future. Since some of the workshop participants are new to this area of study, I think it would be helpful to lay out some of these issues.
Because technological change in information technology applications has been so rapid during the last 25 years, there has been a constant temptation to turn away from studies of current outcomes of existing information technologies and instead turn toward a kind of futurology or speculative stance about what might be the case in the future. Examples are found in the agenda for this workshop, where topics are articulated such as: "How will the nature of the employment contract change? … What will be the impact on K-12?" The future is important, and these kinds of questions are valid, but this stance has had some unfortunate implications for impact research in the past. Among these are the following:
• A tendency to support the development of theoretical models to predict what will be or might be the case, rather than pursue empirical studies of what actually is happening now. Since theorizing tends to be cheaper than data collection, this has tended to skew funding toward the former, and has often given this field a rather speculative feel. But speculation, even by very smart people, has often been far off the mark.
• A tendency to fund studies of "cutting-edge" applications, which tend to be located in large, dynamic (and resource-rich) firms, or superior schools, rather than looking at the kind of "ordinary" IT that is in place in average workplaces and ordinary schools; the point being that what one observes in the largest, most resource-rich, and most committed settings is not a good predictor of the typical effects of a technology in the larger world. (In the field of program evaluation there is a parallel phenomenon known as the demonstration effect: innovative programs are shown to work well initially in well-supported demonstration projects but then prove much less effective when widely implemented in more
ordinary settings.) Studying cutting-edge applications in cutting-edge firms, schools, or universities is fascinating, but this is not the most rigorous approach to understanding how technological change is affecting the larger population of organizations and people. It tends to result in unrealistic scenarios.
• A tendency to direct money into prototyping new applications, and to rely on the authors of the prototype to do the impact assessment or performance evaluation themselves or not do an evaluation at all. This has occurred all over the "computer assisted cooperative work" field, as well as in the field of educational software. It is naive to expect that people who toil over developing new technology can provide an objective assessment of its performance, yet this approach dominates research. There is no equivalent in IT research of the "double blind" study, and replication is rare. Perhaps this is one arena in which engineers and social scientists could collaborate: if federal funders insisted that all prototype and development projects include an arm's-length performance assessment, this would be a major step forward.
• A tendency to discount findings that demonstrate negative or null impacts of IT as being intellectually uninteresting on the grounds that such impacts simply reflect early versions, or start-up problems, which will disappear when the next generation of machinery or software comes online. Studies indicate that there are large discontinuance/abandonment/non-use rates for important and much-hyped IT products. (Examples are Kemerer's recent studies of abandonment of fourth-generation software development tools and Detroit's ripping out advanced automated manufacturing from some plants.) Users of "what if" decision-tools have been found to use them mechanistically even when shown that they are producing inferior decisions. Computer searches of databases using current methods have been shown to generate large numbers of bad hits, and also to miss large numbers of relevant items.
If IT impact research were a normal social science discipline, such striking findings would be viewed as important scientific puzzles, unleashing a stream of follow-up research seeking insights into human-computer interactions implied by these failures. By and large this has not occurred, because of a widespread mentality that says that any problems with IT that impact studies unearth are simply minor implementation issues and will be overcome by the next generation of technology. This mentality reminds me of the anthropologist Evans-Pritchard's studies of Azande witchcraft. Whenever he contrived to show African believers in witchcraft that casting a spell/curse on someone did not work in a concrete case, the believers were unshaken, retorting that of course witchcraft worked, but that the spell had been performed poorly in this case. IT does work, but impact research should spend much more time looking at the many settings in which it works very differently than intended, and should mine these cases, as well as the successes, in order to understand the full picture.
• One of the most common findings in prior IT impact studies has been that outcomes are far from uniform across all kinds of settings and contexts. In earlier years we looked for the impact of IT on (say) organizational centralization, and scholars tended to hew to one end or the other of a bipolar spectrum: centralization versus decentralization; upskilling versus deskilling; job destroying versus job creating. What scholars found, in almost every case, was that this was an unproductive way to conceptualize the issue. One almost always found evidence of both extremes of outcomes/impacts as well as many points in between (see Attewell and Rule, 1989). We finally realized that we were asking the wrong question. We should have asked, In what contexts does outcome A typically predominate, and in what contexts does outcome B tend to prevail, and when does one see A and B in equal measure? We found that a technology does not usually have a single, uniform impact. The context or setting in which the same technology is used often produces strikingly different "impacts." This phenomenon has been discussed in terms of "Web Models" (Kling), or "structural contingency theory" (Attewell), or Robey's "Plus Ça Change" model. All imply that we must fully appreciate the role of context in technology outcomes, and therefore expend sufficient research effort to measure the context and to delineate its interactions with the technology. If we fail to do this, we return to the old "black box" paradigm, that is, attempting to measure only the input (say, a particular software program) and the outcome (say, kids' test scores) without bothering with the context (the classroom, the kids' family backgrounds) or the causal mechanisms. Black box research on impacts often discovered "inconsistent" outcomes across studies but proved unable to show why there was so much variation, because it neglected to measure the contextual variables that were moderating the effects of the input upon the output.
For example, the old paradigm would phrase a research question so as to ask whether or not home PCs would improve kids' school performance. In contrast, research within the current contextual paradigm would ask under what conditions having PCs at home affects students' school outcomes. A piece of my own work has indicated, for example, that having a home PC currently has a minimal effect on the school performance scores of poor and minority kids but is associated with substantial positive effects on the school performance of kids with high socioeconomic status (SES), when other factors are controlled for (Attewell and Battle, 1997). Race and class/SES, in this example, prove to be very important contextual features moderating the impact of home PCs on school performance.
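In statistical terms, the contextual paradigm amounts to testing for an interaction (moderation) effect rather than a main effect alone. The following sketch is purely illustrative: the variable names and simulated numbers are hypothetical, constructed so that the home-PC "effect" exists only for high-SES students, as in the pattern described above.

```python
import numpy as np

# Hypothetical illustration: does the effect of a home PC on test scores
# depend on socioeconomic status (SES)? Simulate data in which the PC
# effect exists only for high-SES students (an interaction, not a main effect).
rng = np.random.default_rng(0)
n = 2000
ses = rng.integers(0, 2, n)      # 0 = low SES, 1 = high SES
pc = rng.integers(0, 2, n)       # 0 = no home PC, 1 = home PC
score = 50 + 5 * ses + 0 * pc + 4 * (pc * ses) + rng.normal(0, 5, n)

# OLS with an explicit interaction term: score ~ ses + pc + ses:pc
X = np.column_stack([np.ones(n), ses, pc, pc * ses])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
for name, b in zip(["intercept", "ses", "pc", "pc_x_ses"], beta):
    print(f"{name:10s} {b:6.2f}")
# The pc coefficient alone is near zero; the pc_x_ses coefficient carries
# the payoff. A "black box" model omitting the interaction would report
# a misleading averaged effect of home PCs.
```

A model that regressed test scores on PC ownership alone would average the two groups and conclude, wrongly, that PCs matter little for anyone.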
• Workshop organizers should be aware that because of the last three decades of research and the importance of context as discussed above, many distinguished scholars of technology avoid the term "technology impact." Using this term in framing the question would be viewed by some of them as indicating an ignorance of the body of scholarship in technology studies. For them, the term "impact" connotes a kind of technological determinism that is very dated and
widely discredited. Personally, I am not so averse to the term "impact," but I do agree with their larger point about avoiding models based on simple technological determinism.
Distilling these arguments into positive recommendations: (1) future research should pursue empirical studies of existing technologies in real settings, as distinct from speculative or purely theoretical exercises; (2) care should be taken to include representative organizations/settings, not just cutting-edge or high-tech ones; (3) studies of unintended consequences of IT, such as failures and discontinuance, are important for what they tell us about these technologies and about the process of change more generally. Researchers should be interested in the full range of "impacts," intended and unintended; (4) projects aimed at developing technology prototypes should routinely include a performance assessment or evaluation, and the latter should be conducted at arm's length from the former; (5) contextual variables should be studied rigorously, and their moderating effects on technology outcomes should be a major part of inquiry; (6) we should reconceptualize what we are doing as social and economic studies of computing and communications technologies rather than technology impact studies, and try to avoid technological determinism.
To move to the request about specific areas for research, here are some suggestions:
1. The "productivity paradox," in my opinion, remains an important and unresolved issue. However, I suggest that we should move beyond dichotomous thinking (Does information technology have a payoff, or not?) and ask, In what areas/applications/settings do we see payoffs and in what areas don't we, and why? What mechanisms can be identified that attenuate potential payoffs, and how do we measure them? What interactions and contexts explain variation in productivity outcomes?
2. Skills. There is anecdotal evidence that the range in performance levels in computer-related work is greater than that found in noncomputerized tasks. In other words, the gap between skilled and mediocre users is larger in computer-related work. This suggests that skills in computer work are less well diffused, or are shared less, than in other kinds of tasks. We need research on what constitutes skilled versus unskilled performance in computer work of various kinds, and a better understanding of why so many of us make mediocre use of these tools.
3. Teenagers. I suspect that personal computers are changing the lives of teenagers more than most other age cohorts, and that is both an opportunity and a concern. Computerized communication affords powerful opportunities for social affiliation (e.g., Sproull and Faraj, 1995) and for playing with identity, both preoccupations of adolescence. There have already been studies that suggest that teenagers are spending less time watching TV and more on the Web. There are a
host of policy issues surrounding their use. But our knowledge base of how young people are using the Web, and what they are getting out of it, is too sparse.
4. Education. As a researcher I find the literature on educational computing quite maddening. There are exciting claims of accelerated learning using computerized tools, but the research rarely gets replicated, and even the most lauded programs (e.g., the algebra tutor at Carnegie Mellon) never seem to cross into public use, in part because these prototypes are built on UNIX platforms in esoteric languages. As a result the field does not progress in a cumulative manner. There is clearly room for a serious review and analysis of the state of the art in educational software, and for research on the barriers to future progress of IT in education. Universal access to the Web is the only area I know that has received systematic treatment.
What If All Information Were Available To All?
Department of Economics, University of California, Berkeley
Rapid improvements in information technology raise two grand issues. First, are we moving toward a world in which, to a reasonable approximation, all "information" (not, of course, the same as knowledge) is readily available to all, or are there major obstacles in the way that may prevent us from getting to that point? For instance, is there no such thing as "all information" relevant to a particular topic? Are standards problems, intellectual property rights problems, database search limitations, or other issues likely to bound us well away from that "all information available" state?
Second, if we do get to that state, what will it be like? Much of today's employment consists of clumsily dealing with information. Will the demands of more information be greater or less? If the problem gets "solved" rather than just increasingly addressed, what are the other main things that need to be done in an advanced society; in other words, what will today's information manipulators do instead?
Critical Issues Relating To Impacts Of Information Technology: Areas For Future Research And Discussion
Santa Clara University
There are several key issues that concern me as a scholar. First, as an economic historian and as someone who looks retrospectively as well as prospectively, I believe we face a major issue involving the archiving of data. There are two main issues. People say of magnetic media that they last 5 years or until they wear out, whichever comes first. That is probably a bit pessimistic. But even if the media persist, what about the input-output devices? It is getting more and more difficult to find a 5.25-inch drive, and woe to him or her who has data on 8-inch floppies! Tape backups are sometimes even worse. New backup software is sometimes not backwards compatible, so that one needs old copies of backup software as well as a compatible tape drive in order to restore data. We need mechanisms to ensure the retrievability of records that previously would have been stored as printed records. This issue is at least as important for individual records (both personal and professional) as for those pertaining to the corporation or organization as a separate legal entity. Whatever media are used, we need them to be at least as durable and stable as microfilm. Ideally these media should be relatively inexpensive, and equipment to read and/or write on them should be standardized and widely accessible. Will individual and private enterprise be sufficient to ensure retrievability? Is there an externality in terms of ensuring access that would warrant government subsidy or intervention in this area, perhaps as part of the activity of the National Institute of Standards and Technology?
A second issue: As a scholar I look forward to tremendous opportunities in terms of the archiving of old journal runs. This has an enormous potential capital savings impact (consider the linear feet of bookshelves in faculty offices that might be liberated). I look forward to being able to access 100-year runs of journals such as the American Economic Review, from CD-ROM or over a network through software such as Adobe Acrobat (thus text is searchable). I see this as less important for books and monographs (where being able to read through an entire volume, which presumably has some coherence, will still be desirable). Nevertheless, the ability to search the text of scholarly monographs would be useful. The cost of converting newly published material to this form will be small, since most of it now exists in machine-readable form before it goes to be typeset. The real challenge will be older works. There is a potential for enormous
efficiencies here in terms of research libraries and scholarly research. But who will pay? Is there a role for the Library of Congress? Can we get to the point that interlibrary loan involves the simple downloading of a large file? Will scholars assemble libraries of CD-ROMs attached to personal computers? Will they invest in jukeboxes so that the disks are available and retrievable when needed? (CD-ROMs can be as inconvenient as the computers were prior to the advent of hard disks: one can never seem to find the disks when one needs them, and their smaller size renders them more vulnerable to misplacement than books.) Or will the material be available through servers in libraries or over commercial networks? Obviously, copyright issues are relevant for recently published works, but I am interested in materials for which copyright is no longer relevant. How will this affect the publishing business?
Finally, let me comment on ways in which new instructional technologies will affect the craft of teaching. I believe firmly that advances in information technology will play an important role in complementing rather than eliminating traditional classroom instruction. The advent of television and the video tape recorder were both heralded as sounding the death knell of traditional instruction. There is no evidence that this has occurred, nor that recent advances will have this effect either, any more than computers have eliminated the use of paper or videoconferencing facilities have spelled the demise of the 747. The effective instructor acts in a complex mixture of roles. In one role the instructor is a supplier of services to students (particularly when they are enrolled in course work beyond the age of compulsory schooling laws). In terms of this relationship students are in a real sense customers. But the effective instructor occupies another role as well: that of, in a sense, a supervisor of students, who plays a role in motivating, encouraging, evaluating, and developing students that is totally foreign to the service provider-customer model. For any topic there will always be a small percentage of prospective students with the necessary background, motivation, and self-discipline to learn from self-paced workbooks or computer-assisted instruction. For the majority of students, however, the presence of a live instructor will, in my view, continue to be far more effective than a computer-assisted counterpart in facilitating positive educational outcomes, just as for most work relationships, a live supervisor is going to be more effective than a computer replacement.
The most important impact of information technology will likely occur in increasing the productivity of the hours students spend outside of the classroom. Several years ago many universities, including my own, built computer classrooms with networked computers for every one or two students. While these have proved effective for training in the use of various kinds of software, in most cases they proved disastrous for standard classroom instruction. The computers created line-of-sight obstacles between the instructor and students, and students could sometimes not resist the temptation to play computer games during class time. In some instances such labs have been ripped out. Nor am I persuaded that
the increasing use of presentation software on average improves the efficacy of classroom communication. The dimming of lights and the focusing of attention on an overhead screen distract attention from the facial expression and body language of the instructor, sacrificing two of the most powerful benefits of live instruction. Expensive overhead cameras that convert documents to a video feed currently have lower resolution than standard overhead projectors.
The greatest potential for new information technology lies in improving the productivity of time spent outside the classroom. The norm of accrediting agencies is 2 hours' outside work for 1 hour in class. Making syllabi, solutions to problem sets, and, where copyright law will permit, assigned reading materials available on an inter- or intranet offers tremendous convenience. E-mail and more sophisticated groupware vastly simplify communication between students and faculty and among students who may be engaged in group projects and face enormous logistical challenges in setting up group meeting times.
Department of Sociology, University of California, Berkeley
Although not currently studying computing and communications, I nonetheless have several observations to make based, in some measure, on my past studies in the social history of technology. Most generally, I suggest caution. The major risk is to be carried away by the exaggerated claims about the consequences of computer-mediated communications (CMC). It is especially a risk for those of us who both heavily use and are fascinated by CMC. We should keep in mind that big devices can have small effects; that the effects of a technology can be contradictory and even self-canceling; that the extent of diffusion does not necessarily demonstrate social significance (cf. the VCR and the ATM, both nearly universal, with the cotton gin and the "pill"); and that the effects of a technology can be substantial but only in a specific section of society, such as the white-collar workplace.
Thinking about these issues requires focus and distinctions. One set of distinctions concerns the subject. "Information technology" is too large a field, including, among other items, television and photocopying. Clearly the interest is in the consequences of CMC, particularly e-mail and the Internet. Is that all? On the effects side, it would help to distinguish at least three contexts: (1) commerce and the workplace; (2) public institutions, such as government and schools; and (3) private life, including families and other social networks. (Some might suggest an additional realm: the psychological.) Not only are the dynamics likely to be different in each sphere, but so also are the quantity and nature of the available data. It is much easier to find out how marketing and job creation are affected by CMC than it is to find out how kin ties or neighborhood dynamics are affected. My own focus is on the third context.
What do we know about the social consequences, institutional or private, of CMC? My impression is: not much. We have some crude estimates of computer diffusion in U.S. households, by key demographic categories, but we know little about who uses CMC and for what, and probably know nothing about the implications of that CMC use. (Even our knowledge of who uses the household telephone, why, and with what end, is crude.) Perhaps some of that basic information is available in proprietary sources, and perhaps those sources could be opened to researchers. Much of the published research that I have seen tends to be small scale, focused on select groups, and often of marginal quality. In any
case, we seem to have little that explains the who, what, when, where, why, and how of domestic CMC use. And even answers to these questions, as I cautioned above, may not tell us the answer to the key question: So what?
What do we need to know about (in the private sphere)? And how would we find out? We need to know more accurately and in greater detail who uses CMCs for nonwork purposes, how often, with whom or what, to what ends, and why; and conversely, who does not. Beyond those basics, several bigger questions have been raised, for example:
• Does use of CMC significantly affect use of other media? What is "displaced" by CMC? (Is, for instance, entertainment or social interaction displaced?) More generally, does CMC use significantly affect time budgets?
• Does CMC use significantly affect spatial activities? Does it, for instance, replace some number of trips? If so, which sorts of trips?
• Does CMC use significantly affect personal social networks? Are some social relationships developed? Some reinforced? Some ignored? Some dropped? (E.g., do CMC users shift some attention from, say, family, to distant friends?)
These types of questions might be answered with high-quality surveys and intensive ethnographies of individuals. Ideally, one might even design field experiments on some of these topics.
While important, the answers to these questions do not necessarily tell us what the aggregate, social effects of CMC are. Understanding these effects involves broader-scale and more difficult questions, such as the following:
• Does the diffusion of CMC significantly affect the spatial pattern of towns? (Are we ever going to get the dispersed world of telecommuting, first predicted in 1893?)
• Does the diffusion of CMC significantly affect subcultural segmentation? Does it contribute significantly to the formation and sustenance of specialized "social worlds," marked, for example, by "niche" magazines?
• Does the diffusion of CMC significantly affect political mobilization?
These types of questions cannot be easily answered by simply looking at individual use; these concern macro- or aggregate effects. Here, we need complex longitudinal and/or comparative studies of institutions or communities or even nations as CMC diffuses within them. (There are a couple of examples in the study of television.)
When journalists ask me, because I wrote a book on the early social history of the telephone, to comment on the effects of the Internet and such, I usually demur. Ask me after the dust settles, I often reply; it's too soon to tell now. But the policy challenge is to estimate where this CMC "football" is going to land even while it is still bouncing around on the field.
Impacts Of Information Technology: Behaviors And Metrics
Corporation for National Research Initiatives
NOTE: The opinions and views expressed herein are those of the author and do not reflect those of the Corporation for National Research Initiatives.
Barriers to the diffusion of information technologies and their commercial application are many: resolution of ambiguities in intellectual property and the relative importance of patent, copyright, and contract law; development of a viable financial model or models; and appropriate contexts for deploying technologies that provide security, afford adequate protection of personal privacy, and offer reasonable protection of free speech. Disputes and controversies that surround these issues make assumptions, usually silently, about who is doing what with information technologies. We appear to be in a phase of technology push more than demand pull. But whether we believe that technology is technologically, economically, or socially constructed, it is generally the business of the social and behavioral sciences to understand the context in which a technology or set of technologies is thrust.
The behavioral sciences, which span everything from history to social psychology, rely on observation of behavior, whether embodied in dusty census records or recorded by telemarketers. The implications of this type of research in the networked information environment are potentially substantial. The mental model, which assumes that the audience can be characterized statistically and in some sense commodified, represents a (perhaps dubious) achievement of radio, where ways to measure and characterize the audience were developed. These methodologies subsequently migrated into television. Conclusions drawn from this research are embedded in programming, commercial, and regulatory decisions.
These methods are hardly without controversy in broadcast communications, and their utility in the networked information environment is questionable. The simplest example is the problem of inferring the numbers of users. All too frequently, the interpretation of server logs conflates usage with users, i.e., it makes one file request the equivalent of one reader. But national caching as a way of improving network performance means that readership in Australia, for example, is greatly underreported in the server logs of the home directory. Proxy
servers and bootleg mirror sites compound the problem, as do printing and hand-to-hand sharing of information, whether in hard copy or on disks. There are a number of studies that are experimenting with ways around these issues (e.g., Carnegie Mellon University's HomeNet Project, which basically controls sample size and composition), but the issue of method seems to be one in which future research is warranted; that is, research projects in which method is the focus of the research and not incidental to it. ("Cookies," code that resides on a user's computer and is launched by the server when the user requests a file, have been proposed as a way around this problem. But the strategy is not without controversy.)
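The request-versus-reader conflation can be made concrete with a toy example. The log entries, client addresses, and file names below are entirely hypothetical; the point is only that a naive hit count, a deduplicated client count, and actual readership are three different numbers.

```python
# Toy illustration of why raw server-log hits overstate, and caches
# understate, readership. Entries are (client_address, requested_file);
# all addresses and files here are made up.
log = [
    ("10.0.0.1", "/paper.html"),
    ("10.0.0.1", "/paper.html"),          # the same reader reloading
    ("10.0.0.2", "/paper.html"),
    ("proxy.example.au", "/paper.html"),  # a national cache: many readers, one hit
]

hits = len(log)                           # naive: one request == one reader
clients = len({addr for addr, _ in log})  # dedupe by client address

print(f"requests: {hits}, distinct clients: {clients}")
# requests: 4, distinct clients: 3 -- yet the proxy line may stand in for
# hundreds of readers the log cannot see, and neither number measures
# whether anyone actually read the file.
```

Deduplication corrects the overcount from reloads but does nothing about the undercount behind caches and proxies, which is the methodological gap the HomeNet-style panel designs try to close by controlling the sample directly.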
Accurate demographic characterizations are one dimension of use. Another is the cluster of issues typically subsumed under human-computer interaction, which is the subject of research at the University of Maryland's Human-Computer Interaction Laboratory, the University of Michigan, and elsewhere. Many of these research efforts use the library context as an experimental setting and rely on two research traditions: (1) observation of users and (2) a pair of information retrieval metrics: precision and recall. The former is typically limited to small samples and is vulnerable to oversimplification of the research design. The latter (precision and recall) were invented to assess the adequacy of indexing schemes and were subsequently adopted by researchers to evaluate searching behavior, where the assumption was one of batch processing and the notion of iterative searching and query refinement did not exist. Neither metric considers user satisfaction; both measure instead the relationship between what was found and what was available to be found.
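For reference, the two metrics are simple set ratios over the documents retrieved by a search and the documents actually relevant to the query. A minimal sketch, with made-up document IDs:

```python
# Precision and recall for a single search, computed from the retrieved
# set and the (known) relevant set. The document IDs are made up.
retrieved = {"d1", "d2", "d3", "d4", "d5"}
relevant = {"d2", "d4", "d6", "d7"}

true_hits = retrieved & relevant
precision = len(true_hits) / len(retrieved)  # fraction of what was found that matters
recall = len(true_hits) / len(relevant)      # fraction of what matters that was found

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# precision = 0.40, recall = 0.50 -- note that neither number says whether
# the searcher was satisfied, only how the found set overlaps the findable set.
```

Both metrics also presuppose a fixed, known relevant set, which is exactly the batch-processing assumption that iterative Web searching violates.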
In March of this year, Ron Larsen of DARPA called for developing new metrics to replace precision and recall, but more broadly, we have to ask how far the library paradigm can be pushed. That is, many information-seeking behaviors are captured in the way that people use libraries, both real and virtual. But the leap from the local reference desk to Yahoo or Excite deserves to be questioned, and the range of information-seeking behaviors requires attention. There is a clear bias toward research in academic settings, where so-called "experts" hold doctorates and "novices" are undergraduates. It is intuitively obvious that the way a physicist does a literature search is different from the way I might look for information on flight schedules. IBM has taken the approach that technologies appropriate for libraries will successfully migrate to other structured settings, notably corporations, where the economic payoffs are potentially substantial. Nevertheless, in order for applications in the networked information environment to accommodate the variation in users and in uses (which is appreciated but not fully understood), much more research is required on information-seeking behaviors outside formal library and/or academic settings.
There is much in the future of advanced information technologies that will probably turn out to be familiar, including the need to understand the texture of use, whether we adopt a model of technology push or demand pull. However, the
tools by which we will do that as well as the shape and form of products and services to be offered are likely to change. In the near term, this will closely resemble contemporary research that evaluates users' demand for and satisfaction with products and services. But the specifics will change as content and applications evolve in ways we have yet to imagine.
Five Critical Issues Relating To
School of Law, University of Miami
The Argus State?
Decreases in the cost of video, audio, and other sensor technology, as well as cheaper data storage and information processing, make it likely that it will become practicable for both governments and private data-mining enterprises to collect enormously detailed dossiers on all citizens. This prospect raises a host of issues requiring research and debate. Among them:
• Who currently collects what data about individuals? How is it used? How is it shared? What are the trends?
• What are the existing default rules in different jurisdictions relating to the collection of information? Does the nature of default rules meaningfully alter outcomes? Do prohibitions on data collection (e.g., data protection laws), for example, affect outcomes? To what extent are existing rules vulnerable to foreign "data havens" and other regulatory arbitrage? To what extent do/will consumers choose alternatives to the default rules when such an option is available?
• What are the possible political, social, and economic consequences of extensive individual profiling? Is extensive profiling likely? Is the absence of a great deal of the privacy now taken for granted compatible with freedom? What difference does it make if the profiling is undertaken by (or available to) democratic governments? Non-democratic governments? Private industry? What would the economic and social consequences be of making profiling data available to some? To all? At a cost? At no cost? Would it be socially valuable to prohibit the creation of individualized dossiers? In an era of distributed databases, would it be technically practical to enforce such a prohibition?
• To what extent do different types of electronic cash and electronic commerce enable or disable profiling? To what extent do concerns about the control of electronic money laundering imply the power to restrict free speech or anonymous commerce? To what extent does the protection of free speech and a private social and economic space require the protection of anonymous speech and/or anonymous commerce? What are the current national policies regarding anonymous speech and commerce? In a networked world, what are the external and extraterritorial effects of one nation's policies regarding anonymous speech and commerce?
Legal Issues Affecting Digital Commerce
• Non-repudiation. How to find an accommodation between the stated requirement that (at least some) commerce based on certificate-verified digital signatures be non-repudiable (the X.509 tradition) and the traditional norms of consumer law in most countries, which are designed to protect consumers from themselves as well as from others (e.g., U.S. rules on credit card misuse).
• Choice of law issues. As it becomes clearer in at least some jurisdictions which domestic rules apply to certificates, digital signatures, and electronic commerce generally, the issue of selecting among, or mediating between, possibly conflicting rules in the various states and countries that could be associated with the transaction will become inescapable. A first step toward resolving these issues would be to undertake a considerable project of description, one that would look not only at the applicable substantive law, but also at the diverse choice-of-law rules that states might apply to transjurisdictional electronic commerce. With this in hand, it would be possible to identify more clearly the extent to which e-commerce actually contributes to "regulatory arbitrage" and the extent to which it merely replicates and expands existing practices.
• Certification authority policy issues.
It would be interesting to survey the content of existing certification authority (CA) policies (and background law) and especially to track them over time: Are they converging? Are they stratifying by the quality of assurances offered to clients and relying parties (i.e., "race to the bottom," "struggle to the top," or "product differentiation")? These data would inform any discussion of the regulation of CAs, as well as the debate over efforts to harmonize international standards.
One could also explore whether it is possible to design a standard semantics of CA policies (and perforce of applicable background national legislation) that could form the basis for users to set up rule-based decision making that could be built into e-commerce software. This software would allow the user to define properties that a certificate must have before it would be accepted. This would be an interesting case study of the potential for technological solutions to reduce (although they could not eliminate) the need for legal services, because currently users must make each decision manually for every certificate, either on advice of counsel or, if this is impractical for cost reasons, then based on the reputation of the CA.
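As a sketch of what such rule-based certificate acceptance might look like, the toy code below lets a user declare the properties a certificate must have before it is accepted. The policy fields (assurance level, liability cap, jurisdiction) are hypothetical stand-ins for a standardized semantics of CA policies; no real CA policy format is assumed.

```python
# A toy sketch of rule-based certificate acceptance. The fields
# "assurance_level", "liability_cap", and "jurisdiction" are invented
# for illustration, standing in for a standardized CA-policy semantics.

def make_policy(min_assurance, min_liability_cap, allowed_jurisdictions):
    """Return a predicate that accepts or rejects a certificate
    according to user-defined requirements."""
    def accept(cert):
        return (cert["assurance_level"] >= min_assurance
                and cert["liability_cap"] >= min_liability_cap
                and cert["jurisdiction"] in allowed_jurisdictions)
    return accept

policy = make_policy(min_assurance=2,
                     min_liability_cap=10_000,
                     allowed_jurisdictions={"US", "DE"})

# This certificate satisfies every requirement, so it is accepted
# without a manual, per-certificate legal judgment.
policy({"assurance_level": 3, "liability_cap": 50_000, "jurisdiction": "US"})
```

The point of the case study would be precisely this substitution: once the policy semantics are standardized, the per-certificate legal judgment collapses into a one-time configuration decision.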
The Economics of Trust and Reputation
It would be interesting and useful to know more about the psychology and economics of trust and reputation, between individuals and also between persons and institutions. Trust and reputation appear to be integral to the usefulness of networked communications between strangers. In an era of information overload
and rapidly growing numbers of Web pages of indeterminate quality, does the future belong to editors and other reputation-enhancing quality certification organizations? This issue has obvious implications for news and other current information. But it also applies to many other things, such as education, in which one can imagine "Internet model" degree-granting institutions that certify the quality of various distance learning courses, and attest to the rigor of exams or administer exams directly.
Interesting questions for both empirical and theoretical study include, To what extent can "trust" meaningfully be transferred? Can transitive trust be modeled? If A trusts B, to what extent does B's assertion that C is trustworthy actually induce A to trust C? How much (and in what circumstances) should A trust C? What indicia other than a naked statement (or certificate) could/should B offer A regarding C? The answer presumably varies enormously with the context: different assurances are required regarding, e.g., scientific bona fides, general sincerity, and creditworthiness.
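One simple, and certainly contestable, way to model transitive trust treats trust as a number in [0, 1] and discounts each recommendation by the trust placed in the recommender:

```python
from functools import reduce

# Multiplicative discounting: A's derived trust in the end of a referral
# chain A -> B -> ... -> C is the product of the trust ratings at each
# hop. This assumes trust composes independently along the chain, which
# real trust often does not -- it is one model among many.

def derived_trust(ratings):
    """Trust derived along a referral chain, given the rating at each hop."""
    return reduce(lambda acc, r: acc * r, ratings, 1.0)

# A trusts B at 0.8; B asserts C is trustworthy at 0.9. A's derived
# trust in C is about 0.72 -- less than A's trust in B, reflecting that
# second-hand assurances are weaker than first-hand ones.
derived_trust([0.8, 0.9])
```

Even this crude model makes the text's normative question concrete: whether A *should* discount multiplicatively, additively, or not at all presumably depends on context, as the passage notes.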
Distributional Effects of the Internet on Work and Economy
Although there has been speculation about the distributional effects of the Internet on the labor force and on institutional providers of services, there seems to have been little empirical work. It would be interesting to do sectoral studies, looking at provision of services such as travel services and banking, and the retailing of "commodity" products such as CDs and perhaps a "made to order" product also, in order to determine what effect the Internet has on industrial concentration, and on employment patterns.
Rule Formation in (Partial) "Anarchy"
Until very recently, the Internet has developed its technical standards and social practices without much government intervention. Of course, the Internet developed against an elaborate background of regulation of telecommunications, electricity, and many other things necessary to its operation. Nevertheless, in important ways both the Internet Engineering Task Force (IETF)-based standards process and the "Internet norms" of Usenet, e-mail, and mailing lists evolved in a meaningfully anarchic way. To what extent is the IETF decision-making model transferable to other realms? Can the fundamentally consensus-based model survive the growth and commercialization of the Internet? Is it possible to educate (socialize) large numbers of new users? Does the method produce "better" decisions (and according to what metric)? Is it fairer? Is self-selection a viable method for decisions that do not relate to technical standards, and which in most cases apply to issues that are neither life-threatening nor of widespread salience? What sort of social practices is the vigilante-style Internet method of enforcing social norms suited to? Where is it inappropriate?
Cultural Influences On The Process
Impacts Of Computerization
Center for Social Informatics, Indiana University
Most of the popular, professional, and scholarly literature about computerization treats (1) computer-communication systems (CCS) as tools, (2) their adoption and use as parts of largely rational social processes, and (3) the impacts of CCS use as knowable by examining the social system of CCS users and the technical characteristics of CCS. This kind of conceptualization dates back (at least) to Leavitt and Whisler (1958), who argued that (upward reporting) organizational MIS would lead to a relative thinning out of middle managers and a resulting "hourglass shape" in the structure of organizations. Since the 1960s, the number of popular, professional, and scholarly books and articles about the social consequences of computerization has grown from a thin rivulet (1960s) to a stream (1970s) to a river (1980s) and now to a flood (1990s). The number of academic studies has grown at a modest pace and in the late 1980s and 1990s has been swamped by the volume of professional and popular writing.
In this seemingly vast literature, one can find a number of analytical strategies for understanding the social consequences of computerization. In popular and much of the professional writing it is easier to find elements of technological determinism. But in the academic studies, Leavitt and Whisler's technological determinism is relatively passé (although technological determinism reappears in various guises, such as the "cues filtered out" arguments in Sproull and Kiesler's CMC studies, summarized in their book Connections: New Ways of Working in the Networked Organization (1992)).
In scholarly studies, the main alternatives to technological determinism have been various contingency analyses (with different stances about which social and technological factors are important contingency makers) and nondeterministic social process theories (such as Markus and Robey's (1988) "emergent process theories"). For example, deterministic analysis would hold that documentary networks such as Lotus Notes would encourage the sharing of information within organizations, whereas a contingency analysis would treat the organizational incentives for sharing information within specific groups as an important influence on the ways that Notes would be used in specific organizations.
In addition, some analysts have noted the ways that CCS of a given generic type can be implemented with important technical variations and social conventions
for using them. For example, electronic forums differ in such respects as who has access to them, who controls their content, the nature of documentary archiving, the technological complexity faced by people who use them, and so on.
Most people who use CCS know about their possibilities through a combination of personal experience and accounts in the professional and popular media. When professionals were the primary users of CCS, the roles of computerization movements (Iacono and Kling, 1996) were especially important, although usually overlooked. Today, popular usage alters the social processes by which people come to learn about the possible social roles of new forms of CCS.
It is hard to estimate the role of the mass media in popularizing the Internet and the WWW in 1993-1994; but the enthusiasm of reporters and the sudden broad visibility of URLs in national media in 1994-1995 should not be ignored. However, ideas about the social shaping of CCS and social influences on their use have diffused rather slowly into professional and popular writing.
Many of the questions addressed in this workshop are answered, in part, by beliefs about how the public (including various professionals) will use CCS (and related services). I believe that we should take the popular cultural representations of CCS as serious influences on the ways that people will use these systems, and on their likely social impacts. These representations are not always homogeneous, and they will change over time (for example, from CCS as "giant brains" in the 1950s through "productivity machines" in the early 1980s through "communication media" in the 1990s). However, as vendors develop new technologies and services, the media play important roles in giving them meaning and popularizing them (e.g., how did Java become so popular so fast?).
Questions For Research
Department of Economics, and School of Information, University of Michigan
What is currently known? What questions need to be addressed?
Costs are falling exponentially for technologies built primarily with silicon and sand: computing cycles and bandwidth. The decline in data storage costs would also seem remarkable but for the comparison.
Almost, but not quite the same thing: technological progress in these areas is accelerating. (Possible research question: Is it? How is it measurable?) Ignoring cost, remarkable new things are possible each year. (A thousand IBM 360s connected with RS-232 cables would not a parallel-processing supercomputer have made.)
We have a long history of adapting to falling costs and technological progress. But we are not well adapted to such fast change. In the context of our history and institutions (social, political, cultural, at least), such rapid advancement is deviant. Deviancy threatens existing institutions.
Institutions (conventions; standard practices; social, business, and political norms) evolve to deal with problems that undermine the ideal of a competitive market equilibrium: positive externalities (standardization), public goods (government provision), and transaction costs (default rules, social conventions). But when relative costs and technological opportunities change rapidly, the problems that the institutions solved are no longer the same.
Problems are changing rapidly, but institutions change slowly and reluctantly. New problems, old institutions: things break, or progress is delayed. Examples:
• International spectrum allocation: need for global bandwidth reservations for low earth orbit satellites and other wireless networks.
• Governance of the Internet: need for assignment of domain names and Internet protocol numbers, routing policy, content control.
• International banking and currency control.
• International taxation, currently largely source-based: Where is cyber activity taking place? How easy will income shifting to find a low-tax-rate base become?
• Church, school, and other local community institutions being challenged as core communications channels for shared values, culture, and social norms. Rise of disembodied, asynchronous "community" (e-mail, Usenet, special interest groups). Paradox of improved communications channels increasing balkanization?
So, at least one set of fundamentally important questions for research involves looking beneath specific impacts to uncover the institutional structures, assumptions, and rigidities that are becoming dysfunctional, and then considering how to facilitate the transition to new institutions that are likely to accommodate the effects of exponential decreases in the costs of sand and silicon.
• What government core institutions underlie market interventions, subsidy and tax policies, and trade policies? What educational structures? What legal institutions?
• What do we take for granted about intellectual property (before we get to the question of protection)?
• What mechanisms for establishing trust, evaluating, authenticating, and providing assurance underlie conventional commerce, and how can a system of trust be evolved for electronic commerce?
• What law applies to artificial agents who participate in information exchange? What socially acceptable policies exist for dealing with deadly threats to the public health like outbreaks of Level IV computer viruses (Ebola-PC, Ebola-Mac)?
• What does universal service mean? When should government treat emergent network services with large potential positive network externalities as public goods that should be subsidized?
• Good advice: Assume CPU cycles and bandwidth are free. What then?
What will be useful methods to determine answers to such questions?
The cycle of change strains some traditional methods. It is hard to get data from "natural experiments" on which generalizable hypotheses can be tested. For example, Internet congestion seems to be a problem. Various approaches to allocating scarce, easily congested resources have been proposed, including different types of usage-sensitive pricing. Lots of concern: Will this increase information inequality? Squelch creative explosion of Internet applications? Slow adoption? Chase away independent, voluntary provision of content in exchange for industrialized creation and control of mass-market content? Some fundamental research questions: How much consumer surplus is lost due to congestion? (How much does waiting "hurt"? What applications are we not getting to use because they can't tolerate unpredictable congestion, and how much are those worth to us?) How would different classes of users respond to usage-sensitive pricing (if it constituted a small fraction of their consumption
budget)? Thus, would the benefits (of less congestion in current services, and new services enabled with guaranteed quality of service) outweigh the adverse effects on adoption rate and social externalities of communication, reduced innovation, change in content, and change (not necessarily increase!) in information inequality?
To answer these questions, we might normally run consumer demand studies to estimate user valuation of various service qualities at different prices, looking for natural experiments to assess the value of social externalities.
The problem: no data! And even as data start to become available, the data-generating process is nonstationary (stationarity is a prerequisite for classical statistical estimation and analysis): new services are introduced, users are on a learning curve, participation externalities are riding up the adoption curve. Example: How much do we learn about future Internet demand if we study pre-WWW demand? And if we wait to observe, strong network externalities and resulting standardization may lock us into policies and standardized solutions that are inefficient, inflexible, and limiting (e.g., Wintel architecture; the "mistakes" of QWERTY and VHS standards). The traditional pace of research and institutional adaptation is too slow.
Possible implication: Social science research may need to do more field and lab experimentation, rather than waiting around for the real world to toss up natural experiments.
There may also have to be some merger between traditional social science and engineering methodologies: some attempt to learn from results that are not fully general, developed, and rigorously tested following a modernist hypothesis-testing method. Thus, look to find (and design) systems, policies, and institutions that "just work." Think about how to make them work better, without clinging too tightly to the "optimality" paradigm. Internet litmus test: "running code that works."
Likewise, traditional conceptual structures may need reworking.
Many observers (but not economists, for the most part) have suggested that "traditional economics is dead," that there is a "new" economics of information. Yet the "special" features of information problems are familiar in economics: high fixed costs plus low variable costs, congestion externalities, positive network externalities, and tipping. What may be new is that several of these become simultaneously significant, and for a greater, more essential share of exchange. We are used to thinking of these, and designing policies for them, as special cases.
Nonetheless, we should not blithely discard hard-won principles. For example, some would have it that soon bandwidth will no longer be scarce: it will be infinite (effectively) and free. Not by the laws of physics, of course. Has anything ever become infinite and free? No, just relatively less scarce. It seems still very useful to study the relative scarcity of different resources (silicon, sand, labor, creativity, attention) and to focus on how relative scarcity is changing.
Where the change in scarcity is occurring is where the opportunities and problems lie. The end of scarcity is a red herring.
A few areas on which to focus research:
• Information warfare: survivability of communications networks (civilian as much as military); institutions and policies for response to transnational terrorism and criminality (that uses or attacks information infrastructure);
• Artificial agent economies: how to harness the efficiency, stability and robustness of competitive economies for real-time management and control of complex systems (electric grids, telecommunications networks, smart highways, spread-spectrum bandwidth allocation); and
• Evaluation and social filtering: the economics of attention, trust, and reputation. Funding models for information and information services, and their effect on the creation and distribution of content.
The Internet offers new opportunities both to support and to study interactions among people who do not know each other very well. I believe that recommendations, trust, reputations, and reciprocity will play important roles in such interactions and thus deserve attention from interdisciplinary research teams.
There are interesting topics in all stages of commercial interactions, from search processes to negotiation to consummation of transactions:
• Recommendations and referrals can help people to find interesting information and vendors. There is a need for continued research on techniques for gathering and processing recommendations (this is sometimes called collaborative filtering). Compilation of "grand challenge" data sets of recommendations would help this field advance.
• The structure of negotiation protocols and the availability of information about past behavior of participants will affect the kinds of outcome that are possible. Economists have theoretical results regarding many simplified negotiation scenarios, but there is a need for interdisciplinary research to apply and extend these results to practical problems of protocol design.
• Finally, in the transaction consummation phase, much effort has focused on secure payment systems. Some transactions, however, require a physical consummation (mailing of a product, for example) and hence must rely on trust in some form. Research can explore the role of reputations in creating trustworthy (though not completely secure) contract consummation. Such transactions may also have lower transaction costs than secure payment systems, even in the realm of purely electronic transactions.
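The collaborative filtering technique mentioned in the first bullet can be sketched in a few lines: predict a user's unknown rating from the ratings of like-minded users, weighted by taste similarity. The ratings data below are invented for illustration.

```python
import math

# A minimal user-based collaborative filtering sketch. Real systems use
# far larger matrices and more careful similarity and normalization
# choices; this shows only the core weighted-average idea.

def cosine_sim(a, b):
    """Cosine similarity between two users over their co-rated items."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = math.sqrt(sum(a[i] ** 2 for i in common))
    norm_b = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def predict(ratings, user, item):
    """Predict user's rating of item as a similarity-weighted average
    of the ratings given by other users who have rated the item."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        sim = cosine_sim(ratings[user], theirs)
        num += sim * theirs[item]
        den += abs(sim)
    return num / den if den else None

ratings = {
    "ann": {"A": 5, "B": 3},
    "bob": {"A": 4, "B": 3, "C": 4},
    "eve": {"A": 1, "B": 5, "C": 2},
}
# Ann's tastes resemble bob's more than eve's, so the prediction for
# item C lands closer to bob's 4 than to eve's 2.
predict(ratings, "ann", "C")
```

The "grand challenge" data sets called for above would supply exactly the kind of large, shared `ratings` matrix against which competing similarity and prediction schemes could be benchmarked.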
Noncommercial electronic interactions also offer many interesting opportunities. Electronically mediated interactions are visible and available for analysis in a way that face-to-face interactions typically are not. For example, "softbots" could scour the Web to create various graphs of relations between people and information resources. Social network theorists have already devised a number of techniques for analyzing such graphs. One possible application would be to hypothesize about and then analyze the credibility of information sources in
various parts of a social network. Another possible application of network analysis would be to analyze the flow of reciprocity (or gift exchange, as Esther Dyson put it) and perhaps devise ways to increase a social network's level of reciprocity.
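As one concrete instance of such network analysis, a crude measure of a network's level of reciprocity is the fraction of directed "gift" edges that are returned. The exchange graph below is hypothetical.

```python
# Reciprocity of a directed graph: the share of edges (u, v) for which
# the reverse edge (v, u) also exists. A rough proxy for the level of
# gift exchange in a network; social network analysis offers many
# richer measures.

def reciprocity(edges):
    """edges -- iterable of (giver, receiver) pairs."""
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    mutual = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return mutual / len(edge_set)

# Hypothetical exchange graph: ann helps bob and eve; only bob
# reciprocates, so 2 of the 3 edges are mutual.
reciprocity([("ann", "bob"), ("bob", "ann"), ("ann", "eve")])
```

Tracking such a measure over time for an electronic community would be one way to test interventions intended to raise its level of reciprocity.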
In the last couple of years, I have become particularly interested in the concept of social capital, as articulated by James Coleman, Robert Putnam, and others. Social capital is a resource for action that inheres in the way a set of people interact with each other. I'm still struggling for various ways to connect this concept to specific research questions and projects. Some of the ideas above are born from those struggles, and I'd welcome any project ideas or new ways of thinking about these problems.
Social Impact Of Information Technology
University of Michigan
A great deal of attention has been given to new information technology as the main empirical force changing the wage structure and giving rise to wage inequality. Yet something on the order of skill-biased technical change is usually given no formal representation. The theory that could actually explain the changing wage structure is some type of unbalanced growth model. In fact the theory that could apply is not too hard to imagine. It is a closed economy "trade" model with "biased" technical change (Johnson and Stafford, 1998). Skilled and unskilled workers produce different goods. Suppose that there are three goods. Throughout, skilled workers produce Good A (professional services, most obviously), and less skilled workers produce Good C (including basic retailing). Initially, let us suppose that there is a large Good B sector, such as manufacturing and some other services, produced by less skilled workers. Then the new technology appears. It improves the ability of skilled workers to produce the Good B output, previously the domain of the less skilled workers.
What in general will happen to the equilibrium when this skill-biased technological progress occurs? The average real wage will rise, but the skilled workers will get more than 100 percent of the benefit, implying that the real wage of less skilled workers will fall. In contrast, if the new technology had allowed the skilled workers to be more productive at their traditional specialty (Good A), then the real wage of all workers would have risen.
A model of this simple sort would go a long way in organizing thought about some of the patterns reported in the literature on changing wage structure. Skilled workers have been substituted for less skilled workers in many Organisation for Economic Cooperation and Development manufacturing industries, for example. In that (Good B) industry there has been a rise in the ratio of nonproduction to production workers, and overall growth in manufacturing productivity has been strong. In contrast, high-skill service-sector (Good A) productivity growth has been generally slow. One need only think of higher education and legal services (and possibly medicine) as cases in point. The terms of trade within the domestic economy could be defined as the prices of goods produced by skilled workers and others. The price of the Good B sector has fallen because of biased technical change, and as additional less-skilled workers become available to produce more
traditional Good C products such as retail services, they experience deteriorating terms of (internal) trade. For some countries with rather little trade, such as the United States, the closed-economy aspect of such a framework is most empirically relevant. For other countries, such as Japan, both trade and external as well as internal technological effects will be important to incorporate in an assessment of wage pressures.
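The verbal model above can be given a minimal Ricardian formalization; this is a sketch under assumed linear technologies, not the Johnson and Stafford (1998) specification itself. Let a_A and a_B be a skilled worker's per-period output in Goods A and B, b_B and b_C a less skilled worker's output in Goods B and C, and p_A, p_B, p_C the goods prices.

```latex
% Each skill group works wherever its earnings are highest:
w_s = \max(p_A a_A,\; p_B a_B), \qquad
w_u = \max(p_B b_B,\; p_C b_C).
```

Skill-biased technical change raises a_B. Once p_B a_B exceeds p_A a_A, skilled workers enter the Good B sector; the expanded supply drives p_B down, and with it the less skilled wage p_B b_B, until less skilled workers shift into Good C, depressing p_C in turn, which is the deterioration in their internal terms of trade described above.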
Consider the price of tuition and the price of routine health care assistance provided by home health care aides. Data from the Bureau of Labor Statistics wage series show the latter to have been falling below the level of inflation since 1973. On a more optimistic note, if the new technology can be applied to improve the productivity of skilled workers in their traditional domains, both skilled and unskilled workers would be better off. The new information technology is so far helping the nonmarket productivity of skilled workers: use of the Internet will be providing a huge array of services via the household sector. Data available to study this aspect of technical change are close to nonexistent. The real standard of living may come to depend more on the nonmarket sector. We have developed a methodology for studying the value of nonmarket output through the use of time-use diary data, based on a grant from the National Science Foundation in the mid 1970s and early 1980s. We are currently studying the access of children under the age of 12 to information technology with time-use diaries both in the home and in schools. The data are being collected as a special supplement to the Panel Study of Income Dynamics, funded by the National Institute of Child Health and Human Development. Copies of our instruments are available at <http://www.umich.edu/~psid/>.
The Uncalming Effects Of Digital
Xerox Palo Alto Research Center
The important waves of technological change are those that fundamentally alter the place of technology in our lives. What matters is not technology itself, but its relationship to us.
In the past 50 years of computation there have been two great trends in this relationship: the mainframe relationship and the PC relationship. Today the Internet is carrying us through an era of widespread distributed computing toward one of ubiquitous computing, characterized by deeply embedding computation throughout the world. Ubiquitous computing will require a new approach to fitting technology to our lives, an approach we call "calm computing." Calm computing is not a natural result of increased use of technology; in fact, unbridled digital technology naturally decreases calm.
Imagine the following experiment; or if you are brave, try it. Find two empty cardboard toilet paper tubes, and tape them over your eyes so that you are looking out through them. You now have no view up, down, left, or right, only a narrow cone of view straight in front. Now walk. What happens? You have lost the flow of information from the periphery into the center, and have only the center. Everything that you see is a surprise, because it just pops in without warning. Your head must constantly swivel or you will trip, run into things, miss people passing you, and generally bumble.
If you wear toilet paper tubes for a few hours you will feel exhausted and highly anxious. Your head will have been constantly swiveling to try to partially compensate for the lack of peripheral vision. You will feel overloaded with all the work you did to keep up with your world. You will be emotionally drained by all the surprises when things popped into view and when you had to compensate for the unexpected.
Wearing toilet paper tubes is like living in the digital age, where the feeling of exhaustion is called "information overload." Digital technology, like toilet paper tubes, tends to deliver information with a set of biases. These biases push us toward the center of our awareness and tend to leave out the essential periphery that helps us make sense of and anticipate the world around us. More and more of the economy and business and life are mediated through digital technology. If we lose the periphery, we may be smarter about whatever is right in front
of us, but stupid to the point of ignorance about what is nearby but out of sight behind the toilet paper tube.
Proper action has always meant keeping the periphery and center in balance. The center is the domain of conscious, symbolic thought and action. The periphery is the domain of flow, of context, of intuition, and of understanding. The center is the domain of explicit knowledge of what to do, the periphery the domain of knowing how to do it. Take away either of these and near paralysis results.
There are 10 biases in today's digital technology that contribute to unbalancing center and periphery. These are saying, homogenizing, stripping, reframing, mono-sensing, deflowing, defamiliarizing, "uglying," reifying, and destabilizing.
1. Saying names the tendency of digital technology to make everything explicit.
2. Homogenizing is the delivery of digital information in an ASCII monotone that puts all information into the same pigeonhole.
3. Stripping is the loss of social context and frame that frequently comes with digital transmission.
4. Reframing results because there is always a social context and frame, and after stripping, a confusing or illegitimate context may result.
5. Mono-sensing is the emphasis on the eye over all other senses, reducing our inputs, our style, and our intelligence.
6. Deflowing is the loss of the context that lets us enter the "flow state" of greatest intelligence and creativity, and so reduces our anticipation and history.
7. Defamiliarizing is the loss of familiar social practices as we try to work and live on the net.
8. "Uglying" names, with an ugly word, the uncomfortable feeling with which the low state of design in digital technology leaves us.
9. Reifying results when implicit practices are cast in stone, removing the white space that lets anything work, as when a company puts all its processes online.
10. Destabilizing names our emotional state after buffeting by all of the above.
The above add up to a bias toward the center, and away from the periphery.
Understanding the power of balance between focus and periphery, and caring about both, can be a tremendous source of advantage in the digital age. Digital technology, through its homogeneous, ubiquitous, and voluminous provision of information, can enable an even richer periphery for action. The danger comes if we believe that only focus is effective. Trying to focus on the increasing volume of bits can overwhelm us, and we can badly misuse our full intelligence by ignoring attunement, community, and peripheral awareness. The opportunity for focus is greater than ever before, but only if we recognize that there is no focus without periphery, there is no center without a surround. If we can stay in balance, we can expect a world of greater satisfaction and effectiveness.