
B
Position Papers Submitted by Workshop Participants



Research on Information Technology Impacts

Paul Attewell
Graduate School and University Center, City University of New York

A few comments about past impact research are appropriate at the outset, because there are some important issues that ought to be considered before moving on to new topics and research priorities. We made some intellectual mistakes in the past that we can avoid repeating in the future. Since some of the workshop participants are new to this area of study, I think it would be helpful to lay out some of these issues.

Because technological change in information technology applications has been so rapid during the last 25 years, there has been a constant temptation to turn away from studies of current outcomes of existing information technologies and instead turn toward a kind of futurology or speculative stance about what might be the case in the future. Examples are found in the agenda for this workshop, which poses questions such as "How will the nature of the employment contract change? … What will be the impact on K-12?" The future is important, and these kinds of questions are valid, but this stance has had some unfortunate implications for impact research in the past. Among these are the following:

• A tendency to support the development of theoretical models to predict what will be or might be the case, rather than pursue empirical studies of what actually is happening now. Since theorizing tends to be cheaper than data collection, this has tended to skew funding toward the former and has often given this field a rather speculative feel. But speculation, even by very smart people, has often been far off the mark.

• A tendency to fund studies of "cutting-edge" applications, which tend to be located in large, dynamic (and resource-rich) firms or superior schools, rather than looking at the kind of "ordinary" IT that is in place in average workplaces and ordinary schools—the point being that what one observes in the largest, most resource-rich, and most committed settings is not a good predictor of the typical effects of a technology in the larger world. (In the field of program evaluation there is a parallel phenomenon known as the demonstration effect: innovative programs are shown to work well initially in well-supported demonstration projects but then prove much less effective when widely implemented in more ordinary settings.) Studying cutting-edge applications in cutting-edge firms, schools, or universities is fascinating, but it is not the most rigorous approach to understanding how technological change is affecting the larger population of organizations and people. It tends to result in unrealistic scenarios.

• A tendency to direct money into prototyping new applications, and to rely on the authors of the prototype to do the impact assessment or performance evaluation themselves or not do an evaluation at all. This has occurred all over the "computer-supported cooperative work" field, as well as in the field of educational software. It is naive to expect that people who toil over developing new technology can provide an objective assessment of its performance, yet this approach dominates research. There is no equivalent in IT research of the "double blind" study, and replication is rare. Perhaps this is one arena in which engineers and social scientists could collaborate: if federal funders insisted that all prototype and development projects include an arm's-length performance assessment, this would be a major step forward.

• A tendency to discount findings that demonstrate negative or null impacts of IT as being intellectually uninteresting, on the grounds that such impacts simply reflect early versions or start-up problems that will disappear when the next generation of machinery or software comes online. Studies indicate that there are large discontinuance/abandonment/non-use rates for important and much-hyped IT products. (Examples are Kemerer's recent studies of abandonment of fourth-generation software development tools and Detroit's ripping out of advanced automated manufacturing from some plants.) Users of "what if" decision tools have been found to use them mechanistically even when shown that they are producing inferior decisions. Computer searches of databases using current methods have been shown to generate large numbers of bad hits, and also to miss large numbers of relevant items. If IT impact research were a normal social science discipline, such striking findings would be viewed as important scientific puzzles, unleashing a stream of follow-up research seeking insights into the human-computer interactions implied by these failures. By and large this has not occurred, because of a widespread mentality that says that any problems with IT that impact studies unearth are simply minor implementation issues and will be overcome by the next generation of technology. This mentality reminds me of the anthropologist Evans-Pritchard's studies of Azande witchcraft. Whenever he contrived to show African believers in witchcraft that casting a spell or curse on someone did not work in a concrete case, the believers were unshaken, retorting that of course witchcraft worked, but that the spell had been performed poorly in this case. IT does work, but impact research should spend much more time looking at the many settings in which it works very differently than intended, and should mine these cases, as well as the successes, in order to understand the full picture.
• One of the most common findings in prior IT impact studies has been that outcomes are far from uniform across all kinds of settings and contexts. In earlier years we looked for the impact of IT on (say) organizational centralization, and scholars tended to hew to one end or the other of a bipolar spectrum: centralization versus decentralization; upskilling versus deskilling; job destroying versus job creating. What scholars found, in almost every case, was that this was an unproductive way to conceptualize the issue. One almost always found evidence of both extremes of outcomes/impacts as well as many points in between (see Attewell and Rule, 1989). We finally realized that we were asking the wrong question. We should have asked, In what contexts does outcome A typically predominate, in what contexts does outcome B tend to prevail, and when does one see A and B in equal measure? We found that a technology does not usually have a single, uniform impact. The context or setting in which the same technology is used often produces strikingly different "impacts." This phenomenon has been discussed in terms of "web models" (Kling), "structural contingency theory" (Attewell), or Robey's "plus ça change" model. All imply that we should fully appreciate the role of context in technology outcomes, and that we should therefore expend sufficient research effort to measure the context and to delineate its interactions with the technology. If we fail to do this, we return to the old "black box" paradigm, that is, attempting to measure only the input (say, a particular software program) and the outcome (say, kids' test scores) without bothering with the context (the classroom, the kids' family backgrounds) or the causal mechanisms. Black-box research on impacts often discovered "inconsistent" outcomes across studies but proved unable to show why there was so much variation, because it neglected to measure the contextual variables that were moderating the effects of the input upon the output. For example, the old paradigm would phrase a research question so as to ask whether or not home PCs improve kids' school performance. In contrast, research within the current contextual paradigm would ask under what conditions having a PC at home affects students' school outcomes. A piece of my own work has indicated, for example, that having a home PC currently has a minimal effect on the school performance scores of poor and minority kids but is associated with substantial positive effects on the school performance of kids with high socioeconomic status (SES), when other factors are controlled for (Attewell and Battle, 1997). Race and class/SES, in this example, prove to be very important contextual features moderating the impact of home PCs on school performance. (A minimal sketch of this kind of moderation analysis appears after this list.)

• Workshop organizers should be aware that, because of the last three decades of research and the importance of context as discussed above, many distinguished scholars of technology avoid the term "technology impact." Using this term in framing the question would be viewed by some of them as indicating an ignorance of the body of scholarship in technology studies. For them, the term "impact" connotes a kind of technological determinism that is very dated and widely discredited. Personally, I am not so averse to the term "impact," but I do agree with their larger point about avoiding models based on simple technological determinism.
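To make the moderation point concrete, here is a minimal sketch of the kind of contextual analysis described above. It is purely illustrative: the data are synthetic, the variable names are mine, and the effect sizes are invented rather than taken from Attewell and Battle (1997). The contrast is between estimating a single "PC effect" and letting that effect vary with SES through an interaction term.

# Minimal sketch of a moderation (interaction) analysis on synthetic data.
# Variable names and effect sizes are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
ses = rng.normal(0, 1, n)                 # standardized socioeconomic status
home_pc = rng.binomial(1, 0.5, n)         # 1 = student has a PC at home
# Synthetic "truth": the home-PC effect grows with SES (zero at average SES).
score = 50 + 2.0 * ses + 1.5 * home_pc * ses + rng.normal(0, 5, n)
df = pd.DataFrame({"score": score, "ses": ses, "home_pc": home_pc})

# Black-box question: "Does a home PC raise scores?" (one coefficient, no context)
black_box = smf.ols("score ~ home_pc", data=df).fit()

# Contextual question: "For whom does it raise scores?" (interaction with SES)
contextual = smf.ols("score ~ home_pc * ses", data=df).fit()

print(black_box.params["home_pc"])        # averages the effect over all contexts
print(contextual.params["home_pc:ses"])   # the moderating effect of SES

In this synthetic example the black-box regression reports a near-zero average effect, while the interaction model recovers a pattern in which the benefit is concentrated among high-SES students; which pattern holds in real data is, of course, the empirical question.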

Distilling these arguments into positive recommendations:

1. Future research should pursue empirical studies of existing technologies in real settings, as distinct from speculative or purely theoretical exercises.
2. Care should be taken to include representative organizations/settings, not just cutting-edge or high-tech ones.
3. Studies of unintended consequences of IT, such as failures and discontinuance, are important for what they tell us about these technologies and about the process of change more generally. Researchers should be interested in the full range of "impacts"—intended and unintended.
4. Projects aimed at developing technology prototypes should routinely include a performance assessment or evaluation, and the latter should be conducted at arm's length from the former.
5. Contextual variables should be studied rigorously, and their moderating effects on technology outcomes should be a major part of inquiry.
6. We should reconceptualize what we are doing as social and economic studies of computing and communications technologies rather than technology impact studies, and try to avoid technological determinism.

To move to the request about specific areas for research, here are some suggestions:

1. The "productivity paradox," in my opinion, remains an important and unresolved issue. However, I suggest that we should move beyond dichotomous thinking (Does information technology have a payoff, or not?) and ask, In what areas/applications/settings do we see payoffs, in what areas don't we, and why? What mechanisms can be identified that attenuate potential payoffs, and how do we measure them? What interactions and contexts explain variation in productivity outcomes?

2. Skills. There is anecdotal evidence that the range in performance levels in computer-related work is greater than that found in noncomputerized tasks. In other words, the gap between skilled and mediocre users is larger in computer-related work. This suggests that skills in computer work are less well diffused, or shared less widely, than in other kinds of tasks. We need research on what constitutes skilled versus unskilled performance in computer work of various kinds, and a better understanding of why so many of us make mediocre use of these tools.

3. Teenagers. I suspect that personal computers are changing the lives of teenagers more than those of most other age cohorts, and that is both an opportunity and a concern. Computerized communication affords powerful opportunities for social affiliation (e.g., Sproull and Faraj, 1995) and for playing with identity, both preoccupations of adolescence. There have already been studies suggesting that teenagers are spending less time watching TV and more on the Web. There is a host of policy issues surrounding their use of the Web. But our knowledge base of how young people are using the Web, and what they are getting out of it, is too sparse.

4. Education. As a researcher I find the literature on educational computing quite maddening. There are exciting claims of accelerated learning using computerized tools, but the research rarely gets replicated, and even the most lauded programs (e.g., the algebra tutor at Carnegie Mellon) never seem to cross into public use, in part because these prototypes are built on UNIX platforms in esoteric languages. As a result the field does not progress in a cumulative manner. There is clearly room for a serious review and analysis of the state of the art in educational software, and for research on the barriers to future progress of IT in education. Universal access to the Web is the only area I know of that has received systematic treatment.

What If All Information Were Readily Available to All?

Joseph Farrell
Department of Economics, University of California, Berkeley

Rapid improvements in information technology raise two grand issues. First, are we moving toward a world in which, to a reasonable approximation, all "information" (not, of course, the same as knowledge) is readily available to all, or are there major obstacles in the way that may prevent us from getting to that point? For instance, is there no such thing as "all information" relevant to a particular topic? Are standards problems, intellectual property rights problems, database search limitations, or other issues likely to bound us well away from that "all information available" state?

Second, if we do get to that state, what will it be like? Much of today's employment consists of clumsily dealing with information. Will the demands of more information be greater or less? If the problem gets "solved" rather than just increasingly addressed, what are the other main things that need to be done in an advanced society—in other words, what will today's information manipulators do instead?

Critical Issues Relating to Impacts of Information Technology: Areas for Future Research and Discussion

Alexander J. Field
Santa Clara University

There are several key issues that concern me as a scholar. First, as an economic historian and as someone who looks retrospectively as well as prospectively, I believe we face a major issue involving the archiving of data. There are two main problems. People say of magnetic media that they last 5 years or until they wear out, whichever comes first. That is probably a bit pessimistic. But even if the media persist, what about the input-output devices? It is getting more and more difficult to find a 5.25-inch drive, and woe to him or her who has data on 8-inch floppies! Tape backups are sometimes even worse. New backup software is sometimes not backwards compatible, so that one needs old copies of backup software as well as a compatible tape drive in order to restore data.

We need mechanisms to ensure the retrievability of records that previously would have been stored as printed records. This issue is at least as important for individual records (both personal and professional) as for those pertaining to the corporation or organization as a separate legal entity. Whatever media are used, we need them to be at least as durable and stable as microfilm. Ideally these media should be relatively inexpensive, and equipment to read and/or write on them should be standardized and widely accessible. Will individual and private enterprise be sufficient to ensure retrievability? Is there an externality in terms of ensuring access that would warrant government subsidy or intervention in this area, perhaps as part of the activity of the National Institute of Standards and Technology?

A second issue: As a scholar I look forward to tremendous opportunities in terms of the archiving of old journal runs. This has an enormous potential capital savings impact (consider the linear feet of bookshelves in faculty offices that might be liberated). I look forward to being able to access 100-year runs of journals such as the American Economic Review, from CD-ROM or over a network, through software such as Adobe Acrobat (so that the text is searchable). I see this as less important for books and monographs, where being able to read through an entire volume, which presumably has some coherence, will still be desirable. Nevertheless, the ability to search the text of scholarly monographs would be useful. The cost of converting newly published material to this form will be small, since most of it now exists in machine-readable form before it goes to be typeset. The real challenge will be older works. There is a potential for enormous efficiencies here in terms of research libraries and scholarly research. But who will pay? Is there a role for the Library of Congress? Can we get to the point that interlibrary loan involves the simple downloading of a large file? Will scholars assemble libraries of CD-ROMs attached to personal computers? Will they invest in jukeboxes so that the disks are available and retrievable when needed? (CD-ROMs can be as inconvenient as computers were prior to the advent of hard disks—one can never seem to find the disks when one needs them, and their smaller size renders them more vulnerable to misplacement than books.) Or will the material be available through servers in libraries or over commercial networks? Obviously, copyright issues are relevant for recently published works, but I am interested in materials for which copyright is no longer relevant. How will this affect the publishing business?

Finally, let me comment on ways in which new instructional technologies will affect the craft of teaching. I believe firmly that advances in information technology will play an important role in complementing rather than eliminating traditional classroom instruction. Television and the videotape recorder were each heralded as sounding the death knell of traditional instruction. There is no evidence that this has occurred, nor that recent advances will have this effect, any more than computers have eliminated the use of paper or videoconferencing facilities have spelled the demise of the 747.

The effective instructor acts in a complex mixture of roles. In one role the instructor is a supplier of services to students (particularly when they are enrolled in course work beyond the age of compulsory schooling laws). In terms of this relationship students are in a real sense customers. But the effective instructor occupies another role as well—as, in a sense, a supervisor of students—and plays a part in motivating, encouraging, evaluating, and developing students that is totally foreign to the service provider-customer model. For any topic there will always be a small percentage of prospective students with the necessary background, motivation, and self-discipline to learn from self-paced workbooks or computer-assisted instruction. For the majority of students, however, the presence of a live instructor will, in my view, continue to be far more effective than a computer-assisted counterpart in facilitating positive educational outcomes, just as, for most work relationships, a live supervisor is going to be more effective than a computer replacement. The most important impact of information technology will likely occur in increasing the productivity of the hours students spend outside of the classroom.

Several years ago many universities, including my own, built computer classrooms with networked computers for every one or two students. While these have proved effective for training in the use of various kinds of software, in most cases they proved disastrous for standard classroom instruction. The computers created line-of-sight obstacles between the instructor and students, and students could sometimes not resist the temptation to play computer games during class time. In some instances such labs have been ripped out. Nor am I persuaded that the increasing use of presentation software on average improves the efficacy of classroom communication. The dimming of lights and the focusing of attention on an overhead screen distract attention from the facial expressions and body language of the instructor, giving up two of the most powerful benefits of live instruction. Expensive overhead cameras that convert documents to a video feed currently have lower resolution than standard overhead projectors.

The greatest potential for new information technology lies in improving the productivity of time spent outside the classroom. The norm of accrediting agencies is 2 hours' outside work for 1 hour in class. Making syllabi, solutions to problem sets, and, where copyright law permits, assigned reading materials available on the Internet or an intranet offers tremendous convenience. E-mail and more sophisticated groupware vastly simplify communication between students and faculty and among students who may be engaged in group projects and face enormous logistical challenges in setting up group meeting times.

Questions for Research

Jeffrey K. MacKie-Mason
Department of Economics, and School of Information, University of Michigan

What is currently known? What questions need to be addressed?

Costs are falling exponentially for technologies built primarily with silicon and sand: computing cycles and bandwidth. The decline in data storage costs would also seem remarkable but for the comparison. Almost, but not quite the same thing: technological progress in these areas is accelerating. (Possible research question: Is it? How is it measurable?)
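One illustrative way to make that parenthetical question operational (this sketch and its cost figures are mine, not the author's): fit log cost against time and ask whether the proportional rate of decline is itself increasing, which would show up as a negative quadratic term.

# Is the proportional rate of cost decline itself accelerating?
# The cost-per-unit figures below are made up for illustration.
import numpy as np

years = np.array([1985, 1987, 1989, 1991, 1993, 1995, 1997], dtype=float)
cost = np.array([100.0, 45.0, 20.0, 8.5, 3.5, 1.4, 0.5])   # arbitrary units

t = years - years[0]
# log(cost) = a + b*t + c*t^2: b < 0 means decline, c < 0 means the decline accelerates.
c, b, a = np.polyfit(t, np.log(cost), deg=2)

print(f"quadratic term c = {c:.4f} (negative => accelerating proportional decline)")
print(f"initial annual rate of decline: {1 - np.exp(b):.1%}")

A serious version would, of course, need real price indexes for computation, bandwidth, and storage, and would have to confront the quality-adjustment problems such indexes raise.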

Ignoring cost, remarkable new things are possible each year. (A thousand IBM 360s connected with RS-232 cables would not a parallel-processing supercomputer have made.)

We have a long history of adapting to falling costs and technological progress. But we are not well adapted to such fast change. In the context of our history and institutions—social, political, cultural, at least—such rapid advancement is deviant. Deviancy threatens existing institutions.

Institutions (conventions; standard practices; social, business, and political norms) evolve to deal with problems that undermine the ideal of a competitive market equilibrium: positive externalities (standardization), public goods (government provision), and transaction costs (default rules, social conventions). But when relative costs and technological opportunities change rapidly, the problems that the institutions solved are no longer the same. Problems are changing rapidly, but institutions change slowly and reluctantly. New problems, old institutions: things break, or progress is delayed. Examples:

• International spectrum allocation: need for global bandwidth reservations for low earth orbit satellites and other wireless networks.
• Governance of the Internet: need for assignment of domain names and Internet protocol numbers, routing policy, content control.
• International banking and currency control.
• International taxation, currently largely source-based: Where is cyber activity taking place? How easy will income shifting to find a low-tax-rate base become?
• Church, school, and other local community institutions being challenged as core communications channels for shared values, culture, and social norms. Rise of disembodied, asynchronous "community" (e-mail, Usenet, special interest groups). Paradox of improved communications channels increasing balkanization?

So, at least one set of fundamentally important questions for research involves looking beneath specific impacts to uncover the institutional structures, assumptions, and rigidities that are becoming dysfunctional, and then considering how to facilitate the transition to new institutions that are likely to accommodate the effects of exponential decreases in the costs of sand and silicon.

• What government core institutions underlie market interventions, subsidy and tax policies, and trade policies? What educational structures? What legal institutions?
• What do we take for granted about intellectual property (before we get to the question of protection)?
• What mechanisms for establishing trust, evaluating, authenticating, and providing assurance underlie conventional commerce, and how can a system of trust be evolved for electronic commerce?
• What law applies to artificial agents who participate in information exchange? What socially acceptable policies exist for dealing with deadly threats to the public health like outbreaks of Level IV computer viruses (Ebola-PC, Ebola-Mac)?
• What does universal service mean? When should government treat emergent network services with large potential positive network externalities as public goods that should be subsidized?
• Good advice: Assume CPU cycles and bandwidth are free. What then?

What will be useful methods to determine answers to such questions? The cycle of change strains some traditional methods. It is hard to get data from "natural experiments" on which generalizable hypotheses can be tested. For example, Internet congestion seems to be a problem. Various approaches to allocating scarce, easily congested resources have been proposed, including different types of usage-sensitive pricing. Lots of concern: Will this increase information inequality? Squelch creative explosion of Internet applications? Slow adoption? Chase away independent, voluntary provision of content in exchange for industrialized creation and control of mass-market content?

Some fundamental research questions: How much consumer surplus is lost due to congestion? (How much does waiting "hurt"? What applications are we not getting to use because they can't tolerate unpredictable congestion, and how much are those worth to us?) How would different classes of users respond to usage-sensitive pricing (if it constituted a small fraction of their consumption budget)?

Thus, would the benefits (of less congestion in current services, and new services enabled with guaranteed quality of service) outweigh the adverse effects on adoption rate and social externalities of communication, reduced innovation, change in content, change (not necessarily increase!) in information inequality?
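One conventional way to frame the first of these questions (the formulation is mine, not the author's) is to treat congestion delay as an implicit price. If D(p) is demand for network use at full price p, and congestion imposes a delay cost worth an extra tau per unit of use, the per-period surplus loss from congestion is approximately

\[
\Delta CS \;\approx\; \int_{p}^{p+\tau} D(x)\,dx \;\approx\; \tau \cdot \frac{D(p) + D(p+\tau)}{2},
\]

so measuring the loss comes down to estimating how usage responds to the full (money plus time) price—exactly the demand information that, as noted below, we largely lack.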

To answer these questions, we might normally run consumer demand studies to estimate user valuation of various service qualities at different prices, looking for natural experiments to assess the value of social externalities. The problem: no data! And even as data start to become available, the data-generating process is nonstationary (stationarity is a prerequisite for classical statistical estimation and analysis): new services are introduced, users are on a learning curve, participation externalities are riding up the adoption curve. Example: How much do we learn about future Internet demand if we study pre-WWW demand? And if we wait to observe, strong network externalities and resulting standardization may lock us into policies and standardized solutions that are inefficient, inflexible, and limiting (e.g., Wintel architecture; the "mistakes" of QWERTY and VHS standards).

The traditional pace of research and institutional adaptation is too slow. Possible implication: Social science research may need to do more field and lab experimentation, rather than waiting around for the real world to toss up natural experiments. There may also have to be some merger between traditional social science and engineering methodologies—some attempt to learn from results that are not fully general, developed, and rigorously tested following a modernist hypothesis-testing method. Thus, look to find—and design—systems, policies, and institutions that "just work." Think about how to make them work better, without clinging too tightly to the "optimality" paradigm. Internet litmus test: "running code that works."

Likewise, traditional conceptual structures may need reworking. Many observers—but not economists for the most part—have suggested that "traditional economics is dead," that there is a "new" economics of information. Yet the "special" features of information problems are familiar in economics: high fixed costs plus low variable costs, congestion externalities, positive network externalities, and tipping. What may be new is that several of these become simultaneously significant, and for a greater, more essential share of exchange. We are used to thinking of these and designing policies for them as special cases. Nonetheless, we should not blithely discard hard-won principles. For example, some would have it that soon bandwidth will no longer be scarce: it will be infinite (effectively) and free. Not by the laws of physics, of course. Has anything ever become infinite and free? No, just relatively less scarce. It seems still very useful to study the relative scarcity of different resources—silicon, sand, labor, creativity, attention—and to focus on how relative scarcity is changing. Where the change in scarcity is occurring is where the opportunities and problems lie. The end of scarcity is a red herring.

A few areas on which to focus research:

• Information warfare: survivability of communications networks (civilian as much as military); institutions and policies for response to transnational terrorism and criminality (that uses or attacks information infrastructure);
• Artificial agent economies: how to harness the efficiency, stability, and robustness of competitive economies for real-time management and control of complex systems (electric grids, telecommunications networks, smart highways, spread-spectrum bandwidth allocation); and
• Evaluation and social filtering: the economics of attention, trust, and reputation. Funding models for information and information services, and their effect on the creation and distribution of content.

Electronic Interactions

Paul Resnick
AT&T Laboratories

The Internet offers new opportunities both to support and to study interactions among people who do not know each other very well. I believe that recommendations, trust, reputations, and reciprocity will play important roles in such interactions and thus deserve attention from interdisciplinary research teams. There are interesting topics in all stages of commercial interactions, from search processes to negotiation to consummation of transactions:

• Recommendations and referrals can help people to find interesting information and vendors. There is a need for continued research on techniques for gathering and processing recommendations (this is sometimes called collaborative filtering; a small illustrative sketch follows this list). Compilation of "grand challenge" data sets of recommendations would help this field advance.

• The structure of negotiation protocols and the availability of information about past behavior of participants will affect the kinds of outcome that are possible. Economists have theoretical results regarding many simplified negotiation scenarios, but there is a need for interdisciplinary research to apply and extend these results to practical problems of protocol design.

• Finally, in the transaction consummation phase, much effort has focused on secure payment systems. Some transactions, however, require a physical consummation (mailing of a product, for example) and hence must rely on trust in some form. Research can explore the role of reputations in creating trustworthy (though not completely secure) contract consummation. Such transactions may also have lower transaction costs than secure payment systems, even in the realm of purely electronic transactions.
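A minimal sketch of what is meant by collaborative filtering (the ratings matrix and names below are invented, and production systems are far more elaborate): predict a user's rating of an item they have not seen as a similarity-weighted average of other users' ratings.

# User-based collaborative filtering on a toy ratings matrix (0 = not yet rated).
# Entirely illustrative; names and numbers are invented.
import numpy as np

items = ["article_a", "article_b", "article_c", "article_d"]
ratings = np.array([
    [5.0, 4.0, 1.0, 0.0],   # target user; has not rated article_d
    [4.0, 5.0, 2.0, 4.5],
    [1.0, 2.0, 5.0, 1.0],
    [5.0, 3.0, 1.0, 5.0],
])

def cosine(u, v):
    mask = (u > 0) & (v > 0)              # compare only co-rated items
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

target = ratings[0]
others = ratings[1:]
sims = np.array([cosine(target, other) for other in others])

item = items.index("article_d")
weights = sims * (others[:, item] > 0)    # ignore users who also lack a rating
predicted = weights @ others[:, item] / weights.sum()
print(f"predicted rating for article_d: {predicted:.2f}")

Shared benchmark data sets of the "grand challenge" sort mentioned above are what would make such techniques comparable across research groups.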

Noncommercial electronic interactions also offer many interesting opportunities. Electronically mediated interactions are visible and available for analysis in a way that face-to-face interactions typically are not. For example, "softbots" could scour the Web to create various graphs of relations between people and information resources. Social network theorists have already devised a number of techniques for analyzing such graphs. One possible application would be to hypothesize about and then analyze the credibility of information sources in various parts of a social network. Another possible application of network analysis would be to analyze the flow of reciprocity (or gift exchange, as Esther Dyson put it) and perhaps devise ways to increase a social network's level of reciprocity.

In the last couple of years, I have become particularly interested in the concept of social capital, as articulated by James Coleman, Robert Putnam, and others. Social capital is a resource for action that inheres in the way a set of people interact with each other. I am still struggling to find ways to connect this concept to specific research questions and projects. Some of the ideas above are born from those struggles, and I'd welcome any project ideas or new ways of thinking about these problems.
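A toy version of the credibility idea sketched above (the graph, the names, and the use of PageRank as a stand-in for credibility are all my illustrative assumptions, not Resnick's proposal): build a graph of who cites which information sources and score each source by how much attention flows to it.

# Toy "credibility" analysis of an information-sharing graph.
# The links and the PageRank-as-credibility proxy are illustrative assumptions only.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("alice", "source_1"), ("alice", "source_2"),
    ("bob", "source_1"), ("carol", "source_1"),
    ("carol", "source_3"), ("dave", "source_2"),
    ("bob", "alice"), ("carol", "alice"),   # people can also endorse people
])

scores = nx.pagerank(G, alpha=0.85)
sources = {n: s for n, s in scores.items() if n.startswith("source_")}
for name, score in sorted(sources.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")

The interesting research questions are, of course, which graph to build and whether any such score tracks credibility as people actually judge it.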

Social Impact of Information Technology

Frank Stafford
University of Michigan

A great deal of attention has been given to new information technology as the main empirical force changing the wage structure and giving rise to wage inequality. Yet something on the order of skill-biased technical change is usually given no formal representation. The theory that could actually explain the changing wage structure is some type of unbalanced growth model. In fact the theory that could apply is not too hard to imagine. It is a closed-economy "trade" model with "biased" technical change (Johnson and Stafford, 1998).

Skilled and unskilled workers produce different goods. Suppose that there are three goods. Throughout, skilled workers produce Good A (professional services, most obviously), and less skilled workers produce Good C (including basic retailing). Initially, let us suppose that there is a large Good B sector, such as manufacturing and some other services, produced by less skilled workers. Then the new technology appears. It improves the ability of skilled workers to produce the Good B output, previously the domain of the less skilled workers. What in general will happen to the equilibrium when this skill-biased technological progress occurs? The average real wage will rise, but the skilled workers will get more than 100 percent of the benefit, implying that the real wage of less skilled workers will fall. In contrast, if the new technology had allowed the skilled workers to be more productive at their traditional specialty (Good A), then the real wage of all workers would have risen.
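A minimal formalization of this verbal account (the notation and the linear technologies are my shorthand, not necessarily the specification in Johnson and Stafford, 1998): let skilled labor S and unskilled labor U be supplied inelastically, with

\[
y_A = \alpha_A S_A, \qquad y_B = \alpha_B S_B + \beta_B U_B, \qquad y_C = \beta_C U_C,
\qquad S_A + S_B = S, \qquad U_B + U_C = U,
\]

and identical homothetic preferences over the three goods. Competition ties each wage to the value of marginal product in whatever sectors the group actually works:

\[
w_S = p_A \alpha_A \;(= p_B \alpha_B \text{ if skilled labor also produces B}), \qquad
w_U = p_C \beta_C \;(= p_B \beta_B \text{ if unskilled labor still produces B}).
\]

The skill-biased change described in the text is a rise in \(\alpha_B\) large enough that Good B production migrates to skilled workers; the displaced unskilled labor crowds into Good C, pushing \(p_C\)—and with it \(w_U = p_C \beta_C\)—down relative to the overall price index, while \(w_S\) deflated by that index rises. A rise in \(\alpha_A\) instead would lower \(p_A\) and raise both real wages, which is the contrast drawn above.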

A model of this simple sort would go a long way in organizing thought about some of the patterns reported in the literature on the changing wage structure. Skilled workers have been substituted for less skilled workers in many Organisation for Economic Co-operation and Development manufacturing industries, for example. In that (Good B) industry there has been a rise in the ratio of nonproduction to production workers, and overall growth in manufacturing productivity has been strong. In contrast, high-skill service-sector (Good A) productivity growth has been generally slow. One need only think of higher education and legal services (and possibly medicine) as cases in point.

The terms of trade within the domestic economy could be defined as the prices of goods produced by skilled workers relative to those produced by others. The price of the Good B sector has fallen because of biased technical change, and as additional less-skilled workers become available to produce more traditional Good C products such as retail services, they experience deteriorating terms of (internal) trade. For some countries with rather little trade, such as the United States, the closed-economy aspect of such a framework is most empirically relevant. For other countries, such as Japan, both trade and external as well as internal technological effects will be important to incorporate in an assessment of wage pressures. Consider the price of tuition and the price of routine health care assistance provided by home health care aides. Data from the Bureau of Labor Statistics wage series show the latter to have been falling below the level of inflation since 1973.

On a more optimistic note, if the new technology can be applied to improve the productivity of skilled workers in their traditional domains, both skilled and unskilled workers would be better off. The new information technology is so far helping the nonmarket productivity of skilled workers: use of the Internet will be providing a huge array of services via the household sector. Data available to study this aspect of technical change are close to nonexistent. The real standard of living may come to depend more on the nonmarket sector.

We have developed a methodology for studying the value of nonmarket output through the use of time-use diary data, based on a grant from the National Science Foundation in the mid-1970s and early 1980s. We are currently studying the access of children under the age of 12 to information technology with time-use diaries both in the home and in schools. The data are being collected as a special supplement to the Panel Study of Income Dynamics, funded by the National Institute of Child Health and Human Development. Copies of our instruments are available at <http://www.umich.edu/~psid/>.

The Uncalming Effects of Digital Technology

Mark Weiser
Xerox Palo Alto Research Center

The important waves of technological change are those that fundamentally alter the place of technology in our lives. What matters is not technology itself, but its relationship to us. In the past 50 years of computation there have been two great trends in this relationship: the mainframe relationship and the PC relationship. Today the Internet is carrying us through an era of widespread distributed computing toward one of ubiquitous computing, characterized by deeply embedding computation throughout the world. Ubiquitous computing will require a new approach to fitting technology to our lives, an approach we call "calm computing." Calm computing is not a natural result of increased use of technology—in fact, unbridled digital technology naturally decreases calm.

Imagine the following experiment; or, if you are brave, try it. Find two empty cardboard toilet paper tubes, and tape them over your eyes so that you are looking out through them. You now have no view up, down, left, or right, only a narrow cone of view straight in front. Now walk. What happens? You have lost the flow of information from the periphery into the center, and have only the center. Everything that you see is a surprise, because it just pops in without warning. Your head must constantly swivel or you will trip, run into things, miss people passing you, and generally bumble.

If you wear toilet paper tubes for a few hours you will feel exhausted and highly anxious. Your head will have been constantly swiveling to try to partially compensate for the lack of peripheral vision. You will feel overloaded with all the work you did to keep up with your world. You will be emotionally drained by all the surprises when things popped into view and when you had to compensate for the unexpected.

Wearing toilet paper tubes is like living in the digital age, where the feeling of exhaustion is called "information overload." Digital technology, like toilet paper tubes, tends to deliver information with a set of biases. These biases push us toward the center of our awareness and tend to leave out the essential periphery that helps us make sense of and anticipate the world around us. More and more of the economy and business and life are mediated through digital technology. If we lose the periphery, we may be smarter about whatever is right in front of us, but stupid to the point of ignorance about what is nearby but out of sight behind the toilet paper tube.

Proper action has always meant keeping the periphery and center in balance. The center is the domain of conscious, symbolic thought and action. The periphery is the domain of flow, of context, of intuition, and of understanding. The center is the domain of explicit knowledge of what to do, the periphery the domain of knowing how to do it. Take away either of these and near paralysis results.

There are 10 biases in today's digital technology that contribute to unbalancing center and periphery. These are saying, homogenizing, stripping, reframing, mono-sensing, deflowing, defamiliarizing, "uglying," reifying, and destabilizing.

1. Saying names the tendency of digital technology to make everything explicit.
2. Homogenizing is the delivery of digital information at an ASCII monotone that puts all information into the same pigeonhole.
3. Stripping is the loss of social context and frame that frequently comes with digital transmission.
4. Reframing results because there is always a social context and frame, and after stripping, a confusing or illegitimate context may result.
5. Mono-sensing is the emphasis on the eye over all other senses, reducing our inputs, our style, and our intelligence.
6. Deflowing is the loss of the context that lets us enter the "flow state" of greatest intelligence and creativity, and so reduces our anticipation and history.
7. Defamiliarizing is the loss of familiar social practices as we try to work and live on the net.
8. "Uglying" names, with an ugly word, the uncomfortable feeling with which the low state of design in digital technology leaves us.
9. Reifying results when implicit practices are cast in stone, removing the white space that lets anything work, as when a company puts all its processes online.
10. Destabilizing names our emotional state after buffeting from all of the above.

The above add up to a bias toward the center, and away from the periphery. Understanding the power of balance between focus and periphery, and caring about both, can be a tremendous source of advantage in the digital age. Digital technology, through its homogeneous, ubiquitous, and voluminous provision of information, can enable an even richer periphery for action. The danger comes if we believe that only focus is effective. Trying to focus on the increasing volume of bits can overwhelm us, and we can badly misuse our full intelligence by ignoring attunement, community, and peripheral awareness. The opportunity for focus is greater than ever before, but only if we recognize that there is no focus without periphery, there is no center without a surround. If we can stay in balance, we can expect a world of greater satisfaction and effectiveness.
