One of the most common findings in prior studies of information technology's (IT's) impact has been that outcomes are far from uniform across settings and contexts. In earlier years we looked for the impact of IT on, say, organizational centralization, and scholars tended to hew to one end or the other of a bipolar spectrum: centralization versus decentralization, upskilling versus deskilling, job destroying versus job creating. What scholars found, in almost every case, was that this was an unproductive way to conceptualize the issue. One almost always found evidence of both extremes of outcomes or impacts, as well as many points in between (see Attewell and Rule, 1989). We finally realized that we were asking the wrong question. We should have asked: In what contexts does outcome A typically predominate, in what contexts does outcome B tend to prevail, and when does one see A and B in equal measure?
We found that a technology, by itself, does not usually have an impact. The context or setting in which the same technology is used often produces strikingly different "impacts." This phenomenon has been discussed in terms of "Web models" (Kling), "structural contingency theory" (Attewell), or Robey's "Plus Ça Change" model. All imply that we must fully appreciate the role of context in technology outcomes and therefore expend sufficient research effort to measure the context and to delineate its interactions with the technology. If we fail to do this, we return to the old "black box" paradigm, that is, attempting to measure only the input (say, a particular software program) and the outcome (say, kids' test scores) without bothering with the context (the classroom, the kids' family backgrounds) or the causal mechanisms.
Black box research on impacts often discovered "inconsistent" outcomes across studies but proved unable to show why there was so much variation, because it neglected to measure the contextual variables that were moderating the effects of the input on the output. For example, the old paradigm would phrase a research question so as to ask whether home PCs improve kids' school performance. In contrast, research within the current contextual paradigm would ask under what conditions having a PC at home affects students' school outcomes. A piece of my own work indicates, for example, that having a home PC currently has a minimal effect on the school performance scores of poor and minority kids but is associated with substantial positive effects on the school performance of kids with high socioeconomic status (SES), when other factors are controlled for (Attewell and Battle, 1997). Race and class/SES, in this example, prove to be very important contextual features moderating the impact of home PCs on school performance.
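The logic of a moderated effect can be made concrete with a small simulation. The sketch below is purely illustrative and is not Attewell and Battle's actual analysis or data: the effect sizes, score scale, and group structure are invented assumptions chosen only to show why averaging over context (the black box approach) hides the variation that a contextual analysis reveals.

```python
# Illustrative sketch (hypothetical numbers, not real study data):
# a contextual variable (SES) moderates the apparent "impact" of
# home PC ownership on a test score.
import random

random.seed(0)

def simulate_score(high_ses: bool, has_pc: bool) -> float:
    """Generate one synthetic student test score.

    Assumed effects: high-SES students gain substantially from a
    home PC; low-SES students gain only marginally.
    """
    base = 70.0 + (8.0 if high_ses else 0.0)
    pc_effect = (6.0 if high_ses else 1.0) if has_pc else 0.0
    return base + pc_effect + random.gauss(0, 5)  # noise around the mean

def mean_pc_gap(high_ses: bool, n: int = 2000) -> float:
    """Average score difference (PC owners minus non-owners) within one SES group."""
    with_pc = sum(simulate_score(high_ses, True) for _ in range(n)) / n
    without = sum(simulate_score(high_ses, False) for _ in range(n)) / n
    return with_pc - without

# A black box study pools everyone and reports one noisy, muddled "impact";
# a contextual study estimates the gap separately by SES and finds it differs.
print(f"high-SES PC gap: {mean_pc_gap(True):.1f}")   # near 6 by construction
print(f"low-SES PC gap:  {mean_pc_gap(False):.1f}")  # near 1 by construction
```

In regression terms, this is an interaction (moderation) effect: the coefficient on PC ownership depends on SES, so a model omitting the PC-by-SES interaction term misstates the effect for both groups.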
It is important to understand that, because of the last three decades of research and the importance of context discussed above, many distinguished scholars of technology avoid the term "technology impact" altogether. Using the term in framing a question would be viewed by some of them as indicating ignorance of the body of scholarship in technology studies. For them, "impact" connotes a kind of technological determinism that is dated and widely discredited. Personally, I am not so averse to the term, but I do agree with their larger point about avoiding models based on simple technological determinism.
Paul Attewell, "Research on Information Technology Impacts"
(see Appendix B of this volume)