Panel IV
Peripherals: Current Technology Trends, continued

INTRODUCTION

Michael Borrus

The Petkevich Group

Characterizing the meeting as “phenomenally interesting,” Mr. Borrus said the day’s presentations had made clear that the difficulty of measuring productivity growth in computing arose at least in part from the fact that value and functionality were constantly shifting. NVIDIA, for example, was trying to capture more of the value added by migrating software functionality to its chips; component manufacturers were trying to migrate system functionality into the component by adding processing capability to displays, to magnetic-storage components, or to networking components; and Veritas was trying to migrate network- or system-management functionality down to the software. An inherently difficult technical measurement problem was thus exacerbated because “who’s capturing the value keeps moving around based on changes in business strategy and the ability to execute.”

The current panel would offer two more interesting and diverse examples. The first of these, optical-storage technology, had not displayed a pace of technical advance equal to that of magnetic or solid-state storage, with the result that its applications were defined and, in some sense, limited. A question to keep in mind



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




during this presentation would be where new applications and market growth might come from. In contrast, the technology of laser and ink-jet printing seemed almost infinite in its potential applications—particularly on the ink-jet side, where the printing of words on paper had led, somewhat surprisingly, to the spraying of genetic material onto gene chips and to the manufacture of flexible plastic circuitry and organic semiconductors.

The first speaker, Ken Walker, was a veteran of start-ups in Silicon Valley and elsewhere and had until shortly before held the post of Vice President for Technology Strategy at Philips Electronics.

CD/DVD: READERS AND WRITERS

Kenneth E. Walker

Despite being “one of the casualties” of Silicon Valley’s recent downturn, Mr. Walker said he was “still bullish on the future.” He proposed a quick review of the state of the art to begin his talk on developments in optical storage, which he described as an established business that was not so much technology-driven as operationally driven. Pursuing the theme of the migration of technology, he noted that value was moving away from the creation and sale of drives and into the integrated circuits necessary to create drives and read the data, as well as into the optical pickup unit: the combination of a solid-state laser and plastic lensing that reads the optical disc.

While DVD and CD readers had become standard on PCs, certain limits in those devices’ capabilities were starting to be reached. Top-of-the-line CD devices, available from mass-market appliance and CD vendors, were rated at 48X to 52X (48 to 52 times the speed of the original audio compact disc), the equivalent of around 200 km/hour—a speed approaching the reigning physical limit for CDs, since operating at higher speeds would cause the disc to shred within the drive.
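The arithmetic behind the 200 km/hour figure can be checked briefly. The sketch below is illustrative; the 1X linear velocity (roughly 1.2–1.4 m/s) is the published audio-CD standard value, not a figure from the talk.

```python
# Back-of-the-envelope check of the ~200 km/h figure quoted for 48X-52X CD drives.
# Assumption (standard spec value, not from the talk): the original audio CD
# reads data at a constant linear velocity of roughly 1.2-1.4 m/s ("1X").

CLV_1X_M_PER_S = 1.2  # lower bound of the 1X audio-CD linear velocity, in m/s

def disc_surface_speed_kmh(speed_rating: float, clv_1x: float = CLV_1X_M_PER_S) -> float:
    """Linear speed of the disc surface past the pickup at a given X rating."""
    return speed_rating * clv_1x * 3.6  # convert m/s to km/h

for rating in (48, 52):
    print(f"{rating}X -> {disc_surface_speed_kmh(rating):.0f} km/h")
# 48X -> 207 km/h, 52X -> 225 km/h: in line with the "around 200 km/hour" cited above.
```

At the 1.4 m/s end of the spec the same ratings give roughly 240–260 km/h, so the quoted figure is a reasonable lower-end estimate.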
But if physics had placed a wall before the industry, a new set of capabilities had come along in extensions to the rewritable CD-RW referred to as “Mt. Rainier” or CD-MRW. With Windows XP and a CD-MRW drive, Mr. Walker explained, it was no longer necessary to erase everything on the disc in order to add something to it; instead, material could be dragged on and off. Predicting that the optical device’s future would be as “the next floppy,” he remarked that after a long tenure the floppy was dead—PCs were being shipped without them—and the CD was taking its place.

At the same time, however, a battle was taking shape between competing rewrite formats. Drives labeled DVD-RW and drives labeled DVD+RW were being made according to very different standards of rewritability. In DVD-RW, a rerecordable format, a disc could be used a thousand times, but adding anything required erasing it and rerecording. In DVD+RW, which resembled CD-MRW, additions could be made incrementally and selective erasure was possible, with whole segments able to be erased and reused.
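The difference between the two rewrite behaviors can be sketched with a toy model. This is purely illustrative (it is not real packet-writing or UDF code, and the class names are invented here): one disc type must be blanked and fully rerecorded to accept new material, while the other accepts incremental additions and frees individual segments.

```python
# Toy model of the two rewrite behaviors described above. Class names and the
# block-counting scheme are illustrative assumptions, not actual drive firmware.

class SequentialRewriteDisc:
    """DVD-RW-like: adding anything means erasing the disc and rerecording it all."""
    def __init__(self):
        self.files = {}
        self.blocks_written = 0
    def add(self, name, data):
        new_contents = dict(self.files, **{name: data})
        self.files = {}                          # the whole disc is blanked...
        for contents in new_contents.values():   # ...then everything is rerecorded
            self.blocks_written += len(contents)
        self.files = new_contents

class IncrementalRewriteDisc:
    """DVD+RW / CD-MRW-like: material is dragged on and off individually."""
    def __init__(self):
        self.files = {}
        self.blocks_written = 0
    def add(self, name, data):
        self.files[name] = data                  # only the new material is written
        self.blocks_written += len(data)
    def erase(self, name):
        self.files.pop(name, None)               # a single segment is freed for reuse
```

Adding three ten-block files one at a time costs 60 block-writes on the sequential model (10 + 20 + 30) but only 30 on the incremental one, which is the practical difference Mr. Walker was pointing to.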

Finally, the next generation of DVD, based on a blue rather than a red laser, had been shown over the previous two years from a consumer-electronics perspective.

To understand the conditions prevailing in the electronics industry in a way that would further a discussion of the deconstruction of the computer, Mr. Walker claimed, it was imperative to consider the “fight back and forth” that had taken place over the years between consumer electronics and computer electronics. Many features of computer monitors, from size to resolution to aspect ratio, had been shaped by the early use of television screens as monitors; also playing a role was the fact that computer firms were obliged to deal with the same manufacturers that produced TV tubes, since glass handling was the part of the process that required the greatest capital expenditure.

The impact of consumer electronics on optical storage could be seen in laser discs. In the 1970s, when the volume of computers was basically nil and the volume of televisions quite high, a great deal of work went toward creating a video laser player for a long-playing disc designed to rival the LP. Digital technology began to come into consideration around the middle of the decade; the original video discs, even though they were laser based, used analog technology, as did the original audio laser player, or ALP. Philips spent two years looking at an analog long-playing disc; thinking quadraphonics was going to be big, the company was seeking a large-size disc with a capacity of two hours. When it saw, however, that quadraphonics wasn’t going to work, Philips opted for stereo, which could be accommodated on a compact disc. The first public demonstration of an 11.5-cm disc took place in 1979, two years after the idea was adopted. Considering that the development covered most of the period 1970–1979, Mr. Walker stated, change from an R&D perspective was not that fast.
In 1980 Philips went to Japan and approached all the Japanese electronics companies. Sony, the only one that thought there was any real promise in the CD, joined with Philips, and together they created a standard called “Red Book.” When the first CD products came into the market in 1983, the two firms talked it up and managed to convince all the other firms to jump in and convert their catalogues. And because the standard belonged to Philips and Sony—the size of the disc and the name “CD” were protected intellectual property—everyone in Japan had to come to them for a license, and they made a lot of money.

The standard was set for CD audio in 1980 and product reached the market in 1983, the same year that the first data version, the CD-ROM, appeared. The recordable CD for computers, the CD-R, followed in 1993; the CD-RW in 1997; and Mt. Rainier, which allowed the CD to be treated more like a floppy disk, in 2002.

The other Japanese companies’ conclusion from their experience with the CD—that they should never again allow themselves to be held hostage over standards—led to a certain fractiousness at the appearance of the DVD. To avoid being left out in the cold, the firms joined with one another in a consortium, the DVD Forum, to which each contributed intellectual property; license fees from

nonmembers were to be divided among members in proportion to their contributions. So that “everybody got to play,” Mr. Walker recalled, a wide variety of standards, “none of which actually worked the same,” was created: there was a separate standard for video, for data, for rewritable data, for audio, and for recordable and rewritable data. Each was promoted by different companies and different groups, hugely increasing the problems of compatibility. That the different devices were variations of what was all fundamentally digital technology reflected an American view that “did not fit the Japanese view of how consumers use products,” he said. The Japanese firms resisted pressure from U.S. PC makers to treat everything uniformly as data until the end of the 1990s, when they realized that DVDs for PCs had outsold consumer DVDs by a 5-to-1 margin. The DVD Forum, having at last seen the light, in November 2002 released a draft standard for what was being called the “DVD multiformat,” which corresponded to a DVD multidrive that was to read and write all DVD standards.

The rapid pace of change in CD and DVD technology “gets in the way of our being able to do a nice, clear curve of price-performance improvements,” Mr. Walker stated. The decline in price of an audio CD player between its introduction in 1983 and 2003 had been dramatic; the rapid drop in price that followed the advent of the CD-ROM in the late 1980s had occurred simultaneously with an increase in capability. The 12X CD had been in the market less than six months when it was displaced by the 16X, a phenomenon that, considering the price of R&D and of tooling up, made it a challenge for producers to break even. As the technologies advanced, with some superseding others, speed became harder to compare: the CD-R had a dual specification, speed to write versus speed to read, while the CD-RW’s—write/rewrite/read—was tripartite.
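One reason the comparison became so hard is that an “X” rating is relative to each format’s own 1X rate, so the same number means very different throughputs on different media. The base rates below are standard published figures rather than numbers from the talk (CD 1X = 150 KB/s of user data; DVD 1X ≈ 1,385 KB/s).

```python
# Why raw "X" ratings stopped being comparable across device classes: each X is
# a multiple of that format's own 1X data rate. Base rates are standard spec
# values, not figures from the talk.

CD_1X_KB_S = 150.0     # CD-ROM Mode 1 user data rate at 1X
DVD_1X_KB_S = 1385.0   # DVD user data rate at 1X

def throughput_mb_s(fmt: str, rating: float) -> float:
    """User-data throughput in MB/s for a given format and X rating."""
    base = {"CD": CD_1X_KB_S, "DVD": DVD_1X_KB_S}[fmt]
    return rating * base / 1000.0

# A 16X DVD moves data ~3x faster than a 48X CD, even though "48" sounds faster:
print(f"48X CD : {throughput_mb_s('CD', 48):.1f} MB/s")   # 7.2 MB/s
print(f"16X DVD: {throughput_mb_s('DVD', 16):.1f} MB/s")  # 22.2 MB/s

# And a CD-RW's tripartite spec, e.g. "52x/32x/52x" (write/rewrite/read),
# needs unpacking before any single-number comparison is possible:
write_x, rewrite_x, read_x = (int(s.rstrip("x")) for s in "52x/32x/52x".split("/"))
```

The "52x/32x/52x" string is a hypothetical example of the labeling convention, not a specific drive cited in the talk.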
“CD-Rs can read CD-ROMs, CD-RWs can do CD-Rs and CDs,” he remarked. “That’s why it’s difficult to create a single bar or graph that says, ‘This is how one technology has performed versus price and another.’”

Turning to future computing standards for optical storage, Mr. Walker predicted that the CD-RW would replace the floppy at the low end; that rewrite speeds would reach an upper limit of 48X–52X with standard media, although special media might achieve more; and that the DVD+RW—which was supported not only by its creator, Philips, but by Sony, HP-Compaq, Dell, Microsoft, and IBM—would prove the winner over Pioneer’s DVD-RW, even though the DVD Forum was pushing the latter. Sony, meanwhile, was following its usual practice of hedging its bets with a combination device compatible with both DVD+RW and DVD-RW. The rewrite speed limit for the DVD matched that for the CD, around 200 km per hour, but it was designated as only around 16X because the much higher density of DVD data yielded a higher data rate at that same speed.

Another source of technological development was the quest for copy protection on the part of the recording and motion-picture industries. One promising option was the embedding of chips with copy-protection identification in a CD so that it could not be rerecorded by home users. “If you buy a copy of Microsoft

Word, the error-checking card will have to be read by the driver,” Mr. Walker explained, adding: “It will be a mess, but the paranoia that’s coming out of Hollywood is going to drive a lot of things around embedded copy protection.”

The big technological shift, however, would be the advent of the blue laser for DVD, whose shorter wavelength would allow a more tightly focused beam and thus offer superior density; it would operate with a 0.5-micron track spacing, compared to 1.6 microns for a CD and 0.72 micron for a DVD. This would make possible a huge increase in storage capacity over the 4.7 gigabytes that then represented the standard capacity of a single-sided, single-layer DVD. Although up to 22 gigabytes could already be stored on a double-layered, double-sided DVD, the appropriate comparison would be to the 25–27 gigabytes possible using a blue-laser-based platter of the same size. Again the consumer side was driving the computer side: in this case, demand for DVD products compatible with high-definition television.

A fight was also brewing over this technology, however, as Microsoft and Warner Brothers were arguing for better compression using existing DVD media and red-laser devices. According to Mr. Walker, Microsoft wanted to own the codec because, if it was Windows Media-based, Microsoft would get a royalty on every DVD player sold. Warner’s interest was in reselling its catalogue: blue-laser technology was being set up to be inherently recordable as well as readable, but if the winning technology were not recordable, the company would be in a position to resell its entire catalogue to anyone acquiring an HD set. The blue-laser disc could be expected to operate on a single rewrite standard from the start, a consequence of the lessons learned from the DVD experience.
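The jump from 4.7 to roughly 25 gigabytes per layer follows from simple spot-size scaling: the focused spot diameter goes as wavelength divided by the lens’s numerical aperture, so areal density scales roughly as (NA/wavelength)². The optical parameters below are standard published values rather than numbers from the talk (DVD: 650 nm, NA 0.60; blue-laser disc: 405 nm, NA 0.85), and the calculation is a rough sketch, not an exact format derivation.

```python
# Rough scaling behind the 4.7 GB -> ~25 GB per-layer jump. Spot diameter scales
# as wavelength / NA, so areal density scales as (NA / wavelength)^2.
# Parameter values are standard published figures, not from the talk.

DVD_CAPACITY_GB = 4.7  # single-sided, single-layer DVD

def capacity_scale(wl_old_nm, na_old, wl_new_nm, na_new):
    """Areal-density gain from changing laser wavelength and lens NA."""
    return (wl_old_nm / wl_new_nm) ** 2 * (na_new / na_old) ** 2

scale = capacity_scale(650, 0.60, 405, 0.85)
print(f"density gain ~{scale:.1f}x -> ~{DVD_CAPACITY_GB * scale:.0f} GB per layer")
# density gain ~5.2x -> ~24 GB per layer, consistent with the 25-27 GB cited above.
```

Note that wavelength alone (650 nm to 405 nm) buys only about a 2.6x gain; the higher-aperture optics contribute the other factor of two.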
But variation might come in the size of the disc: companies had been looking at “small form-factor optical,” a 3-cm disc using a blue laser in a drive about the size of a matchbook that could accommodate a removable piece of optical storage holding a gigabyte. Makers of advanced phones and digital cameras had shown an interest in the technology, possibly as an alternative to IBM’s Microdrive.

Beyond the blue laser, not much technological development was in store for optical storage’s next decade, in Mr. Walker’s personal view. Recalling that it had taken 20 years to make “simple” red lasers function at their current level, he pointed out that only non-visible light in the form of ultraviolet would offer any significant advantage over the blue laser. The relatively low price that users were willing to pay for optical-storage devices and the difficulty of mass production would combine to limit ultraviolet lasers’ commercial potential. In addition, there were problems inherent in the technology itself: at some wavelengths ultraviolet light is no longer reflected by standard mirrors but burns right through them, complicating the construction of a device.

Returning to Mr. Borrus’s point concerning the shifting of functionality, Mr. Walker ventured that competing technologies were likely to take on some of the roles thought to belong to optical storage. He called Mr. Whitmore’s presentation on magnetic storage “compelling” and noted that he himself used a 1394-based hard drive to back up his home computer systems rather than optical drives—even

Deconstructing the Computer: Report of a Symposium though he owned DVD burners—because it was “just easier and simpler than feeding disks into a system.” He called solid-state, static RAM “great” for use with the USB dongels that had become popular for transferring files and rated as “fascinating” the new magnetic RAM technology: “Like standard RAM, it doesn’t need any energy to maintain its state, but it’s also extremely fast. So you can turn your computer off, walk away, leave it away from power for a week, come back, plug it in, and everything is right where it was before—there’s no wait for the state to be written out of the hard drive” as with some laptops. Pervasive networking, which like optical storage could be used to share files and to move them back and forth, was another source of competition. “If the network is fast enough, and I can get to my music from my hard disk at home through my cell phone, do I need to carry an MP-3 player with me?” he asked. Looking to the future, he stressed that a fight was in the offing over whether red or blue lasers would be used for high definition. DVD and CD reader/writers would be pervasive in mass-produced distribution, as the cost of making mass quantities of discs was in the tenths of a cent. CDs also held value for those who didn’t want to be concerned about electromagnetic pulses taking out their content and those who created content on their computers that they wanted to play on a home stereo, although this advantage might be attenuated with the advance of pervasive networking, whereby everything in someone’s consumer electronics deck would be connected to his or her computer. Continued Asian control of the optical storage business was to be expected. There would be limited R&D beyond the blue-laser technology, with royalty revenue models decreasing and the price steadily eroding. 
The feature set would become more consolidated: “We’re going to end up with a single drive that can read and write CDs and read and write DVDs,” Mr. Walker declared. He saw the questions determining which media consumers use and why as “How big a piece of optical do you want to hand around to somebody?” and “What quality of video are you going to want to play on the TV set?”

Prominent among the lessons learned from the experience of the previous two decades or so was that a shift had taken place in the value equation. At the industry’s inception, it was the drive manufacturer who had the power and collected the profit. But building a drive was no longer challenging, because the availability of subcomponents was such that it had become quite easy to assemble them into a drive. “If you are a drive manufacturer, your focus must be on execution and assembly,” Mr. Walker observed. “And if your focus is on execution and assembly, and not on worrying about the integration, these people start forward-costing.” The cost of a new generation of CD or DVD drive thus no longer carried the premium for a new speed that had earlier been associated with a new technology. As a result, he said, integrated companies pursuing technology innovation were no longer able to recoup their investment in R&D. Value was now coming from the intellectual property: that is, from the chips. Philips, he noted, had helped itself by having its semiconductor group make chips; but the semiconductor

group, in line with its obligation to make money, started selling chips to Philips’s competitors—“and, well, there goes that business.”

In conclusion, Mr. Walker noted the main reasons behind the fact that the optical-storage business had what he called “a very Asian center of gravity.” When the CD came out, he recounted, Japanese companies jumped on the design and manufacture of the mechanisms, while “almost no one” in the United States and only Thomson and Philips in Europe responded. The commitment made by Japanese manufacturers meant that the component makers all set up shop in Japan, with the result that their R&D was located there as well. By the time the PC started to use optical-storage devices, a large number of PCs were being built and designed in Taiwan, where the government encouraged PC manufacturers in their desire to move into the sector. Taiwan’s Industrial Technology Research Institute (ITRI) spent liberally on technology-development programs, enabling the formation of companies like Lite-On and MediaTek; this made for a synergy of location, he noted, as the two firms “happened to be right across the street from each other.” Lite-On subsequently became one of the leading providers of optical drives and devices, MediaTek the leading chip provider in Taiwan. Drive manufacture had been moving to China; with the customers, the PC and consumer-electronics companies, having moved or moving their manufacture of PCs and DVD players to China, the component providers were moving there as well.

Mr. Borrus then introduced as the next speaker Howard Taub, Vice President and Director of the Printing and Imaging Research Center at HP Labs. Dr. Taub, a member of the core group that had managed the invention of thermal ink-jet technology at Hewlett-Packard, held a great number of patents in that and related areas.
LASER AND INK-JET PRINTERS

Howard Taub

Hewlett-Packard Labs

In prefatory remarks, Dr. Taub said he would go beyond the title of his presentation to speak about a third printing technology and about digital publishing, fields he held to be of great interest at that moment. And, embracing Dr. McQueeney’s earlier statement, he asserted that it was no longer possible to talk exclusively about technology; rather, technology, infrastructure, and business issues had to be considered together.

Showing a chart that indicated the actual and projected breakdown of market share among various printer technologies from 1998 through 2006, Dr. Taub noted that only a thin line represented color page printers and opined that, despite appearances, the workplace had not made the transition to color laser printers (see Figure 20). Market share for laser printers, designated as “monochrome page

FIGURE 20 Why focus on ink jet and laser? Shipment projections of various printer technologies. SOURCE: Lyra Research, Inc.

printers,” was fairly substantial, but it was obvious that the ink jet had held or was expected to hold a consistent two-thirds of the total printer market over the entire period of the chart. In addition, ink-jet technology was used most often in multifunction printer peripherals, which as a category had passed monochrome laser printers to move into second place sometime in 2001. Sales of thermal printers and wide-format plotters were not large enough even to show up on the chart, while those of impact printers were visible but dwindling into insignificance. Among a score of ink-jet technologies on the market, there were two clear leaders, piezoelectric and thermal; piezo, around since the middle of the twentieth century, was the less popular of the two.

“Laser printer” was a misleading term because some of these machines wrote using LED arrays rather than lasers. While this class of printer was best characterized as “dry-toner electrophotography”—with electrophotography, or EP, designating the kind of printing—Dr. Taub himself thought of the laser printer as being defined by the fact that it used a toner particle of around 5 microns or more in size. A significantly different technology, although it was also based on electrophotography, “liquid-toner EP” used toner containing particles much smaller than those usually used in a laser printer—of submicron size instead of 5 microns—which afforded capabilities in terms of quality and speed that were hard to achieve with a dry-toner printer. He displayed a chart that located ink-jet, liquid-EP, and laser printing along axes corresponding to the parameters “faster,” “better,” and “cheaper,” which Dr. McQueeney had stated to be de rigueur in any forward-looking assessment of technology (see Figure 21).

FIGURE 21 Printing technologies.

Recalling his first work in ink-jet printing, on IBM’s 6640 in the 1970s, Dr. Taub said that only $5,000 to $10,000 of the value of that 92-character-per-second, $30,000 machine was accounted for by the printer itself, with the rest residing in its massive paper-handling mechanisms. In the 1980s, HP had come out with its first ink-jet printer, the ThinkJet, which had a price of $500 but required special paper and was not letter quality; Dr. Taub had managed the research project at HP Labs that invented the technology. It was not until around 1987 that “the real big winner,” the DeskJet, made its appearance. A black-and-white, plain-paper machine printing 300 dots per inch, it cost $1,000; shortly afterward, the company was able to offer a color version at around the same price. In 2003, $49 would buy a printer similar technologically but of much better quality, able to print out photos that were “almost as good as good-quality Kodak photographs,” he said.

Although he suggested that a corollary to Moore’s Law could be imagined for printers, given the foregoing evidence of quality improvement accompanied by price decrease, Dr. Taub declined to predict whether they would indeed get

cheaper. The printer was sold on a “supplies” business model like that of the razor-blade business. “We call [printers] ‘sockets,’” he said, and “just want to get as many out there as we possibly can and then sell supplies for them.” Some were already given away: there were Lexmark units available for $29, with the cartridge costing more than the printer.

As to the “faster” parameter, an ink-jet printer could be made to go faster simply by adding more nozzles. The meaning of “better” tended to center on resolution, although only some of that improvement had been of real significance, the rest being the result of a competitive “game.” After moving early on from 92 to 300 dots per inch (dpi), HP put out a 600-dpi printer, then increased that to 1,200 dpi. HP stated at that point that higher resolution would not constitute true improvement, but its market share began to drop as competitors went to 2,400 dpi, which it then matched and later bettered at 4,800 dpi. But pushing resolution any further would make “no sense at all,” he said, because what is “really critical about the quality of the print is the size of the spot that you’re putting on the paper—and if you can’t make that smaller, then going to higher and higher resolutions doesn’t mean anything. It’s more marketing than anything else.”

The drop volume had in fact gone down considerably over the years, from 240 picoliters for the ThinkJet through about 100 picoliters for the DeskJet and finally to about 3 or 4 picoliters, a size approaching the point at which the eye can no longer see an individual spot on the page. And photographic printers had been using extra inks to dilute the cyan and magenta such that the “final little spot” had become invisible. “We are pretty much at a point where the quality of the image that you can print is about as good as you’re going to get,” Dr. Taub stated.
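Those picoliter figures translate directly into drop diameters by sphere geometry, which is what connects them to visible spot size. The sketch below is simple geometry on the volumes quoted above; note that the printed dot spreads somewhat larger than the in-flight drop, and the 3.5 pL "modern" entry is an illustrative value within the 3–4 pL range cited.

```python
# Converting the drop volumes quoted above into in-flight drop diameters.
# A sphere of volume V has diameter d = (6V/pi)^(1/3); this is pure geometry,
# and the printed spot ends up somewhat larger than the flying drop.
import math

def drop_diameter_um(picoliters: float) -> float:
    """Diameter in micrometers of a spherical drop of the given volume."""
    v_m3 = picoliters * 1e-15  # 1 pL = 1e-15 cubic meters
    return (6.0 * v_m3 / math.pi) ** (1.0 / 3.0) * 1e6

for name, pl in [("ThinkJet", 240.0), ("DeskJet", 100.0), ("modern", 3.5)]:
    print(f"{name:8s} {pl:6.1f} pL -> {drop_diameter_um(pl):5.1f} um drop")
# 240 pL is a ~77 um sphere; 3-4 pL is roughly 18-20 um, small enough that an
# individual dot becomes hard to resolve by eye at normal viewing distance.
```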
“There’s probably a little bit more you can squeeze out of it, but not very much more.” The notion of “better,” therefore, had gone on to acquire other dimensions, encompassing more features than resolution: connectivity, in the shape of the ability to plug a card into a photographic printer and print pictures out; ease of use; and industrial design.

Comparing the attributes of the main printer technologies, Dr. Taub called the ink jet “cheaper and better,” capable of providing high-quality images and text very inexpensively. Faster and more expensive ink jets did exist—for example, Scitex made a $3.5 million ink jet for printing books and high-speed graphics—but the bulk of ink-jet devices were not at that level of performance. He characterized the laser printer as “faster and cheaper,” while the liquid-toner EP, used more for commercial printing, got the designation “better and faster.” Each thus had its own niche: ink jet, delivering high quality and color at low cost but somewhat slowly, fit well into the home market as well as being the technology of choice for large-format plotters; the laser, leader in the office, was in production printing as well; and liquid EP was the right device for offset-quality commercial printing.

Describing the printers’ technologies, Dr. Taub said the piezo ink-jet printer had a chamber full of ink with a small piezo crystal on it; a voltage pulse could be

applied to the crystal at the rate of 10,000–20,000 times per second, causing a deformation that in turn squeezed the chamber and drove an ink drop out of it. The thermal ink jet worked on a similar principle, but instead of a crystal on the outside of the chamber it had a resistor inside. Applying a pulse of current to the resistor drove the temperature of a thin layer of ink above the resistor up to 400 degrees Celsius, which in a matter of microseconds created a bubble over the resistor that pushed on the ink in the chamber and forced a droplet out. In going up to 400 degrees in a matter of a microsecond, the heat flux from the resistor was comparable to that at the surface of the sun; for that one microsecond, a one-square-meter heating element operating at that flux would draw the full power output of a medium-sized nuclear power plant.

Dr. Taub began his explanation of the laser, or dry-toner EP, printer by describing a drum that was continuously being cleaned as it rotated and then had a uniform layer of charge put down onto it by a charge roller. A laser scanned back and forth across the drum, turning on and off as it did so and discharging the areas it passed over when it was on. In this way it added the information to be printed; if the charges were visible, they would reveal a pattern resembling the text and images to be printed. At a toning station, toner was thrown at the charge patterns, sticking where the drum was charged and not sticking where it was not. The image pattern formed by the toner on the drum was then transferred to the paper, which passed through a fuser roller that, employing heat, fused the image very robustly onto the paper.

The liquid-toner EP printer worked basically like a laser printer, using a laser to create a charge pattern on a photoconductive drum. This pattern was then “developed” using the liquid toners to create a toner image on the photoconductor.
However, while the dry-toner laser printer transferred its toner directly to the paper, the liquid-toner printer transferred the toner to a soft rubber roller, which in turn transferred the toner to the paper. This process, with its intermediate transfer to a rubber roller, was similar to offset printing, in which the ink is transferred to a rubber “blanket” before the final transfer to the paper. As a consequence, the liquid-toner EP printer shared with the offset press the capability of printing on very irregular surfaces.

Moving on to the technologies’ attributes and issues, Dr. Taub characterized ink jet as a simple and highly scalable process. He posited that a one-nozzle ink-jet printer could produce a very inexpensive, very low-performance printing solution; at the high end, $3 million ink-jet printers were capable of printing newspapers. Over this enormous range, printing nozzles could be added to increase speed and color capability as desired. Ink-jet printers costing $100, $1,000, and $20,000 shared the same fundamental technology.

One of the principal challenges facing the technology was improving image durability, with respect to both light-fastness and water-fastness. The former, after a number of years of work, had largely been solved: durability under normal use, represented by a framed document on an office wall, was being quoted at around 17 years, rising to 70 years or more if
the document were placed in an album or otherwise kept out of the light. Water-fastness continued to need work, as in some cases inks would run if they got wet. Another challenge, particularly when a large number of nozzles was involved, was dealing with what happened if a nozzle went out. Redundancy and maintenance systems offered solutions, although the increased complexity that accompanied the move to larger systems could “make things a little Rube-Goldbergish,” Dr. Taub acknowledged. Not only reliability but also drying speed became an issue with more complex systems, because the amount of fluid on the paper that needed to dry grew rapidly.

Laser printers had achieved excellent text quality, and they were starting to provide good image quality as well. High-speed printing; durable, water-fast, and light-fast print; and relatively low operator intervention were also positive attributes. Among the outstanding challenges was gloss: the printed material was shiny, while the surrounding paper was dull. Because in offset printing the printed and nonprinted areas had the same gloss, no one in the trade would have mistaken laser-printed copy for offset copy. In fact, with most dry-powder toner printers the toner layer tended to be 5–15 microns thick, as compared to 1 micron for offset printing; this not only drove up operating cost, it left a very glossy layer no matter what the paper looked like.

Liquid-toner systems afforded the highest text and image quality, and the gloss of the print matched the gloss of the paper because the layers were about 1 micron thick, as in offset printing. Because the toners were suspended in the liquid, processing speed could be pushed beyond the capabilities of dry toner. Dr.
Taub showed sample images from HP dry-toner and liquid-toner printers for the sake of comparison, pointing out that the dry-powder toner produced fuzzier print because of its larger toner particles. He also compared color spots produced by a liquid-toner printer against color spots from an offset press, such spots being a basic tool for judging print quality; the two produced very similar results.

As to problems and challenges, the liquid-toner systems were easier to use than a printing press but more complicated than something like a Xerox DocuTech, and they required more highly trained operators than the latter. Also in question was whether the technology was suitable for the office environment, since it involved solvents and, therefore, containment systems for the solvents. And because it incorporated a cooling system, it was not only power-hungry but could not simply be plugged into a standard outlet, requiring instead a special power source. But besides being capable of quality indistinguishable from that of an offset press, this technology was digital—something, Dr. Taub observed, that “really changes things significantly.”

Looking back a few years, to the late 1990s, at the distribution of work load in the printing market, he cited figures on the order of several hundred billion pages per year each in office printing and office duplication, virtually all of them monochrome; in contrast, the commercial printing and commercial publishing markets reached 3 trillion and 8.3 trillion pages, respectively, with higher-value color accounting for 50 percent
of the former and 90 percent of the latter. HP’s business was “3 or 4 percent of the total number of pages being printed,” he remarked, noting that even if his company had 50 percent of that market and it represented $20 billion annually, it was “still only 3 or 4 percent.” Growth therefore meant looking into other areas of printing, particularly as color, absent from the office printing and duplicating market, commanded higher prices. Rather than waiting for ink-jet technology to develop to the point that the company could enter commercial printing using in-house technology, HP paid around $1 billion for Indigo, an Israeli company that made a liquid-toner printer. This move to hedge its bets had ruffled some feathers at HP, the market leader in both ink-jet and laser printing; the ink-jet division in particular thought itself capable of competing with any technology, including the one being acquired. Dr. Taub said the Indigo Digital Press represented the best of offset litho, thanks to fast, high-quality printing and flexibility with media, and the best of laser technology, being capable of short-run, fast-turnaround printing and, in fact, “everything you could want from a digital press.”

To illustrate the future of his sector, Dr. Taub displayed a magazine created for a children’s party as an example of the custom publication that could be produced on the Indigo press. Each girl’s copy had her own photo on the cover, and half of the magazine was made up of poses of that one girl, the other half being one picture each of the other girls at the party, all taken by a professional photographer engaged for the occasion. A digital press made such a product possible: other than on the Indigo press, the only way it could even have been approximated would have been with prints of the photographs themselves, which would have been extremely expensive.
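Dr. Taub’s market arithmetic can be reproduced roughly from the page volumes he cited. In the sketch below, “several hundred billion” office pages is read as 0.3 trillion apiece for printing and duplication; that reading, and the resulting shares, are assumptions layered on the figures from the talk.

```python
# Rough reconstruction of the late-1990s page-volume arithmetic.
# The two office figures are assumed readings of "several hundred billion";
# the commercial figures are those cited in the talk.

PAGES_TRILLIONS = {
    "office printing":       0.3,   # assumed
    "office duplication":    0.3,   # assumed
    "commercial printing":   3.0,   # cited in the talk
    "commercial publishing": 8.3,   # cited in the talk
}

total = sum(PAGES_TRILLIONS.values())
office = PAGES_TRILLIONS["office printing"] + PAGES_TRILLIONS["office duplication"]

# Even owning half of all office pages would be a small slice of the total:
hp_share = 0.5 * office / total
print(f"total ~ {total:.1f} trillion pages/year")
print(f"half the office market ~ {hp_share:.1%} of all pages")
```

With these assumed office volumes, half the office market comes to roughly 2.5 percent of all pages, the same few-percent range as the “3 or 4 percent” Dr. Taub quoted.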
This capability opened the door to such unprecedented applications as home-delivered commercial magazines and direct-mail advertising whose content was tailored to the interests and preferences of the individual recipient, something that would save time for the reader and money for the sender. Physical inventory could also be reduced: the setup of a print run on an offset press was so costly that there was a tendency to print excess copies in order to avoid the expense of setting up a second run. Reports placed the share of offset-printed material—in the form of books, magazines, marketing literature, and most likely newspapers as well—that was discarded at between 40 and 60 percent, while a significant quantity of the remainder had to be stored in warehouses.

Although admitting its source was unclear, Dr. Taub cited an estimate that by the end of 2006 over 50 percent of marketing documents would be printed digitally. To illustrate the possibilities this would open up, he displayed a brochure from an auto distributor in California that had been delivered to an HP colleague within several days of the latter’s having filled out an information request card and mailed it in. The brochure pictured a car of the model and color that he had indicated he was interested in and provided, in addition to detailed data on the car itself, a comparison of its characteristics with those of a directly competing model.
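The economics behind over-printing can be made concrete with a simple break-even model: offset carries a large fixed setup cost (plates, color alignment) but a low cost per page, while digital has essentially no setup and a higher cost per page. All of the prices below are illustrative assumptions, not figures from the talk.

```python
# Illustrative break-even between offset and digital printing.
# All prices are assumed for illustration only.

def break_even_pages(setup_offset, page_offset, page_digital):
    """Run length above which offset becomes cheaper than digital."""
    if page_digital <= page_offset:
        raise ValueError("digital must cost more per page for a crossover")
    return setup_offset / (page_digital - page_offset)

SETUP = 500.0        # offset setup cost per job, $ (assumed)
OFFSET_PAGE = 0.02   # offset cost per page, $ (assumed)
DIGITAL_PAGE = 0.10  # digital cost per page, $ (assumed)

n = break_even_pages(SETUP, OFFSET_PAGE, DIGITAL_PAGE)
print(f"digital is cheaper below ~{n:,.0f} pages per job")
```

Below the break-even run length digital wins outright; above it, the setup cost is amortized away, which is precisely why offset jobs tended to be padded with extra copies rather than set up a second time.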
The addressee’s name was embedded throughout the brochure’s text, which was customized to the point of including the name and contact information of a local sales representative, so that the brochure appeared to come from a particular dealership even though it was in fact generated as part of a national marketing campaign. In addition, Dr. Taub pointed out, the brochure incorporated a method of measuring the effectiveness of the campaign: the offer of a free oil change for bringing the brochure along when visiting the dealership. “Imagine if, having expressed interest in this particular car, you got something in the mail that looked like it was made for you in response to something that you had inquired about specifically,” he said, adding: “Very powerful.”

To corroborate this observation, he cited a study on direct-mail response rates by Frank Romano and Dave Broudy of the Rochester Institute of Technology showing that using the addressee’s name to personalize a black-and-white brochure increased response 44 percent, from a typical level of around 3 percent to between 4.5 and 5 percent. A similar increase occurred when a full-color brochure was sent in place of a monochrome brochure but neither was personalized. A full-color brochure with the addressee’s name added produced a response rate 135 percent higher than the nonpersonalized monochrome brochure had achieved, and using additional data to customize the brochure so that it resembled the one received by Dr. Taub’s colleague yielded a 500 percent increase in response over the baseline.

Displaying a chart summing up his points, Dr. Taub commented that at the lowest level of sophistication—one-to-many marketing, in which there is no customization—a major advantage of digital publishing was the ability to print on demand (see Figure 22).
This eliminated waste in two ways: by making it economical to print only as many copies as were needed at any particular moment, and by making it possible to update or correct content between the resulting shorter print runs. The next level up, customizing or “versionizing,” was that at which a publication’s content was tailored to a city or dealer in order to obtain a result more relevant to a specific customer set while lowering distribution costs and improving timeliness. The level above that, one-to-one marketing, corresponded to the auto brochure described earlier. At the highest level, that of “event-driven” publications, a dealer could have a personalized brochure addressing an individual customer’s concerns prepared while that customer was inspecting product on the dealer’s premises and present it to the customer at the end of the visit.

Dr. Taub then discussed what it took to keep a press running at all times, a key to profitability. In the case of a conventional Heidelberg press, among the needs were an uninterrupted supply of correct plates, rapid plate changeover, and alignment of the plates color-to-color on the various stations of the press, a time-consuming operation that, he said, was “what you really pay for when you submit a job for an offset press.” With an Indigo press, all that was needed was an uninterrupted stream of correct data after an initial calibration. As many jobs
could be sent through the press as could be lined up one after another, with the first sheet out in principle being usable. “When you finish one job, you just print the next job,” he said, “so this really stops being a technology or press-related event and starts to be basically an information-technology problem.”

FIGURE 22 Opportunities for digital publishing.

Turning to the process of feeding data to the press, Dr. Taub explained how a contemporary direct-selling campaign was developed. Beyond simply printing text or images on paper, the campaign designer needed to integrate customer relationship management (CRM) data and content data into the design of the campaign and its materials. Delivery to customers could be effected in printed form, on a laptop, or via PDA or cell phone, presentation being a major issue in campaign design; flexibility up front would enable delivery through a variety of outputs without the material’s needing to be redesigned. Based on feedback, CRM and campaign-management systems could then be adjusted to increase the effectiveness with which the data were used in the next campaign.

The greatest challenge for digital publishing, and one of its most telling metrics as well, appeared to be cost per page, an issue encompassing reliability, usability, and labor. Digital publishing was “clearly a lot more expensive” than
offset printing, Dr. Taub admitted, adding that for the former to become “palatable” to potential customers its cost had to be driven down “considerably.” In addition, digital publishing would have to offer the same options an offset press offered, such as working with different sizes of paper and providing a variety of finishing capabilities. Yet even more important was Web-efficient work flow, which would allow the press to be in constant operation. Databases would need to be integrated in order to create custom publications efficiently, which meant taking one source of content and generating multiple outputs from it. This was no small chore for large companies with multiple sources of customer data; HP, one such company, had five major databases devoted to customer information. Owing to the length of the path the data would have to follow, security protecting marketing campaigns from the eyes of competitors would become a very important consideration as well. “It’s a little bit like Napster,” he noted. “Once the things become digital, it’s much easier for them to escape, and you need to have a level of security that companies will be comfortable with.”

Finally, the significant investment required to go digital compounded the inertia that tends to stand in the way of change. “We need to help the commercial printers and enterprise customers make the transition,” said Dr. Taub. “They need to understand the benefits of digital” beyond being shown its capabilities, so clear from the marketing techniques he had been discussing. Absolutely necessary was convincing them that the correct measure of digital publishing was value per page rather than cost per page, because “you get so much more impact from a customized page than from the standard offset page.”

Mr. Borrus noted that the personalization techniques described by Dr.
Taub would raise interesting issues for those who were trying to measure productivity increases, as they would be obliged to apply a deflator to printing.
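The Romano–Broudy uplifts can be translated into absolute response rates. The 3 percent baseline is the “typical level” cited in the talk; the labels and the resulting rates below are arithmetic on the cited uplift percentages, not figures from the study itself.

```python
# Absolute response rates implied by the direct-mail uplifts cited from the
# Romano & Broudy (RIT) study. The ~3% baseline is from the talk; everything
# else is arithmetic on the cited uplift percentages.

BASELINE = 0.03  # typical direct-mail response rate

UPLIFTS = {
    "b/w brochure, name added":       0.44,  # +44% over baseline
    "full color, not personalized":   0.44,  # a similar uplift, per the talk
    "full color, name added":         1.35,  # +135% over baseline
    "fully customized (auto mailer)": 5.00,  # +500% over baseline
}

rates = {variant: BASELINE * (1 + uplift) for variant, uplift in UPLIFTS.items()}

for variant, rate in rates.items():
    print(f"{variant:32s} -> {rate:.1%} response")
```

Note that the cited figures do not reconcile exactly: 3 percent plus 44 percent is nearer 4.3 percent than the 4.5–5 percent quoted, suggesting the study’s actual baseline sat slightly above 3 percent. Full customization implies a rate around 18 percent, which is the sense in which a customized page carries far more value than a standard offset page.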