Virtual Reality and Augmented Reality in Aircraft Design and Manufacturing
David W. Mizell
The Boeing Company
At Boeing all new aircraft designs, starting with the 777, as well as new derivatives of older aircraft designs, are being specified as three-dimensional solid models. The projects I discuss below belong to a group of several advanced computing and human-computer interface R&D projects going on within Boeing's research and technology groups. These projects all exploit the fact that our products now are being defined both digitally and three-dimensionally.
The focus of our virtual reality (VR) project is visualizing and interacting with aircraft CAD geometry, providing a VR environment almost identical to the interior of the full-scale physical mockups once built for each aircraft during the design phase. We believe that VR alone allows a person not only to visualize a set of CAD representations of parts but also to "physically" interact with them: move parts into and out of their installed positions, reach around obstacles, and so on. Beneficial applications of this capability include the following:
- flight deck design,
- maintainability/accessibility verification,
- assembly planning,
- maintenance training, and
- the creation of maintenance training animations.
The primary goal of our work for the past two years has been to develop and demonstrate what we call the "egocentric human model." By this we
mean a VR capability whereby a participant perceives himself/herself, from a first-person point of view, to be inside the aircraft geometry, "wearing" a graphical human body whose positions and movements closely mimic his/her own. Position/orientation sensors on the participant's limbs, torso, and head provide the necessary information to the computer, enabling it to draw the graphical body (sometimes called an "avatar") in a corresponding position. Real-time collision detection software informs the participant if he/she has bumped into an obstacle. Someday, haptic feedback systems may enable the user actually to feel such a collision. In the meantime, we provide sound cues and make the object change color to notify the user of the collision.
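The notification loop described above can be sketched in a few lines. This is an illustrative sketch only, assuming an axis-aligned bounding-box (AABB) test for collisions; the names and data are hypothetical, not Boeing's actual system.

```python
# Hypothetical sketch of the collision-notification loop: tracked limb
# positions are tested against obstacle bounding boxes each frame, and
# any hit triggers the sound cue and color change described in the text.
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

def check_collisions(hand_pos, obstacles):
    """Return the names of obstacles the tracked hand has penetrated."""
    return [name for name, box in obstacles.items() if box.contains(hand_pos)]

def notify(hits):
    # Stand-ins for the sound cue and color change; a real system would
    # play audio and re-shade the colliding part.
    for name in hits:
        print(f"beep! {name} highlighted")

obstacles = {"hydraulic line": AABB((0, 0, 0), (1, 1, 1))}
notify(check_collisions((0.5, 0.5, 0.5), obstacles))
```

A production system would use hierarchical bounding volumes rather than a flat list, since the obstacle set contains millions of polygons.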
The fundamental problem we face in trying to use aircraft CAD geometry as our virtual environment is one of scale. CAD geometry is orders of magnitude more complex than the scenes usually portrayed in VR systems. Any subset of interest to the aircraft engineers is likely to contain millions of polygons' worth of geometric data, and rendering such data sets in stereo at 25 to 30 frames/second is a daunting challenge. Providing on-the-fly collision detection among such complex geometry is equally daunting. Our approach could best be characterized as "use every trick in the book," because nearly all of them are necessary to handle geometry of this scale. The algorithmic techniques we are trying to combine include
- parallel rendering algorithms,
- upstream occlusion culling,
- object simplification and level-of-detail control, and
- substitution of texture maps for geometry.
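To illustrate one of the techniques listed above, level-of-detail (LOD) control swaps in a coarser model as an object's projected screen size shrinks. The sketch below is a minimal, assumed version of the idea; the pixel thresholds, polygon budgets, and pinhole-camera focal length are made up for illustration.

```python
# Illustrative LOD selection: estimate an object's projected size on
# screen, then pick the cheapest model that still looks acceptable.
import math

LODS = [                 # (max screen coverage in pixels, polygon budget)
    (20, 100),           # far away: crude hull
    (100, 2000),         # mid-range
    (math.inf, 50000),   # close up: full detail
]

def screen_size_px(object_radius_m, distance_m, focal_px=800.0):
    """Approximate projected diameter in pixels under a pinhole camera."""
    return 2.0 * focal_px * object_radius_m / distance_m

def select_lod(object_radius_m, distance_m):
    px = screen_size_px(object_radius_m, distance_m)
    for max_px, polys in LODS:
        if px <= max_px:
            return polys

# A 0.5 m part at 50 m projects to 16 px, so the crude hull suffices.
print(select_lod(0.5, 50.0))   # prints 100
```

Combined with occlusion culling, this keeps the per-frame polygon count within what the renderer can sustain at 25 to 30 frames/second.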
Independently of the VR project, we are working on another technology involving a VR-style head position/orientation tracker; a see-through head-mounted display; and a belt-mounted, battery-operated computer. This combination makes up the hardware platform for our Augmented Reality (AR) system. Since the display is see-through, the AR system can be used to superimpose computer graphics on the surface of a real object the user is viewing. Because we employ a position/orientation tracking system, the computer can change the display whenever the user moves his/her head, making the graphics appear to be fixed on specific coordinates of the real object. Our goal is for workers performing touch-labor manufacturing tasks to use this technology. At every step of a manufacturing or assembly procedure, the diagrams or text a worker needs to perform that step quickly and accurately will appear to him/her as if they were painted on the surface of the workpiece.
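The reason head tracking makes the graphics appear "painted on" the workpiece can be sketched with a frame transform: annotations are stored in workpiece coordinates and re-projected through the current head pose every frame. The sketch below is a simplified 2-D illustration with plain numpy and assumed poses, not the actual AR pipeline.

```python
# A fixed point on the workpiece is mapped into the head/display frame
# each time the head pose changes, so the overlay stays anchored to the
# object rather than to the display.
import numpy as np

def pose(theta, tx, ty):
    """Homogeneous 2-D transform: rotation by theta plus translation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

annotation_obj = np.array([2.0, 0.0, 1.0])   # fixed point on the workpiece

def to_display(head_pose_world, obj_pose_world, p_obj):
    # object frame -> world frame -> head/display frame
    world = obj_pose_world @ p_obj
    return np.linalg.inv(head_pose_world) @ world

obj_pose = pose(0.0, 5.0, 0.0)               # workpiece sits 5 m away
p1 = to_display(pose(0.0, 0.0, 0.0), obj_pose, annotation_obj)
# After the head turns 90 degrees, the same object point lands at a new
# display coordinate, which is exactly what keeps the graphic "stuck"
# to the real object.
p2 = to_display(pose(np.pi / 2, 0.0, 0.0), obj_pose, annotation_obj)
```

The quality of this illusion depends entirely on how accurately and quickly the head pose is measured, which is why the tracker dominates the discussion that follows.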
The critical technical issue for AR is the head position/orientation tracker. Current commercially available trackers are not adequate for factory use.
Typically, they have an accurate range of only about a meter, and they lose accuracy in the presence of metal or radio frequency (RF) energy, both of which are abundant in factories. The ideal tracker for AR use would be
- accurate to 0.01 inch and 0.1 degree over 25+ feet of range;
- able to provide position and orientation measurements at 25 Hz or better, with minimal latency;
- impervious to metal, RF, or acoustic interference;
- lightweight and low power, so that it can be body worn and battery operated.
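The update-rate and latency requirements above matter because tracker delay translates directly into visible misregistration of the overlay. A back-of-the-envelope sketch, with an assumed head-rotation rate that is not from the original text:

```python
# Angular overlay error caused purely by end-to-end latency: a head
# turning at a moderate 50 deg/s, seen through one 25 Hz frame of
# delay, misplaces the graphics by 2 degrees, well beyond the
# 0.1-degree accuracy target listed above.
def registration_error_deg(head_rate_deg_s, latency_s):
    """Overlay error accumulated while the pose estimate is stale."""
    return head_rate_deg_s * latency_s

latency = 1.0 / 25.0   # one frame at the 25 Hz minimum update rate
print(registration_error_deg(50.0, latency))   # prints 2.0
```

This is why the requirement asks for minimal latency on top of the raw update rate: a fast but laggy tracker still produces overlays that swim across the workpiece.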
Our strongest candidate so far is a prototype "videometric" tracker built for us by Honeywell and TriSen, Inc., of Minneapolis. The videometric tracker consists of a head-mounted video camera plus image-processing capability in the wearable computer. The tracker finds fiducial marks (visible spots at known coordinates) on or near the workpiece and computes the user's head position and orientation relative to them.
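The underlying principle, recovering a pose from marks at known coordinates, can be sketched in a few lines. The sketch below is a deliberate simplification with invented numbers: it uses measured ranges to the fiducials (trilateration) rather than the image-plane geometry a real camera-based tracker solves, purely to keep the illustration short.

```python
# Recover a position from known fiducial coordinates by least squares.
# Subtracting the first range equation from the rest linearizes the
# system |p - f_i| = r_i into A p = b.
import numpy as np

def trilaterate(fiducials, ranges):
    """Least-squares position estimate from fiducial ranges."""
    f = np.asarray(fiducials, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (f[1:] - f[0])
    b = (r[0]**2 - r[1:]**2) + np.sum(f[1:]**2, axis=1) - np.sum(f[0]**2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Four fiducials at known workpiece coordinates; ranges are simulated
# from an assumed true head position.
fiducials = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
true_p = np.array([0.2, 0.3, 0.4])
ranges = [np.linalg.norm(true_p - np.array(f)) for f in fiducials]
est = trilaterate(fiducials, ranges)
```

A real videometric tracker solves a richer problem, recovering orientation as well as position from the marks' image-plane locations, but the least-squares structure is the same.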
We are also experimenting with a simpler wearable computer system, without head tracking, for applications where the user needs to be mobile, must keep his/her hands free, and needs to view computer output or enter data while performing his/her job. The classic example in the aircraft industry is consulting a digital maintenance manual hands-free while performing the maintenance operation.