PROTOTYPE TEST SIMULATION
Prototype test simulation by computers is rapidly becoming an important part of the design process in many industries. For example, in the automotive industry, crashworthiness design is increasingly performed by computer simulation rather than with physical prototypes. Computer simulation of automotive crashes offers the promise of both reduced cost (a crash test of a prototype costs $300,000 to $700,000 versus about $100,000 for a simulation) and reduced design time. Similar trends are evident in other industries, for example, the manufacture of glass, tires, and jet engines (to prevent catastrophic failure during bird impact).
Simulation is also being used in structural design, where, historically, the ultimate capacity has been "designed in" through empirical safety factors. For example, in the seismic design of dams, liquid-storage tanks, and nuclear power plants, nonlinear simulation is increasingly used to examine the ultimate capacity and the consequences of partial failures.
In addition, the entire area of weapons effects in the defense community relies significantly on nonlinear simulation. For example, determination of the ultimate capacity of protective structures is often performed by a combination of scale-model tests and computer simulation. Computer simulation has also had a dramatic impact on the design of armor and armor-piercing weapons.
FINITE ELEMENT METHODOLOGY
The major catalyst for this rapid evolution has been the development of efficient finite element methods for nonlinear mechanics. The past two decades have seen the emergence of powerful nonlinear finite element algorithms and rapid advances in the fundamental understanding of nonlinear mechanics and material behavior. In the past, fully integrated finite element methods proved too expensive for practical computation, while underintegrated elements were unstable and produced meaningless results. Further research, however, led to stabilization methods grounded in theories of numerical control and made possible significant improvements in the efficiency of nonlinear computations.
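The idea behind such stabilization can be illustrated with a minimal sketch, not drawn from any particular code (the function name, the viscous form, and the coefficient value are illustrative). A four-node quadrilateral integrated at a single point cannot sense the alternating "hourglass" deformation pattern, so a small artificial resistance is added against exactly that pattern while leaving rigid motions untouched:

```python
import numpy as np

def hourglass_force(v, k=0.05):
    """Viscous hourglass resistance for a one-point-quadrature quad.

    v : (4,) nodal velocities of one component at the element's 4 nodes.
    k : small stabilization coefficient (value chosen for illustration).
    """
    h = np.array([1.0, -1.0, 1.0, -1.0])   # hourglass base vector
    q = h @ v                              # rate of the hourglass mode
    return -k * q * h                      # nodal forces opposing that mode

# A pure hourglass velocity pattern is resisted:
f = hourglass_force(np.array([1.0, -1.0, 1.0, -1.0]))
# ...while a rigid translation produces no spurious force:
f_rigid = hourglass_force(np.ones(4))
```

Because the resisting force is orthogonal to rigid-body and uniform-strain modes, the stabilization controls the instability without stiffening the element's legitimate response.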
These new methods have been critical to the adoption of nonlinear techniques in crashworthiness analysis. The models required are very large and complex and involve many difficult phenomena, such as contact-impact with friction, local buckling, and strain-rate effects. A typical model is shown in Figure 2.1. Such a simulation still requires 20 to 80 hours of supercomputer time; without these techniques, the computer time required for a simulation would be prohibitive.
These advances in nonlinear techniques have depended critically on the advances in continuum mechanics made in the 1950s and 1960s and their adaptation to a form that facilitates rapid computation without violation of basic physical principles. Today, nonlinear computer programs increasingly make use of frame-invariant formulations, which are applicable to arbitrarily large strains, and of advances in the theory of plasticity to describe material behavior in the nonlinear range. These procedures have been further streamlined by algorithmic developments such as the so-called radial return form for plasticity and domain integration for fracture.
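To illustrate why the radial return form is attractive, here is a minimal sketch of the algorithm for von Mises (J2) plasticity with isotropic hardening; the elastic trial stress, when it lies outside the yield surface, is simply scaled back radially onto the surface in deviatoric stress space. Variable names and the interface are illustrative, not taken from any particular program:

```python
import numpy as np

def radial_return(s_trial, sigma_y, mu, H=0.0):
    """Return-map a trial deviatoric stress to the von Mises yield surface.

    s_trial : (3,3) trial deviatoric stress (elastic predictor)
    sigma_y : current yield stress
    mu      : shear modulus
    H       : isotropic hardening modulus (0 = perfect plasticity)
    """
    norm = np.sqrt(np.tensordot(s_trial, s_trial))   # ||s_trial||
    f = norm - np.sqrt(2.0 / 3.0) * sigma_y          # trial yield function
    if f <= 0.0:
        return s_trial, 0.0                          # elastic step: no change
    dgamma = f / (2.0 * mu + (2.0 / 3.0) * H)        # plastic multiplier
    s = (1.0 - 2.0 * mu * dgamma / norm) * s_trial   # radial scaling to surface
    return s, dgamma
```

The appeal of the scheme is that no iteration is needed in this case: the return direction coincides with the trial deviator, so a single closed-form scaling enforces the yield condition exactly.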
Despite these important advances, in many respects the field is in its infancy and the many simulations that engineers want to perform routinely cannot be handled reliably today. Among the major handicaps is the inability to treat strength degradation that occurs near a structure's failure point.
In this regime the governing equations often change type, from hyperbolic to elliptic and vice versa. Without scientifically sound methodologies for treating such behavior, reliable calculations are impossible for many materials; moreover, the computer times required for many simulations to achieve even a modicum of accuracy are prohibitively long. It is evident that bringing simulation times down to the level needed to make them truly useful in the engineering design process will require adaptive methods and implementations on parallel computers.
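A standard one-dimensional illustration of this type change (a textbook argument, not drawn from the passage above) is wave propagation in a rod whose tangent modulus depends on strain:

```latex
\rho\, u_{tt} = \bigl(E_t(u_x)\, u_x\bigr)_x, \qquad c = \sqrt{E_t/\rho}.
```

While the material hardens ($E_t > 0$), the wave speed $c$ is real and the equation is hyperbolic; once strength degradation drives $E_t < 0$, $c$ becomes imaginary and the equation becomes elliptic, so the dynamic initial-value problem is ill posed unless the formulation is regularized.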
Adaptive methods are essential for streamlining the modeling process. Today, separate finite element models must be built for the linear analysis used to assess normal performance and for nonlinear simulation. This duplication often requires 3 to 6 man-months of effort and adds months to the design process. Adaptive methods will enable a single model to serve both purposes, with adaptivity providing the mechanism for allocating elements where they are needed in a particular nonlinear simulation.
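The mechanism can be sketched in one dimension: an error indicator is evaluated element by element, and elements whose indicator exceeds a tolerance are bisected, concentrating resolution where the solution varies sharply. The indicator and tolerance below are deliberately crude and illustrative:

```python
import numpy as np

def adapt(nodes, values, tol=0.1):
    """One pass of h-refinement on a 1-D mesh: bisect each element whose
    solution jump exceeds tol (indicator and tol are illustrative)."""
    new_nodes = [nodes[0]]
    for i in range(len(nodes) - 1):
        # crude error indicator: change in nodal value across the element
        if abs(values[i + 1] - values[i]) > tol:
            new_nodes.append(0.5 * (nodes[i] + nodes[i + 1]))  # bisect
        new_nodes.append(nodes[i + 1])
    return np.array(new_nodes)

# The element with steep variation is bisected; the smooth one is left alone.
x = np.array([0.0, 0.5, 1.0])
u = np.array([0.0, 0.05, 1.0])   # steep change only in the second element
x_refined = adapt(x, u)          # [0.0, 0.5, 0.75, 1.0]
```

In practice the indicator would be a rigorous error estimate and refinement would iterate, but the principle, letting the computed solution dictate where elements are spent, is the same.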
The treatment of failure will require methods that can effectively handle multiple scales and coupled processes. There are three important classes of failure: (1) buckling, (2) fracture, and (3) failure due to strain localization. The last two, in particular, usually involve scales far smaller than that of the test object. Although adaptive methods offer substantial promise, current adaptive methods cannot yet treat these problems adequately. Even in buckling, the latter stages often exhibit severe localization of deformation due to the formation of plastic hinge lines.
These processes are often coupled to heat transfer and even to the kinetics of phase change. This coupling adds a further dimension of complexity, for it often involves disparate time scales. All of these difficulties must be resolved effectively if computer prototype testing is to become a practical tool.
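One common device for handling coupled processes with disparate time scales is subcycling: the fast process is advanced with many small steps for each large step of the slow one. A minimal sketch of the control structure, with function and parameter names chosen purely for illustration:

```python
def coupled_advance(t_end, dt_mech, n_sub, mech_step, thermal_step):
    """Subcycled explicit integration: n_sub mechanical substeps per
    thermal step, for processes with disparate time scales.

    mech_step(dt), thermal_step(dt) : callbacks advancing each field.
    """
    t = 0.0
    while t < t_end - 1e-12:
        for _ in range(n_sub):
            mech_step(dt_mech)           # fast process: many small steps
        thermal_step(n_sub * dt_mech)    # slow process: one large step
        t += n_sub * dt_mech
    return t
```

This keeps each field integrated at a step size matched to its own stability and accuracy limits, rather than forcing the whole coupled system down to the smallest time scale.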
The efficient implementation of nonlinear mechanics algorithms on new computer architectures will require careful coordination between the mechanics modeling processes and the computer architecture. Past experience has shown that compilers simply cannot extract even a small percentage of the computer's capability from an algorithm designed for a von Neumann computer. A careful redesign of the algorithm, based on an analysis of how it can be rearranged in light of the mechanics being simulated, is crucial to the development of effective algorithms and software.
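The kind of rearrangement involved can be illustrated with the element internal-force computation: a scalar per-element loop versus the same arithmetic batched over all elements at once, so that a vector or parallel machine can process them simultaneously. Array shapes and names are illustrative, and NumPy stands in for the vector hardware:

```python
import numpy as np

def internal_force_loop(B, s, vol):
    """Scalar element loop (von Neumann style): one element at a time.

    B   : (nelem, ncomp, ndof) strain-displacement matrices
    s   : (nelem, ncomp)       element stresses
    vol : (nelem,)             element volumes
    """
    f = np.zeros((B.shape[0], B.shape[2]))
    for e in range(B.shape[0]):
        f[e] = vol[e] * (B[e].T @ s[e])
    return f

def internal_force_vectorized(B, s, vol):
    """The same computation batched over all elements in one contraction."""
    return vol[:, None] * np.einsum('eij,ei->ej', B, s)
```

The two routines produce identical results; the difference is that the batched form exposes all elements to the machine at once instead of hiding the parallelism inside a sequential loop, which is exactly the restructuring a compiler cannot be expected to discover on its own.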
The payoffs in achieving these goals will be substantial. If computer simulation of prototype testing can be integrated into the design process, lead times for new designs can be reduced and more effective, more economical designs can be developed. For example, in automobiles, sensors that actuate safety devices must be placed so that they can detect an impact with an object, such as a full-scale frontal crash. Placement is a trial-and-error process that depends on feedback; when that feedback comes from physical prototype tests, the process is slow, and new ideas require considerable time to confirm. Faster simulation methods can therefore dramatically reduce design time.
Similarly, in weapons-effects work, methods that can simulate near-failure regimes will provide more reliable estimates of the performance of weapons and targets in those regimes. Many products that are now routinely tested for ultimate capacity and redesigned several times before introduction to the market could instead rely on prototype test simulation for much of the design process. The insight provided by computer simulation, which can give a much more complete picture of a product's behavior, together with reduced design times, will contribute immensely to U.S. competitiveness in many industries.