real-world scene. In other words, significant compromises must be made. The compromises made by designers should be based on the best available evidence regarding the relationships between vision system features and objective performance metrics, not on wholly subjective criteria. Although many important lessons concerning these issues can be learned from the many years of work on large-scale simulators, further research is clearly required.
Unanswered questions include: For a defined task domain, how should one trade off spatial resolution against field of view? Is color worth the added expense and loss of spatial resolution? Is a stereoscopic display worth the trouble? If so, what are the appropriate parameters (baselines, convergence angles, convergence/accommodation mismatches) for various types of tasks? Can supernormal cues (e.g., exaggerated depth or contrast) be used to advantage? Can the simulator-induced queasiness that accompanies the use of wide-field-of-view HMDs for some users be minimized or eliminated entirely? To which forms of delay and distortion introduced by the visual display system can users adapt? How are the required visual display system parameters affected within multimodal systems? Can visual display system requirements be relaxed in multimodal display environments? What are the perceptual effects associated with merging displays from different display sources? How do we design a comfortable HMD that integrates visual, auditory, and position-tracking capabilities? These are but a few of the many research issues that impact the practical design of any visual display system for synthetic environments.