The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

CRITERIA FOR LABORATORY EXPERIMENTS USEFUL IN FIELD SITUATIONS¹

Wilson P. Tanner, Jr.
Sensory Intelligence Laboratory
The University of Michigan

1. This work was supported by the U.S. Air Force, Office of Scientific Research, Grant No. AF-AFOSR-367-63.

It is of the utmost importance at the present time to design laboratory experiments useful in describing human behavior so that data relevant to the problems of U.S. space and military efforts may be obtained. If scientific laboratories are to continue to look to the government as a major source of financial support, the problem is critical to scientific progress.

The solution of the problem does not exist in leaning more heavily toward field studies. Neither does it exist in increasing the rate of laboratory experiments as now conducted, for these tend to ignore many of the significant variables. The solution requires a more careful study of the problems, leading to a more nearly precise statement of the interacting variables which need careful examination. Laboratory experiments should, then, be designed to study these variables along with the interactions. The design of laboratory experiments to accomplish these desired objectives may require the development of new experimental techniques and new methods of analysis.

DESCRIPTION OF THE EXPERIMENTAL PROCESS

One might begin by defining the purpose of an experiment as the reduction of uncertainty about a particular phenomenon or set of phenomena. The experiment itself is like an optical instrument designed to look at the phenomenon. The observations of the scientist using the instrument constitute the data. The scientist

is the counterpart of an observer in a visual experiment, and his interpretation of the experimental results is the observer's responses. In other words, when one performs a visual experiment, he is studying processes very similar to those he is performing in conducting the experiment. The same theoretical framework can be applied to the task of evaluating the performance of experimenters as is applied to the evaluation of observers in psychophysical experiments. Both problems can be illustrated by the same block diagram (Fig. 1).

FIG. 1. Block diagram: message ensemble → transmitter → (+ perturbance) → observer.

In the simple psychophysical experiment, the message consists of a set of signals. In a vision experiment, one such set consists of two signals: a light flash with finite energy greater than zero, and a light flash of zero energy. Ideally, a random selection of the members of the set is made, and the selected signal is then transmitted through the channel. As the signal traverses the channel, it is perturbed. In the case of light signals, the energy spreads and random or irrelevant photons from the environment are added. The observer's input, then, is some combination of the transmitted signal and of the perturbances. It is the task of the observer in responding to indicate which of the signals of the ensemble was responsible for that particular input.

The data for such an experiment are summarized in terms of a measure indicative of the average reduction in uncertainty that can be attributed to the observer's responses. In other words, if the observer's response is known, can the selected signal be better stated than when such knowledge is not available? In information theoretic terms, the entropy of the source, minus the conditional entropy of the source given the response, gives the desired information content for the experiment.

The analysis of an experimenter's behavior in terms of the same block diagram leads to surprisingly parallel statements.
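The information measure just described, i.e., source entropy minus the conditional entropy of the source given the response, can be sketched numerically. The joint probabilities below are invented purely for illustration (a yes/no detection task with an observer who is correct 90% of the time); they are not data from the experiments discussed here.

```python
import math

def mutual_information(joint):
    """Information content of an experiment, in bits: H(source) minus
    H(source | response).  `joint` maps (signal, response) -> probability."""
    signals = {s for s, _ in joint}
    responses = {r for _, r in joint}
    p_s = {s: sum(p for (si, _), p in joint.items() if si == s) for s in signals}
    p_r = {r: sum(p for (_, ri), p in joint.items() if ri == r) for r in responses}
    h_source = -sum(p * math.log2(p) for p in p_s.values() if p > 0)
    # H(S | R) = -sum over (s, r) of p(s, r) * log2 p(s | r)
    h_cond = -sum(p * math.log2(p / p_r[r])
                  for (s, r), p in joint.items() if p > 0)
    return h_source - h_cond

# Hypothetical experiment: signal present on half the trials,
# observer responds correctly with probability 0.9.
joint = {("signal", "yes"): 0.45, ("signal", "no"): 0.05,
         ("blank", "yes"): 0.05, ("blank", "no"): 0.45}
print(round(mutual_information(joint), 3))  # → 0.531
```

A perfectly reliable observer would transmit the full 1 bit of source entropy; the 90%-correct observer above reduces uncertainty by only about half a bit per trial.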
The message ensemble is a set of hypotheses, each with an associated probability. The entropy of this set is the uncertainty of knowledge, prior to the experiment, regarding which hypothesis is "true." One of the hypotheses in the set is presumed to be selected by nature for transmission. The transmitter and the channel constitute the experimental design and conduct. The data combine to constitute the input to the observer who, in this case, is the scientist. His response is a scientific publication which, hopefully, leads to a different set of probabilities associated with the hypotheses in the ensemble. The entropy of the latter set is the uncertainty associated with one's knowledge posterior to the experiment. The difference between the a priori entropy and the a posteriori entropy is the information content of the experiment. In other words, how much more is known about the "truth" of the hypotheses after the experiment than before?

Describing the experimental process in terms of the block diagram is essentially a statement of the problem of the design, the execution, and the interpretation of the experiments. The value of a problem statement is determined by its contribution to the solution: How does this statement lead toward a solution? First of all, the experiment is defined as an instrument to convert probabilities associated with hypotheses from one value to another. The conversion indicates a Bayesian procedure. Letting X(E) be a function of the experiment, and P(H_j) be the a priori probability that H_j is the true hypothesis, Bayes' theorem states the a posteriori probability associated with H_j as

    P_{X(E)}(H_j) = P(H_j) P_{H_j}[X(E)] / Σ_i P(H_i) P_{H_i}[X(E)].

Examination of this equation leads to certain obvious statements.

1. Any hypothesis with an associated a priori probability equal to zero will have an a posteriori probability equal to zero.

2. Any hypothesis with an associated a priori probability equal to unity will have an associated a posteriori probability equal to unity.

3. The statement of associated a posteriori probabilities is a function of the statement of the associated a priori probabilities.

If the probabilities are interpreted as degrees of belief (a reasonable interpretation from the information theoretic point of view), examination of the above statements suggests ways in which a scientist introduces his biases into the design, the execution, and the interpretation of experiments. At the outset, the first statement indicates that complete disbelief in an hypothesis eliminates that hypothesis from consideration. For example, early experiments conducted within the framework of the theory of signal detectability were not considered by this author in terms of extrasensory perception, although they were by another scientist. Fortunately, he did not have complete belief in his hypothesis and, a posteriori, as an explanation for the results, he associated a small probability (p < ε) with extrasensory perception.

Another example is that of the experimenter who determines a threshold by having an observer turn a knob until he sees or hears a signal. Built into his design is a credibility of unity associated with the threshold concept. His results are unlikely to question the validity of the concept.

The third statement illustrates the most serious controversy involving the use of Bayes' theorem. How can a set of hypotheses have associated probabilities in the face of a complete lack of knowledge? Perhaps the possible hypotheses cannot even be enumerated. The answer to this dilemma exists in a philosophy of science. As long as one is concerned with a finite set of data, there is an infinite set of possible hypotheses. The probability of identifying the one which is true is zero. Thus, the scientist must be content with the knowledge that the probability of proposing an incorrect hypothesis or theory is unity. Once this attitude is accepted, it is again possible to proceed.

Watanabe (1960) has demonstrated that if the set of hypotheses has erroneous associated probabilities, repeated experiments with the application of Bayes' theorem will nevertheless lead to a convergence on the most likely hypothesis of the set. This theorem is a fortunate result, for without it experiments would be useless.
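Watanabe's convergence result can be illustrated with a small simulation. This is a sketch, not taken from the paper: the two coin-bias hypotheses, the deliberately erroneous prior, and the number of observations are all invented solely to show repeated Bayesian updating overriding a wrong prior.

```python
import random

random.seed(1)

biases = [0.3, 0.7]     # hypothesis set: the coin's bias is 0.3 or 0.7
prior = [0.99, 0.01]    # erroneous a priori probabilities (truth nearly disbelieved)
true_bias = 0.7         # the hypothesis "selected by nature"

post = prior[:]
for _ in range(200):
    x = 1 if random.random() < true_bias else 0        # one observation
    likelihoods = [b if x else 1 - b for b in biases]
    post = [p * l for p, l in zip(post, likelihoods)]  # Bayes' theorem
    total = sum(post)
    post = [p / total for p in post]                   # renormalize

# Despite starting at 0.01, belief in the 0.7 hypothesis dominates:
print(post[1] > 0.99)
```

Note that, exactly as statement 1 above warns, a prior of precisely zero on the 0.7 hypothesis would have made convergence to it impossible no matter how much data arrived.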
If a correct statement were required a priori, this statement would have the same information content as that usually sought as the result of an experiment. If the result could be obtained a priori, there would be no need either for the experiment or for concern with Bayes' theorem.

Watanabe's theorem states that the convergence is to the most likely hypothesis of the set. The set may or may not contain the "true" hypothesis. What, then, is meant by the "most likely" hypothesis? It is that member of the set that is most likely to describe the data. From this point on, the task of an experimenter will be considered that of finding the most likely hypothesis of a set. He is not worried about truth since he knows that this is a fruitless attack. The usefulness of his work, either with

regard to scientific or practical application, depends on the choice of a useful set of hypotheses with which to work.

Further examination indicates the required content of the data. The term on the left of the equation is the a posteriori probability. Contained in the expression on the right are the a priori probabilities and some conditional probabilities. The theorem can be rewritten to express the additional terms as a single operator,

    P_{X(E)}(H_1) = P(H_1) {f_{H_1}[X(E)] / f[X(E)]} / Σ_j P(H_j) {f_{H_j}[X(E)] / f[X(E)]},

where f(X) is a probability density, and the ratio f_H[X(E)] / f[X(E)] is described as a likelihood ratio. Thus, the information content of an experimental result X(E) is contained in a set of numbers which are functions of the hypotheses being tested. If a particular result is equally probable under two hypotheses, it furnishes no information on which to base a choice between the hypotheses. Careful and precise statements of the hypotheses to be tested are essential to efficient experimental design and will point the way to experiments not likely to lead to results equally probable under the various hypotheses.

THE SIZE OF THE EXPERIMENT

In an attempt to determine how incorporation of a priori knowledge influences the size of the experimental task, some calculations have been performed. The following assumptions are involved in the computations.

1. The hypotheses are each orthogonal to the others.
2. The hypotheses are a priori equally likely.
3. Each hypothesis, if true, leads to an observation containing equal energy.
4. One of the hypotheses is "true."

The amount of energy required to lead to a particular level of confidence was determined as a function of the number of alternatives in the set. For the two cases studied (confidence of 0.75 and 0.90), the energy was found to be linear with the logarithm of the number of alternatives.
Since, under the assumptions, the total energy contained in an experimental result is the energy per observation times the number of observations, the size of the experiment required to lead to a particular level of confidence

is linear with the logarithm of the number of hypotheses to be tested (see Fig. 2).

FIG. 2. Number of trials required to achieve a given level of confidence as a function of the number of alternatives; d' = 1 is assumed.

The fact that the size of the experiment needed to develop a particular level of confidence is linear with the logarithm of the number of hypotheses to be tested leads to a consideration of the problem of the statement of the hypotheses to be included in the set. One can begin the process by describing as carefully as possible those hypotheses that appear a priori likely. Ideally, the next step is to define a mathematical space that includes each member of the set as well as all linear combinations of elements of the set. Given such a mathematical space, one should then attempt to describe a new set of basic hypotheses which span the space and are orthogonal to each other. The new set is the basis for the experimental design.

At this point, it should be observed that the problem is gradually being shifted. It is no longer that of choosing one of a finite set of hypotheses. It is rather that of searching for a set of coefficients applying to the orthogonal axes of the space. The coefficients are used to describe a point in a continuous space, this point representing the "most likely" hypothesis of an infinite set. The coefficients are similar to the factor loadings of factor analysis. The dimensionality of the space, however, is determined a priori rather than a posteriori. The coefficients thus are more like those of the Fourier analysis of electrical waveforms.
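The orthogonalization step described above can be sketched with the classical Gram-Schmidt procedure, assuming (purely for illustration, since the paper does not specify a representation) that each candidate hypothesis is coded as a vector; the three vectors below are invented.

```python
# Sketch: build an orthonormal basis spanning a set of hypothesis
# vectors via Gram-Schmidt, dropping linearly dependent members.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v[:]
        for b in basis:
            proj = sum(x * y for x, y in zip(w, b))   # component along b
            w = [x - proj * y for x, y in zip(w, b)]  # remove it
        norm = sum(x * x for x in w) ** 0.5
        if norm > 1e-12:            # dependent hypotheses add no new axis
            basis.append([x / norm for x in w])
    return basis

# Third "hypothesis" is the sum of the first two, so only two axes remain:
hyps = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [2.0, 1.0, 1.0]]
basis = gram_schmidt(hyps)
print(len(basis))  # → 2
```

The dimensionality of the resulting basis, fixed before any data are collected, is exactly the a priori dimensionality referred to in the text.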

Consider the possibility of computing the number of observations necessary to reduce the entropy of a parameter coefficient from one value to another. The terms are defined as follows:

    σ_I² = initial variance,
    σ_O² = variance associated with an observation,
    σ_P² = variance of estimate following the experiment.

Now, treating these variances as representing Gaussian noise and letting N_O equal the number of observations in the experiment, the entropy can be written as

    H(I) = (1/2) log 2πe σ_I² = initial entropy,
    H(E) = (1/2) log 2πe (σ_O²/N_O) = entropy of observation,
    H(P) = (1/2) log 2πe σ_P² = posterior entropy.

Then the reduction of entropy as a result of the experiment is

    R(E) = H(I) − H(P).

By writing σ_I² = σ_O²/N_I as the initial variance representing a number N_I of previous observations, and by writing σ_P² = σ_O²/(N_I + N_O) as the a posteriori variance in terms of a variance dependent on both the observations prior to and during the experiment, then

    R(E) = (1/2) log 2πe (σ_O²/N_I) − (1/2) log 2πe [σ_O²/(N_I + N_O)]
         = (1/2) log {1 + [N_O (σ_I²/σ_O²)]}.

Solving for N_O,

    N_O = σ_O² [(σ_I² − σ_P²)/(σ_I² σ_P²)] = σ_O² [(1/σ_P²) − (1/σ_I²)].

The last equation indicates clearly that the number of observations required of an experiment, if the a posteriori result is intended to be within a previously specified level of confidence, depends on the incorporation of prior knowledge. The greater the prior knowledge, the smaller the experiment required.
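The last equation can be exercised numerically; all variance values below are illustrative, not taken from the text.

```python
# Sketch of the size-of-experiment relation:
#   N_O = sigma_O^2 * (1/sigma_P^2 - 1/sigma_I^2)
def observations_needed(var_obs, var_initial, var_posterior):
    """Observations required to shrink a coefficient estimate's variance
    from var_initial to var_posterior, with per-observation noise
    variance var_obs."""
    return var_obs * (1.0 / var_posterior - 1.0 / var_initial)

# No prior knowledge (infinite initial variance): the full cost is paid.
print(observations_needed(1.0, float("inf"), 0.25))  # → 4.0
# Prior knowledge equivalent to two earlier observations halves the work.
print(observations_needed(1.0, 0.5, 0.25))           # → 2.0
```

The second call makes the text's point concrete: prior knowledge worth N_I past observations is subtracted, observation for observation, from the size of the new experiment.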

The discussion of size of experiment to this point has been entirely in terms of estimating a single coefficient. If one tries to extend this to a set of W orthogonal coefficients, then a bandwidth term is introduced, and each of the entropy terms must then be multiplied by W, as must the size of the experiment.

ILLUSTRATIVE EXPERIMENTS

The Imperfect Memory Problem

An example of an experiment in which the suggested technique was used is one performed by the author (Tanner, 1961). In attempting to explain the shape of the psychometric function for the detection of acoustic sinusoid segments in noise, the human observer was conceived as having an imperfect memory. The problem thus became one of establishing an hypothetical space for describing imperfect memories.

The first step was an examination of the knowledge necessary to a perfect memory. This led to the identification of a set of parameters that would describe a segment of sinusoid in its entirety: amplitude, starting time, duration, frequency, and phase. If the memory is not perfect, it seems reasonable to assume that the imperfection will lead to an error in the recorded values for these five parameters. The error has the same effect on performance as an uncertainty in the specification of the signal. For example, if there is no phase memory, but all other memories are perfect, then the performance is expected to be that of an ideal receiver detecting a signal specified except for phase.

An imperfect memory has the same effect on performance as a signal with uncertain specification. Thus, the measure of an imperfect memory is the degree of uncertainty necessary to account for an observed level of performance. In the experiments, frequency and phase were grouped as a single parameter, starting time and duration as a second parameter, amplitude as a third, and internal noise as a fourth parameter.
These four dimensions were assumed to be independent and to span the space in which the likely hypothesis can be described. Experiments were then performed in order to estimate numbers describing an uncertainty introduced by memory imperfection for each of these parameters.

There was actually a sequence of experiments involved in estimating the parameters. The first of these, being a study of amplitude memory as a function of time, provided both an estimate of

the internal noise and of the uncertainty in amplitude memory. The memory requirements for frequency, phase, starting time, and duration were removed by superimposing the signal to be detected on a pedestal: a segment of sinusoid of the same frequency, phase, starting time, and duration as the signal. This pedestal occurred in each of two intervals in time, with the signal imposed on the pedestal in one of the two intervals. It was the observer's task to state whether the signal was superimposed on the first or the second pedestal.

In the experiment described in the preceding paragraph, the variable was the time between the ending times of the two pedestals. It was assumed that the amplitude of the first pulse was measured and then stored until the second pulse for comparison with the measure of that pulse. With the assumption that the pedestals provided the observer with the frequency, phase, starting time, and duration knowledge, the only uncertainty in this experiment consisted of that introduced by the internal noise to the measures of the amplitude, and a variance added to the first measure as it was stored in the memory. After the data were analyzed for an increase in variance as the time between the intervals was increased, the curve was extrapolated to zero delay to estimate the internal noise.

A second experiment was then performed in which the knowledge of starting time and duration was removed, although that of frequency and of phase was still provided. The signal to be detected was superimposed on a steady sine-wave component added to the noise. The component was of the same frequency and phase as the signal. In order to estimate the uncertainty introduced by the inability to store starting time and duration, it was assumed that the values of the parameters for amplitude and internal noise from the first experiment for each observer still applied.
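The zero-delay extrapolation can be sketched with an ordinary least-squares line; the delay and variance values below are invented for illustration and are not Tanner's data.

```python
# Sketch: fit variance vs. delay with least squares and read the
# intercept as the internal-noise estimate (variance at zero delay).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope

delays_ms = [100, 200, 400, 800]        # time between the two pedestals
variances = [1.25, 1.50, 2.00, 3.00]    # hypothetical measured variances
intercept, slope = fit_line(delays_ms, variances)
print(round(intercept, 6))  # variance extrapolated to zero delay
```

With these invented numbers the variance grows linearly with delay, so the intercept cleanly separates the internal noise from the noise added by amplitude storage.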
In a third experiment, the continuous wave was removed from the noise, and a requirement for memory of frequency and phase was introduced. Thus, all of the memory requirements were now demanded of the observer. A parameter estimating frequency and phase uncertainty was determined from this experiment, again on the assumption that the parameters estimated from the previous experiments still applied.

The uncertainties estimated were these: the internal noise reduced efficiency by about 0.3; the variance enlarged by the amplitude memory added one unit of noise every 400 msec; the 50-msec signal appeared to be fixed within a 75-msec interval;

and the frequency could be described as being defined within an interval of 80 to 100 cycles (Tanner, 1961).

The Problem of Vigilance Behavior

Another example of the proposed design criteria may be found in an analysis of the subject of vigilance behavior. Briefly, the vigilance situation involves tasks in which small, infrequent signals occur at random intervals over long periods of time. There have been a number of experimental studies dealing with varied situations of this general type; several writers have also advanced theoretical formulations with the hope of describing vigilance behavior within a general framework. The kind of effect traditionally observed was a rapid deterioration in correct signal detections during the task period. An early experimental observation was that these detection rates increased as a function of the input signal rate and decreased with the variability of the intersignal interval. Indeed, the addition of "artificial" signals mixed in with the actual ones appeared to be helpful. The following hypotheses have been suggested in explaining these effects:

1. lowered arousal or alertness level due to task monotony,
2. fatigue or accumulation of inhibition over time,
3. low expectancy, and
4. distraction or attention shifts away from the task.

Two recent theoretical approaches are those of Broadbent (1964) and of Jerison and Pickett (1963). Broadbent argues from the point of view of signal detectability theory that perhaps many of the observed vigilance effects are due to criterial shifts over time rather than sensitivity shifts. He noted recently, however, that the data bearing on this question are ambiguous. Jerison and Pickett introduced the concept of "value of observing" in the vigilance experiment. Their construct controls the probability of observing. They suggest that detection failures are attributable to the fact that the observer was not observing at the time the signal occurred.
It seems evident from the data presented that these writers are not theorizing about the same phenomena. The difference in explanations suggests that the experiments involved hypotheses existing in different spaces. The practical problem is that it is highly desirable to describe behavior which might exist in certain field situations. Laboratory experiments designed to meet this goal should, then, fall in the same descriptive space as the practical situations. For example, an explanation requiring the concept of "value of observing" is clearly not in the same

space as some tasks of practical interest where the cost of not observing is prohibitively expensive. Indeed, if it were not, there would be no interest in describing behavior in such situations. Thus, defining characteristics of vigilance tasks are the occurrence of signals, the worthiness of observing these signals, and the uncertainty of the arrival time of the signals. In some of the recent studies investigating the decision-theory type of explanations of vigilance decrements, such characteristics of the tasks were absent. Other constraints may be crucial for several reasons. First, the signals, when they occur, must not be completely discernible to the observer. Second, the nature of the decision rule employed by the observer may be an important function of the expected time distribution of the signals. An important experimental parameter of what should be considered "vigilance" situations is the degree of uncertainty of the observer concerning the starting time and duration of the possible signals. Though the early studies by Mackworth (1950) on vigilance did employ clear signals, i.e., clock-pointer double jumps, it seems obvious that again the interest is not in the analogous field situation. It seems evident that such tasks could be easily automated; hence, one could safely avoid the possibly hypnotic effects of clock-pointer watching.

The general problem of memory in such tasks may be an extremely important one and may serve as a possible descriptive dimension. For example, having available noiseless stored reference parameters of the expected signals conceivably can improve detection performance and, indeed, may serve as a partial explanation of the facilitative effect of artificial signal insertion. The decay of such a memory may explain performance decrements over time. An example of an hypothetical vigilance situation, as defined here, illustrates the advantages of the criteria proposed.
Consider a detection experiment in which an observer must participate for some period of time. The input is noisy, and occasionally, although infrequently, a signal will be in the noise. The observer is allowed only a fixed number of detection responses; his task is to remain solvent until the task time period is over. He may lose his solvency either by spending all of his detection responses before the time period is over, or by failing to detect, and thereby turn off, an incoming signal within some short time interval after its arrival. A considerable monetary reward is the payoff for remaining solvent throughout the task time period. Admittedly, such a situation has some difficulties in theoretical analysis. Certain bounds on efficient per-
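The hypothetical task lends itself to simulation. The sketch below invents every parameter (trial count, signal probability, response budget, hit and false-alarm rates) and collapses the "short time interval after arrival" to a single trial, solely to make the solvency rule concrete.

```python
import random

random.seed(0)

def run_session(n_trials=1000, p_signal=0.01, budget=15,
                p_hit=0.9, p_false_alarm=0.002):
    """One session: stay solvent by detecting every signal promptly
    without exhausting the fixed budget of detection responses."""
    responses = budget
    for _ in range(n_trials):
        signal = random.random() < p_signal
        respond = random.random() < (p_hit if signal else p_false_alarm)
        if respond:
            responses -= 1
            if responses < 0:
                return False   # all detection responses spent: insolvent
        elif signal:
            return False       # failed to turn off an incoming signal
    return True

survived = sum(run_session() for _ in range(200))
print(survived, "of 200 simulated sessions ended solvent")
```

Even this crude sketch shows why the payoff structure matters: solvency depends jointly on the miss rate and on husbanding the response budget, so a criterion shift alone changes the outcome.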

but covered by a flat white top to reflect the desert sun. Dimensions of this vehicle, which were furnished by the Patrol, have been used in the calculations. The terrain background reflectance, from information supplied by the San Diego Museum of Natural History, was estimated to be similar to that measured in other desert locations during field expeditions where the soil and vegetation closely resembled those in the border area. Assumed values of reflectance of terrain, road, vehicle, and dust cloud, together with physical dimensions of the vehicle and an 8-ft-wide road, have been combined with an assumed contrast transmittance of the path of sight of 0.77 in order to derive the probabilities of seeing, as shown in the last column of Table 1. Parenthetically, it might be noted that, had the vehicle been stationary, it would very likely not have been seen; the dust cloud at the time of observation not only added to the positive signal contrast but may have obscured the vehicle's shadow which, without dust, would have tended partially to cancel out the lighter vehicle at the angular size noted. One may conclude that this sighting was entirely credible.

Case II

During a pass over the high Tibetan plateau (ground elevation 16,000 ft), Major Cooper reported seeing, on an east-west road, a dust cloud blown by a "wind out of the south" which he inferred to be a "stiff breeze" from the angle it appeared to assume relative to the ground. At the confluence of the dust cloud and the road, he reported seeing "a light spot" which he interpreted to be a vehicle.

An attempt was made to discover the most likely characteristics of both the road and the terrain, and what is believed to be reasonable data from the Laboratory files were taken for use in the calculations. The probability of seeing the road, as indicated in Table 1, is in excess of 0.99.
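The role of the path of sight in these calculations can be sketched with the standard relation apparent contrast = inherent contrast × contrast transmittance. Only the 0.77 transmittance is taken from the text; the reflectances below are invented, and reflectance is used as a proxy for luminance under uniform illumination.

```python
# Sketch: apparent contrast of a target after atmospheric losses.
def apparent_contrast(target_reflectance, background_reflectance,
                      contrast_transmittance):
    # Inherent (universal) contrast: (B_target - B_background) / B_background
    inherent = ((target_reflectance - background_reflectance)
                / background_reflectance)
    return inherent * contrast_transmittance

# Hypothetical light-topped vehicle against darker desert terrain,
# seen through a path of sight with contrast transmittance 0.77:
print(round(apparent_contrast(0.60, 0.30, 0.77), 2))  # → 0.77
```

The atmosphere thus scales every inherent contrast toward zero, which is why targets darker than their backgrounds can never quite reach the limiting value of -1.0 discussed later in the chapter.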
If one guesses that the vehicle might have been a 2.5-ton truck with a light top, the probability of its detection is 0.50.

Case III

Near some of the Tibetan roads, Major Cooper reported seeing small villages and, occasionally, "squarish houses." "I noted... the wind direction on the ground due to smoke coming out of smokestacks and out of the fireplaces (sic) of houses." Tibetan dwellings in the area of interest are found (in National Geographic Magazine photographs) often to be rather large,

TABLE 1. Probabilities of seeing for the reported sightings. (The tabular values are too garbled in the machine-read text to be reproduced.)

multifamily houses with white (whitewashed?) sides and dark roofs. The lighting which prevailed at the time of the sightings should have caused the sides of these dwellings to be brightly lit, and to form, in consequence, a high positive contrast with the terrain background. Using terrain reflectance values which seem reasonable for the region, it is calculated that a brightly lit building side having a projected area of 138 sq ft in the direction of Major Cooper's line of regard would be seen with a probability of 0.50. This probability increased markedly with size, so that, for example, a vertical wall having twice the area, i.e., 276 sq ft, would generate an optical signal of detection probability greater than 0.90. Some detective work led to the discovery that the smoke issuing from these structures may have produced a positive contrast sufficient to be seen with a probability of 0.50, for the usual local fuel in these timberless regions is yak dung, which yields a dense, light smoke. (It has been suggested that this fact may have led to the traditional expression used by Tibetan housewives: "Oh my baking yak!") In any case, it is believed the sightings are reasonable.

Case IV

In what is believed to be part of western China, Major Cooper reported seeing a railroad track running in a northeast-southwest direction. The track was seen as darker than the terrain, and at one point on it he reported "an extended target, lighter than the track" with a plume of white (smoke or steam) at its northeastern end. This he interpreted to be a train proceeding in a northeasterly direction. He stated that he believed the wind direction to have been southerly, owing to the angle formed between the white plume and the track.

Under the assumed conditions it is believed that the roadbed should have been visible with a probability of 0.90. The white plume of steam or smoke under the same conditions would almost certainly have been detected (probability greater than 0.99).
There were other fascinating sightings reported by Major Cooper, such as a boat and its wake on a river in Tibet, nightside observations of cities and villages, lightning, and his remarks concerning the apparent color of terrestrial features from orbital altitude. Only in the few cases outlined above, however, did making the assumptions required for visibility calculations seem at all justified.

At this point, a few comments can be ventured which may help in understanding why many people greeted Major Cooper's reported

sightings with skepticism, or felt obliged to ascribe his observations to one or another extrinsic cause. The term "visual acuity" refers to a variety of discriminations of which an observer is capable. In all cases, it relates to the detection of a spatial difference or discontinuity, and the subject is tested to find the smallest such difference he can detect. This value, generally expressed in terms of the subtended angle of the spatial element or its reciprocal, is taken as a measure of the visual acuity. A wide variety of test objects has been used in the investigation of this function, and the numerical results are widely disparate and depend on the nature of the visual task involved.

Simplest of such tests, which are referred to as tests of the "minimum visible," involve the detection of the presence of an object, such as a point or a line. Somewhat more complicated are those tests in which the objects contain some spatial discontinuity within themselves, such as a pair of small targets or a broken ring, in which the "twoness" of the points or the location of the gap must be discriminated. These tests are referred to as measures of the "minimum separable." Still other tests involve higher-order discriminations, such as form recognition, of which the ordinary clinical wall chart of Snellen, requiring the recognition of letters, is typical. They are called measures of the "minimum cognizable."

It is evident that the last-named measures of acuity are most often used in medical practice, and that the numerical values resulting from such tests are most familiar to the majority of the population. Since the Snellen charts are based upon the notion that 1 min of arc is required for the perception of form (based upon a statement of Hooke, quoted by Robert Smith in 1738), it is firmly implanted in the popular mind that 1 min of arc represents the value of best acuity.
After all, is it not often said that 20/20 scored on the Snellen test (from the line on which the letter stroke width subtends 1 min of arc) means "perfect vision"? Major Cooper's Snellen acuity happens to be 20/12, or 0.60 min of arc, although, as is indicated below, this value is merely suggestive of his superior vision and does not represent a limiting value of visual resolution. Measures of acuity other than the conventional clinical wall charts yield quite different values, and, generally speaking, the simpler the test the more "acute" vision becomes. Only two studies are cited here, although there are dozens in the experimental literature. These two have been chosen because their test objects are more closely analogous to the real objects sighted during the MA-9 flight.
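The arithmetic relating Snellen notation to subtended angle is simple: at 20/20 the letter stroke subtends 1 min of arc, and the angle scales with the ratio of the two numbers. The following is a minimal illustrative sketch; the function name is ours, not part of any standard library:

```python
def snellen_to_arcmin(test_distance: float, reference_distance: float) -> float:
    """Convert a Snellen fraction (test/reference) to the angle, in
    minutes of arc, subtended by the letter stroke width.

    At 20/20 the stroke subtends exactly 1 arc min; better-than-normal
    acuity yields a proportionally smaller angle.
    """
    return reference_distance / test_distance

print(snellen_to_arcmin(20, 20))  # 20/20 vision -> 1.0 arc min
print(snellen_to_arcmin(20, 12))  # Major Cooper's 20/12 -> 0.6 arc min
```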

The first step is to summarize the data of Hecht and Mintz, who determined the minimum angular diameter required for a long wire to be seen against a uniformly luminous background. The subtended angle of the wire, which was seen as a dark silhouette (contrast = -1.0), was found to decrease with increasing field luminance, reaching its limiting asymptote at 0.007 arc min. These data were taken from a single observer (Hecht), aged 45 years, and it is probable that Major Cooper, similarly tested, would better this result by a palpable factor. While the terrain backgrounds against which roads, rivers, and railroad tracks were seen were probably not as uniformly bright as those used in the experiments, these data are still the most closely applicable to the visibility of such earth features. One variety of visual acuity comes from tests in which the observer is required to detect the presence of a discontinuity in an extended line. This measure, called "vernier acuity" from its resemblance to the visual task required in the reading of vernier instrument scales, is analogous to the situation in which an extended line is suddenly displaced by some small angular amount. A hypothetical example might be the case where a truck and its shadow combine to produce a pair of such apparent displacements. Experiments have shown vernier acuity values in the range of 1 sec of arc, or about 0.017 arc min. Both of the studies referred to concerned targets of essentially -1.0 contrast, the lower limit for targets darker than their backgrounds. Targets which are darker than their terrain backgrounds may approach this value but, owing to contrast losses suffered because of the presence of the atmosphere, will always be of lesser contrast and concomitantly reduced discriminability. The quantitative features of this situation may be calculated in order to arrive at visibility estimates.
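To see how such thresholds bear on orbital sightings, one can compute the angle subtended by a long, narrow feature. The road width and slant range below are assumed round numbers chosen for illustration, not values taken from the MA-9 mission:

```python
import math

def subtended_angle_arcmin(width_m: float, range_m: float) -> float:
    """Angle in minutes of arc subtended by an object of the given
    width viewed perpendicularly at the given range."""
    return math.degrees(math.atan2(width_m, range_m)) * 60.0

# Assumed example: a 10-m-wide road viewed from a 160-km slant range.
angle = subtended_angle_arcmin(10.0, 160_000.0)
print(f"{angle:.3f} arc min")  # roughly 0.215 arc min
```

The result sits well above the 0.007 arc min asymptote found by Hecht and Mintz for a high-contrast line, which is why a long dark road could plausibly be detected even though its width subtends far less than 1 arc min.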
When targets are brighter than their effective backgrounds, however, no upper limit on contrast is imposed, and it is common to see angularly tiny objects (such as stars, distant lights, sun glints, and the like) provided only that sufficient light from these objects reaches the eye. The light-colored vehicles reported by Major Cooper may be a case in point. A final point should be made in regard to the use of laboratory data in predicting the performance of an observer in a real-life situation. By and large, the numerical results of these experiments are estimates based upon large numbers of observations, and almost always refer to that value of angle that is necessary for discrimination to be successful one-half of the time. There

are statistical considerations, which need not be gone into here, that make this a desired value. It must be emphasized, however, that the numbers so derived represent only a single point on a continuum: there are larger visual angles which will result in greater certainty of seeing, and smaller ones which yield lower probabilities of seeing. That is to say, smaller targets than those indicated will occasionally be seen, albeit less frequently. This fact, together with the likelihood that Major Cooper is a superior observer and the unquestionable fact that he is highly experienced in high-altitude observation, makes it very probable that estimates based upon laboratory data are conservative indeed. In sum, it is concluded that the terrestrial objects reported by Major Cooper from the Faith 7 capsule could, in fact, have been seen under the conditions assumed to have prevailed during the MA-9 mission. It is not necessary to invoke any exotic environmental or psychological factors in order to account for these sightings. Finally, reconstructing the event merely indicates the possibility of the sightings, and in no wise proves them to have been made. An opportunity to perform controlled experiments during future space flights is, therefore, anticipated with great enthusiasm. The first of these is described by Dr. Duntley elsewhere in these Proceedings.
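The continuum described above is what vision science calls a psychometric function: detection probability rising smoothly with target size through the 50% threshold. The logistic form and parameter values below are our own illustrative assumptions, not a model fitted to any data discussed here:

```python
import math

def probability_of_seeing(angle_arcmin: float,
                          threshold_arcmin: float = 0.007,
                          steepness: float = 400.0) -> float:
    """Illustrative logistic psychometric function: probability of
    detecting a target as a function of its angular size. By
    construction the probability is exactly 0.5 at the threshold."""
    return 1.0 / (1.0 + math.exp(-steepness * (angle_arcmin - threshold_arcmin)))

print(probability_of_seeing(0.007))         # at threshold -> 0.5
print(probability_of_seeing(0.020) > 0.9)   # larger angle: near-certain seeing
print(probability_of_seeing(0.003) > 0.0)   # smaller angle: still sometimes seen
```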

GEMINI IN-FLIGHT VISUAL-ACUITY EXPERIMENT

S. Q. Duntley
Scripps Institution of Oceanography
University of California

As a result of astronaut Gordon Cooper's reports of sighting small objects on the ground from Mercury Flight MA-9, there is considerable operational and scientific interest in an experiment which will test the existing methods of predicting the visual capabilities of observers in space. It is hoped to determine, under carefully documented conditions, the effects of prolonged weightlessness, 5-psi oxygen breathing, and other environmental conditions peculiar to space flight on the astronaut's visual-performance capabilities as a function of time. It is intended to obtain information by measurements prior to flight on the visual capabilities of the astronauts who will be involved in the seven-day or longer missions. They will also be trained in the tasks which they will have to perform in flight. The astronauts' performance in two visual tasks will then be measured in flight as the mission progresses. The astronauts involved in missions GT-5, GT-6, and/or GT-7 will be required to measure their own visual acuity during the mission with the aid of an in-flight vision tester, which will be provided by the Visibility Laboratory of the University of California, San Diego. This task will involve the use of the tester by each man once a day throughout the flight. He will report the result of his test to the ground each day. In addition, during orbits which pass within range of a prepared ground target area, the astronaut in the right-hand seat will be asked to determine the orientation of each of approximately 12 rectangular targets which will be arranged in a line from west to east approximately 10 nautical miles long. The astronaut in the left-hand seat will orient the spacecraft so that the right-hand astronaut will have the optimum view of the target area. The astronauts will be

familiar with the location of the target site and its general configuration, and a suitable method will be provided for locating the target area. All targets will be above detection threshold but will bracket the astronauts' ability to determine their orientation. The observing astronaut will call out the orientation of the targets, and his answers will be conveyed to the ground by radio. Depending on the results of the experiment, the size of the ground targets may be changed between days of the mission to ensure that the proper range of target size and contrast is presented. The optical condition of the window being used by the observer will be monitored continuously throughout the observing period (approximately 2 minutes) to determine the amount of earthlight being scattered by the window. This is necessary in order to obtain quantitative information on the astronauts' performance, as the apparent contrast of targets will depend on the manner in which the contrast is degraded by passage through the window. The necessary information will be obtained by an in-flight photometer which will be mounted on the 16-mm camera bracket in the right-hand corner of the right-hand window. This photometer will be aligned with a small, circular light trap which will be mounted outside the window on the hatch immediately in front of the photometer, about 9 inches in front of the window. The output from this photometer will be telemetered through the high-level dumped telemetry system. In order to determine whether the scattering from the window is uniform and, if not, what the degree of nonuniformity is, the spacecraft will be rolled so that the right-hand observer is looking into black sky while sunlight is obliquely illuminating the window. The astronaut will then remove the photometer from the 16-mm camera bracket and scan the window manually, using the black sky as his light trap.
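The reason window scatter matters can be stated compactly: light scattered into the line of sight adds a veiling luminance to both target and background, which lowers the apparent contrast. The sketch below uses the standard veiling-luminance relation as an illustration; it is our simplification, not the experiment's actual data-reduction procedure, and the numerical values are assumed:

```python
def apparent_contrast(inherent_contrast: float,
                      background_luminance: float,
                      veiling_luminance: float) -> float:
    """With contrast defined as C = (L_target - L_background) / L_background,
    adding a veiling luminance L_v (e.g., earthlight scattered by the
    window) equally to target and background gives:
        C' = C * L_b / (L_b + L_v)
    """
    return inherent_contrast * background_luminance / (
        background_luminance + veiling_luminance)

# Assumed example: a -1.0 contrast road against a background of 1000
# luminance units, with window scatter adding a 250-unit veil.
print(apparent_contrast(-1.0, 1000.0, 250.0))  # -0.8
```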
The output from the telephotometer will again be telemetered during this operation, and the telemetered information will be time-correlated with the voice record which he makes during this task. A meter on the rear of the telephotometer will permit the astronaut to make his own determinations of the scattering from the window, or to measure the luminance of any other target of opportunity that may interest him. The experiment will be performed only on a seven-day or longer mission, as its purpose is to determine the "longitudinal" effects of the spacecraft environment. It will be necessary, as a very minimum, to perform the ground-observation portion of the experiment near the start of the mission and near the end. It is expected that this observation will actually be made on each pass within range of the target area. A Visibility

Laboratory instrumented trailer-van will be at the target site during the mission to document the light and atmospheric conditions at the targets. An Air Force C-130, instrumented by the Visibility Laboratory, will fly over the target area at the time of the orbits used for sighting to document the pertinent optical properties of the atmosphere as a function of altitude. All of this information will be used to determine the nature of the optical signal available to the astronaut, and the Laboratory will then correlate this with his visual performance. A National Aeronautics and Space Administration van will be outfitted by the Visibility Laboratory and set up in Houston to measure the visual capabilities of the astronauts and to train them in the use of the in-flight vision tester. This will require 8 to 12 two-hour sessions for each of the astronauts who may be assigned to the mission. As this training may occur six months prior to flight, brief refresher training will be given to the astronauts within two to four weeks prior to flight. In addition to the training at Houston, the astronauts will be flown in a C-130 over the target area to familiarize them with its appearance and with the location of permanent landmarks. A scale model of the target will be laid out in this area and will be viewed by the astronauts through open hatches or ports from an altitude of 20,000 feet or less. The in-flight vision tester will be completely self-contained and will require no interface with the spacecraft other than stowage. It will be used by both astronauts once each day. The astronaut in the right seat will use it on the orbit prior to his first orbit over the target for that day and will report his results to the ground on passage over a suitable communication site in the United States. The device will be binocular, with an adjustable interpupillary distance pre-determined for each astronaut, and will be held by means of a biteboard inserted in the astronaut's mouth.
Each astronaut will have a biteboard prepared for him, which will then properly position the vision tester. The astronaut will rotate a knob on the tester to a series of detented stops which will align targets in the field of view of the instrument. The astronaut will make a binary-type decision, i.e., yes-no, vertical-horizontal, or a-b, and will note his answer on a small card by punching the knob used to rotate the drum, thereby causing a pin to puncture the card if he determines "a." If he determines "b," he will not puncture the card but will rotate the drum to the next position. The removable card will contain the results of the vision test, these results to be read by the astronaut either into his tape recorder or directly by radio link to the ground.

FLASH BLINDNESS

John L. Brown, Chairman
