Modeling Cost and Performance for Military Enlistment: Report of a Workshop

Opening Remarks: The Nexus Between Science and Policy

W.S. Sellman
Frequently, when people ask what I do, I tell them I live in the shadows between science and policy. My office, the Directorate for Accession Policy, is involved directly with public policy. In fact, we may be one of the few public policy offices within the Department of Defense (DoD). This is because the people with whom we deal—young men and women—are civilians. They are considering military service, but, at the time they make this decision, they are still civilians.
All other offices within the DoD with military personnel management responsibilities deal with Service members—people who already have entered service and taken the oath of appointment or enlistment. The rules that pertain to the two populations—civilians and military personnel—are fundamentally different. Because we deal with public policy and civilian youth, we have perspectives different from military personnel managers. Since many of us in Accession Policy are psychologists and we are involved with activities such as personnel testing, selection, and classification, we are sensitive to and attempt to comply with the procedures and rules established by our professional organizations. My personal philosophy is that you cannot set good policy without analysis, without data. Consequently, we attempt to define the various situations and issues we encounter so we can collect data that inform policies—hence, life in the shadows between science and policy.
One of the criticisms frequently directed at social science is that we spend a lot of time and money on research for which everybody already
knows the answers and nobody wants. I believe the Joint Service Job Performance Measurement/Enlistment Standards (JPM) Project is a good example of the linkage between science and policy. It also illustrates how research can be conducted and then implemented in a way that actually changes practices and procedures.

JOB PERFORMANCE MEASUREMENT AND ENLISTMENT STANDARDS

The JPM Project had its origins in the misnorming of the DoD enlistment test. The version of the Armed Services Vocational Aptitude Battery (ASVAB) in use from 1976 to 1980 was miscalibrated. As a result, the Services enlisted hundreds of thousands of people who would not have met the enlistment standards had the test been measuring ability accurately. When the department informed Congress about the misnorming, Congress was very concerned; however, there were two positive outcomes to this particular announcement.

The first was that Congress encouraged the department to conduct the Profile of American Youth study, in which we administered the ASVAB to a nationally representative sample of young people to develop contemporary norms. Until that time, the norms in use tracked back to the 1944 mobilization population. Today, after 14 years, the 1980 Profile norms are starting to get old. We now are involved in planning new norming studies, because never again can unqualified personnel be allowed to enter service because of a flaw in our testing system.

The second positive outcome of the miscalibration was creation of the JPM Project. Congress was surprised to learn that we validated the ASVAB and enlistment standards against performance in training. Consequently, Congress told us to link enlistment standards to job performance. With this mandate, we initiated the JPM Project in summer 1980. There were some early studies that considered the cost trade-offs between aptitude and performance on job knowledge tests.
As we gained momentum, we established a Joint Service working group and asked the Service personnel research laboratories to undertake projects to measure hands-on job performance. Ultimately, we sponsored the Committee on the Performance of Military Personnel to provide state-of-the-art scientific oversight.

When we started the job performance measurement research, recruit quality was low, in the aftermath of the ASVAB misnorming. In 1980, more than 50 percent of all Army recruits were in AFQT Category IV (percentiles 10 to 30), and only about 55 percent of new Army recruits were high school graduates. The department responded well to this personnel crisis, and with the change in administration, recruiting and advertising resources increased and we began to improve recruit quality. By the mid-1980s, we were recruiting 90 percent high school graduates, with the percentage of new recruits scoring in Category IV falling to about 5 percent.

RECRUIT QUALITY: ISSUE OF SCIENCE OR POLICY

As the job performance measurement research evolved, we developed hands-on performance tests for about 30 occupations, which covered between 25 and 30 percent of all enlisted personnel. This was not done without enormous effort on the part of the Services and a lot of money.

When we started the project, we had two basic objectives. The first was to learn: Can we actually measure job performance? For years, industrial psychologists had contended that performance was the ultimate criterion for validating selection tests. There always had been reasons why people could not measure performance, and basically it came down to cost. Measuring job performance is a very expensive proposition. With the support of Congress and the department's effort to recover from the embarrassing misnorming episode, the money for developing hands-on performance tests became available.

The second objective was: If we can measure performance, can we develop procedures for linking enlistment standards to that performance? Because the Office of the Secretary of Defense (OSD) was involved in this effort, the Services were skeptical about our motivation. I said then, as I say today, OSD does not intend to set Service enlistment standards. We will not define them, nor will we enforce them. Rather, through a cooperative effort, we will develop procedures that will provide the Services a more scientific basis for establishing their standards.

Throughout the 1980s, recruit quality continued to improve, and it became clear that the kind of personnel management problems we experienced at the beginning of the decade were rapidly evolving into entirely new challenges.
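The AFQT percentile bands cited in these remarks (Category IV at percentiles 10 to 30; Categories I-IIIA at percentiles 50 to 99) can be sketched as a simple lookup. This is an illustrative helper, not anything from the JPM Project itself; the boundaries for Categories I, II, and IIIB below follow the standard published AFQT bands and should be treated as assumptions.

```python
def afqt_category(percentile: int) -> str:
    """Map an AFQT percentile score (1-99) to its category label.

    Categories IV (10-30) and the I-IIIA grouping (50-99) are stated in
    the text; the remaining cut points are the standard published bands
    and are assumed here for completeness.
    """
    if not 1 <= percentile <= 99:
        raise ValueError("AFQT percentiles run from 1 to 99")
    if percentile >= 93:
        return "I"
    if percentile >= 65:
        return "II"
    if percentile >= 50:
        return "IIIA"
    if percentile >= 31:
        return "IIIB"
    if percentile >= 10:
        return "IV"
    return "V"


def is_high_quality(percentile: int) -> bool:
    """'High quality' in the remarks means Categories I-IIIA (percentiles 50-99)."""
    return percentile >= 50
```

Under this mapping, the 1980 figure of "more than 50 percent of Army recruits in Category IV" refers to scores in the 10-30 percentile band.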
Because the department's recruiting budget had reached astronomical levels—about $2.2 billion—by 1985 the question became: How much quality is enough? As a result, there were many questions from members of Congress about why the Services needed this level of quality, and why we should spend so much money to attain it.

In 1985, the department submitted a report to the House and Senate committees on Armed Services establishing military recruit quality requirements for the rest of the 1980s, and we were virtually defenseless to justify those requirements and the associated budgets. We could say that smart people perform better than less-smart people. We also could say that high school graduates have more perseverance than non-high school graduates and stay in service longer. But if the questions were: What is the difference between having 90 and 95 percent high school graduates, or between 60 and 70 percent of recruits scoring in AFQT Categories I–IIIA (percentiles 50–99)?—we could not answer. Thus, the JPM Project began to take a different shape
and the policy question was no longer: How much quality is enough? Rather, the question became: How much quality can we afford?

RECRUIT QUALITY, PERFORMANCE, AND COST

By the early 1990s, we moved into a new phase of the project, which was to use job performance information in enlistment planning. We began developing models that allowed us to link recruiting resources, quality, and job performance. Once we had the models, if Congress asked, "What happens if we cut your budget by 10 percent?" we could respond, "Quality would go down by x percent and performance would go down by y percent." We also were in a better position to defend our budget requests.

As an aside, you have read in the press about Secretary of Defense Aspin's recent initiatives on readiness. Within OSD, readiness has become an important issue. Somehow, we must organize ourselves so we can measure readiness, and Mr. Aspin can counter charges that, with the serious cuts both in the size of the force and the Defense budget, the military again is like the hollow forces of the late 1970s.

You also have seen in the press recent articles about declines in recruit quality, with the usual alarmist statements by senior officials, particularly within the Army, about the fragility of the All-Volunteer Force. With all due respect to our Army brethren, this is congressional testimony time, and it is useful to make statements that will defend the budget. The truth is that recruit quality is down somewhat. However, if you consider recruit quality levels for the 1980s, at which time the Services, including the Army, said the force was excellent, the quality that we have recruited thus far in fiscal 1993 is still considerably above the average that we experienced throughout that decade.
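The budget-to-quality-to-performance linkage described at the start of this section can be sketched as a chain of elasticities: a percentage change in the recruiting budget shifts recruit quality, which in turn shifts predicted job performance. This is a toy illustration of that logic, not the actual JPM model; the two elasticity coefficients are hypothetical placeholders chosen only to make the example concrete.

```python
# Assumed, illustrative coefficients -- NOT estimates from the JPM model.
QUALITY_ELASTICITY = 0.6      # % change in recruit quality per % change in budget
PERFORMANCE_ELASTICITY = 0.4  # % change in predicted performance per % change in quality


def tradeoff(budget_change_pct: float) -> tuple[float, float]:
    """Return (quality change %, performance change %) for a budget change %.

    Chains the two assumed elasticities: budget -> quality -> performance.
    """
    quality_change = QUALITY_ELASTICITY * budget_change_pct
    performance_change = PERFORMANCE_ELASTICITY * quality_change
    return quality_change, performance_change


# e.g., tradeoff(-10) models a 10 percent budget cut: roughly a 6 percent
# drop in quality and a 2.4 percent drop in predicted performance under
# these placeholder coefficients.
```

With calibrated coefficients in place of the placeholders, the same structure supports the budget-defense use described above: given a proposed cut, report the implied quality and performance declines.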
In looking at predicted performance levels using the job performance model, we conclude that a floor of 90 percent recruits who are high school graduates and 60 percent scoring in AFQT Categories I–IIIA are benchmarks to which we should aspire. The levels of recruit quality that we are experiencing today still exceed those levels. What you are seeing in the press, both from senior civilian leadership and from the military, is the seasonal dance we do with Congress, but you should not be unduly troubled that recruit quality is really on a downward spiral. Our latest statistics show that recruit quality is rebounding, even for the Army. In May 1993, the Army recruited 99 percent high school diploma graduates and 69 percent scoring in AFQT Categories I–IIIA.

DENOUEMENT

Today, the Job Performance Measurement Project is almost complete. For years, I have said that the project was the holy grail of industrial psychology—the pursuit of the ultimate criterion of job performance. I also believe that this effort and the Army's Project A have made major contributions to industrial psychology and to other professions. I receive numerous telephone calls from people outside industrial psychology who have obtained copies of the book by the Committee on the Performance of Military Personnel on performance in the workplace; they are impressed with potential applications to their professions.

I also should tell you that the National Academy of Sciences' new Board on Testing and Assessment will provide counsel to a number of government organizations facing difficult and complex measurement problems. Most of these problems, in the early going, will deal with educational measurement issues, i.e., accountability, measurement of proficiency within and across schools, and minimum curriculum and performance standards. I believe the work done on the Job Performance Measurement Project will assist the new board in its definition and solution of such measurement problems.

It is gratifying to look at the JPM Project over the last 14 years and realize that we have been able to sustain one research project for that period of time. Research comes and goes, but this project is unique because it existed over five administrations, both Democratic and Republican. It cost a great deal of money—in the vicinity of $40 million. Despite the skepticism and, in some cases, objections by various people, the research has gone forward and now is close to fruition. In fact, I view this workshop as the beginning of its completion.

APPRECIATION

As I look around the room, I see several people who have been with this project from the beginning, and I feel compelled to recognize them. First are Bert Green and Sandy Wigdor, whose scientific and policy contributions have been invaluable; they always have provided sage advice.
My admiration for them is certainly not a secret. Then there is Dave Armor, who did the very first job performance measurement study in 1980. Dave has been with the project as a scientist, as a policy official within the department, and now as a member of the Committee on Military Enlistment Standards. Dave has been an inspiration to me in all roles. I respect enormously his analyses as well as his policy guidance. Dave, I thank you for your help, your counsel, and your support.

There also are representatives from the Services who have been with the project almost all the way—Larry Hanser and Jane Arabian, in the beginning with the Army Research Institute. Larry now is with the Rand Corporation, and I am pleased that Jane is in my office. Bill Strickland, Armstrong Laboratory, has viewed this research from several perspectives—as manpower staff officer, recruiter, and now director of human resource research for the Air Force. Jerry Laabs, Navy Personnel Research and Development Center, also designed and conducted much of the work in measuring job performance of Navy ratings.

In addition, there are the chairs of the Job Performance Measurement Working Group, two of whom are here today: Dickie Harris, Human Resources Research Organization (HumRRO), and Tom Ulrich, Defense Manpower Data Center. Chuck Curran, also with HumRRO, who was the first chair, had planned to attend but was recently taken ill. I have spoken with Chuck several times over the last several weeks. He is feeling better, and he asked that I send his regards and tell you how much he wanted to be here to see how the project had evolved since the early days.

Finally, there is my old and dear friend Dick Hoshaw—Navy manpower policy wonk and humble implementor of research. Dick has been with the project from its inception, and his vision, wise counsel, and perceptive advice have been worthy.

COST/PERFORMANCE TRADE-OFF MODEL

Where do we plan to go with the job performance measurement model? The project's first objective was to set enlistment standards, and I still intend to do this. Tomorrow, Rod McCloy of HumRRO will tell you about efforts to use the model for setting standards as a function of quality requirements. As I indicated earlier, I do not intend to tell the Services what their standards should be. What I will do is provide a methodology and encourage the Services to use it in establishing their standards. I hope the Services will be sufficiently persuaded by the quality of the research and the validity of the model that the standards-setting process will be compelling.

We also will use the model for planning and budgeting activities. As we develop the budget, we will run the model to see if budgets are realistic. If a Service is underfunded, we will ask the Service comptroller to reprogram money. We recently did this with the Navy, because of underfunding of its advertising program.
If the Services are overfunded, I doubt we will try to cut the money from the budget, given the fact that the Congress is inclined to do that anyway. If Congress does try to cut the budget, we will use the model to defend our submissions. We also will do our best to persuade congressional staffers that the budget levels are reasonable and appropriate, and to try to get them to look elsewhere for cuts.

I am pleased that the National Academy of Sciences is hosting this workshop. I thank Anne Mavor and Carolyn Sax for their hard work in its planning and conduct. I also thank each of you for coming and hope that you will find the workshop to be a rewarding experience.