The numerous entry point decisions that comprise the Air Force accessions process are critical to the U.S. Air Force (USAF) human capital management (HCM) system. This is the process by which volunteers from the civilian populace are assessed and selected to serve as America’s Airmen. Not all who wish to serve are qualified to do so. Matching the right people to the right career field and job is fundamental to a successful career that benefits both the individual Airman and the Air Force. The first part of this chapter provides an overview of the key elements of the accessions process—recruiting, selection, and classification—as they pertain to enlisted personnel and officers (see Figure 2-3 for the placement of these three focus areas within the ecosystem model). In this overview, special attention is paid to ways in which that process includes identifying and assigning individuals to highly specialized career fields, such as pilots, special operations, and cyber warfare.1 The overview was informed by relevant Air Force policy and information provided by Air Force representatives and was supplemented by the committee’s own expertise. The second part of the chapter identifies research support needs, including operational processes and specific research areas, for improving the Air Force accessions process, as further developed in this report’s Flight Plan (see Chapter 6).

1 This discussion focuses on the active-duty force. The Air Force Reserve and the Air National Guard also recruit, select, and classify enlisted personnel and officers. Many of the processes and standards used across these components are identical at a high level, although details will differ. For simplicity, all of the detailed examples apply to the active-duty force unless otherwise noted. Furthermore, the chapter describes the processes that generally apply to most incoming officers and enlisted personnel. Although exceptions to policy can and do occur, the committee noted these only where the exception has important implications for the substance of this report or the committee’s recommendations.
This section reviews the process by which enlisted personnel are recruited, selected, and then classified into Air Force career fields and jobs. The USAF Recruiting Service (USAFRS) is charged with managing the recruiting and selection process to provide the Air Force with sufficient numbers of qualified enlisted personnel to fill requirements defined each year by the Air Force’s Deputy Chief of Staff for Manpower, Personnel, and Services (AF/A1). USAFRS and 2nd Air Force share responsibilities for ensuring that recruits are matched to appropriate Air Force Specialties (in accordance with officer and enlisted classification procedures contained in USAF Instruction 36-2101 [USAF, 2013]).
USAFRS uses recruiting groups, squadrons, flights, and offices to “inspire, engage, and recruit” young people for enlistment in the Air Force.2 One of those squadrons recruits nationwide for applicants to serve in Air Force special warfare specialties; the remaining squadrons are geographically dispersed throughout the United States, where recruiters provide local access to information on military careers and their benefits. Local recruiters arrange for interested applicants to be evaluated against Air Force standards for enlistment and job classification at a Military Entrance Processing Station (MEPS).
Several factors contribute to the definition of “qualified” for entry into the Air Force. Some of those factors are established in law and apply to all military Services (e.g., applicants under age 17 are ineligible in accordance with 10 U.S.C. § 505).3 Recruiters make preliminary determinations about an applicant’s qualifications before sending the applicant to the MEPS, but it is at the MEPS that official qualification determinations are made in each of the following categories (in accordance with USAF Manual 36-2032 [USAF, 2019c]):
- Morals (Character/Conduct)
- Physical (including height and weight)
- Dependent Family Members
- Drug Use

2 The committee met with several representatives of USAFRS during a committee meeting in Washington, DC (July 30, 2019) and in conjunction with the site visit to Joint Base San Antonio-Randolph (November 5–8, 2019). This particular quote is attributed to comments made during the USAFRS presentation to the committee on November 6, 2019 (see USAFRS, 2019a).
As part of the qualification process, all applicants for enlistment (in all components, and in all Services) take the Armed Services Vocational Aptitude Battery (ASVAB). The ASVAB is administered primarily by computer at the MEPS. This computerized adaptive version of the ASVAB, also known as CAT-ASVAB (Sands et al., 1999), is a timed multiple-choice battery of nine aptitude subtests.4 Together, these subtests measure cognitive abilities in order to predict future success, specifically completion of initial military technical training. As with any measure of general cognitive ability, ASVAB subtests demonstrate various degrees of adverse impact. Both the Office of the Secretary of Defense (OSD) and the Air Force are aware of the negative consequences that adverse impact can have on the diversity of the force.
Four of the ASVAB subtests—Word Knowledge, Paragraph Comprehension, Arithmetic Reasoning, and Mathematics Knowledge—form the Armed Forces Qualification Test (AFQT). The AFQT score is used to determine initial qualification. By law, applicants without a high school diploma are eligible for enlistment only if they score at or above the 31st percentile on the AFQT, and no more than 20 percent of recruits can score below that level (10 U.S.C. § 520).5,6 At the same time, OSD may direct higher minimum standards. For example, OSD has established an effective limit of 5 percent of enlisted recruits who can score below the 31st percentile, regardless of their high school diploma status (DoD, 2013). Finally, the Air Force can—and does—establish higher entrance standards than those required by law or OSD. For example, the Air Force requires those without a high school diploma to score at or above the 65th percentile on the AFQT (versus the 31st percentile established in law) and generally does not enlist any recruits who score below the 36th percentile on the AFQT (versus the OSD requirement of no more than 5 percent below the 31st percentile).7,8

4 The ASVAB includes the following subtests: General Science; Arithmetic Reasoning; Word Knowledge; Paragraph Comprehension; Mathematics Knowledge; Electronics Information; Auto/Shop Information; Mechanical Information; and Assembling Objects. More information available at: http://www.officialasvab.com/docs/asvab_fact_sheet.pdf.

5 In FY2019, 81.8 percent of USAF active duty enlisted recruits scored into AFQT categories I–IIIA (50th–99th percentile score range) and 18.2 percent were AFQT category IIIB (31st–49th percentile score range); no recruits scored below the 31st percentile (USAFRS, 2019b). See also the relevant Title 10 law, available: https://www.govinfo.gov/content/pkg/USCODE-2010-title10/pdf/USCODE-2010-title10-subtitleA-partII-chap31-sec520.pdf.
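The layered AFQT minimums described above (statute, OSD policy, and Air Force policy) can be sketched as a simple eligibility check. This is an illustrative sketch only: the function name and interface are invented, and it encodes just the two thresholds stated in the text (36th percentile generally; 65th percentile without a high school diploma), not the full set of enlistment rules.

```python
# Hypothetical sketch of the tiered AFQT entrance rules described in the
# text. Threshold values come from the passage above; the function and
# parameter names are illustrative, not an actual Air Force system.

def afqt_eligible(afqt_percentile: int, has_hs_diploma: bool) -> bool:
    """Apply the strictest applicable minimum: Air Force policy sits on
    top of the statutory and OSD floors."""
    if has_hs_diploma:
        # Air Force policy: generally no recruits below the 36th percentile
        # (stricter than the statutory 31st-percentile floor).
        return afqt_percentile >= 36
    # Without a diploma, statute requires at least the 31st percentile,
    # but Air Force policy raises the bar to the 65th percentile.
    return afqt_percentile >= 65

assert afqt_eligible(36, has_hs_diploma=True)
assert not afqt_eligible(35, has_hs_diploma=True)
assert afqt_eligible(65, has_hs_diploma=False)
assert not afqt_eligible(64, has_hs_diploma=False)
```

The point of the sketch is the layering: each authority (law, OSD, Air Force) may only tighten, never loosen, the floor set below it.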
Research, development, testing, and evaluation of the ASVAB9 are the responsibility of the Office of People Analytics, which reports to the Under Secretary of Defense for Personnel and Readiness. Actual administration of the ASVAB is done by the U.S. Military Entrance Processing Command—a joint service command staffed by and serving all six branches of the military.10 Administration takes place either at a MEPS or at a Mobile Examining Test site. Thus, the Air Force has no direct control over any aspect of ASVAB (or AFQT) research, composition, or administration except through participation in joint-service advisory groups. The Air Force can change its service-specific standards as required to meet its needs (see AFMAN 36-2032, Chapter 3 [USAF, 2019c]), but the committee did not identify any systematic or routine process for doing so. Furthermore, through its interactions with key stakeholders, the committee learned that recommendations for change resulting from a past review conducted by an external contractor met significant resistance, specifically from career fields whose qualification scores were recommended to be lowered. Air Force stakeholders also expressed concern that some may misunderstand the design and validity of the ASVAB, which was developed specifically to predict a potential recruit’s ability to complete initial military technical training; this misunderstanding can result in ASVAB scores being referenced inappropriately later in a career to judge the relative quality of one Airman over another.11
8 Among the other Services, the minimum AFQT is 35 for the U.S. Navy, 32 for U.S. Marine Corps, and 31 for U.S. Army.
10 The six branches of the U.S. military are the Air Force, Army, Coast Guard (peacetime operations under the Department of Homeland Security; wartime operations under the Department of the Navy), Marine Corps (a component of the U.S. Navy), Navy, and Space Force (a component of the U.S. Air Force).
11 Many aspects of the ASVAB (e.g., design, validity, current relevance, limitations) were discussed during several of the committee’s data-gathering sessions, including those at Air Force Headquarters at the Pentagon (September 26–27, 2019), RAND Corporation (September 27, 2019), Air Education and Training Command at Joint Base San Antonio-Randolph (November 5–8, 2019), and with CMSgt Kaleth O. Wright, USAF, Chief Master Sergeant of the Air Force (November 21, 2019).
The four subtests that make up the AFQT and guide selection decisions are only a portion of the ASVAB. The Air Force relies on the entire battery of subtests to guide classification decisions, combining individual ASVAB subtest scores to produce composite scores in four aptitude areas: Mechanical (M), Administrative (A), General (G), and Electronics (E) (referred to collectively as MAGE composites).12 In the classification process, the Air Force also supplements the ASVAB with assessment tools developed either internally or in coordination with other military service branches to assess skills and predict success in areas of specific interest to the Air Force (in accordance with USAF Manual 36-2664 [USAF, 2019a]). Some of those assessments include:
- Defense Language Aptitude Battery (DLAB)
- Electronic Data Processing Test (EDPT)13
- Air Force Work Interest Navigator (AF-WIN)
- Tailored Adaptive Personality Assessment System (TAPAS)14
- Strength Aptitude Test
MAGE composites, often in combination with these other tests, are used to help classify recruits into specific Air Force specialties, referred to as Air Force Specialty Codes (AFSCs).15 Each AFSC has a minimum entry requirement on at least one of the MAGE composites. Many specialties require minimum scores on more than one MAGE composite, or minimum scores on one of the supplemental assessments. It was reported to the committee that ad hoc re-assessments of required MAGE scores for career field specialties are initiated by career field managers, who identify career field needs and coordinate with AF/A1 and the Air Force Personnel Center (AFPC) to collect data and conduct analysis.16

12 MAGE composite scores are composed of the following combinations of ASVAB subtest scores:
- Mechanical aptitude: General Science, Mechanical Comprehension, and Auto/Shop Information;
- Administrative aptitude: Verbal Expression (a scaled score of the ASVAB subtests Word Knowledge and Paragraph Comprehension);
- General aptitude: Arithmetic Reasoning and Verbal Expression;
- Electronics aptitude: General Science, Arithmetic Reasoning, Mathematics Knowledge, and Electronics Information.

13 The EDPT was developed in the 1960s at the request of Strategic Air Command to establish a uniform Air Force-wide qualifying score for selection into computer programming training. For additional information on the development and validation of the EDPT, see Leczner and Klesch, 1965.

14 TAPAS is a computer-administered non-cognitive personality assessment, originally developed for the U.S. Army, that is used to predict job performance and attrition criteria. For additional information on individual differences relevant to military service, see Rumsey and Arabian, 2014.

15 For a list of enlisted AFSCs, see https://www.af.mil/About-Us/Fact-Sheets/Display/Article/104609/enlisted-afsc-classifications/.
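The MAGE composite groupings described in footnote 12 can be sketched in code. This is a structural illustration only: the real composites are built from scaled and standardized scores (Verbal Expression in particular is a scaled score), whereas this sketch simply sums raw subtest values, and the two-letter subtest codes are conventional abbreviations, not an official schema.

```python
# Illustrative sketch of the MAGE composite groupings in footnote 12.
# Raw subtest scores are summed here for simplicity; actual composites
# use scaled/standardized scores, so treat this as structural only.

ASVAB_SUBTESTS = [
    "GS",  # General Science
    "AR",  # Arithmetic Reasoning
    "WK",  # Word Knowledge
    "PC",  # Paragraph Comprehension
    "MK",  # Mathematics Knowledge
    "EI",  # Electronics Information
    "AS",  # Auto/Shop Information
    "MC",  # Mechanical Comprehension
]

def mage_composites(scores):
    """Map a dict of subtest scores to the four MAGE composites."""
    ve = scores["WK"] + scores["PC"]  # Verbal Expression (unscaled here)
    return {
        "M": scores["GS"] + scores["MC"] + scores["AS"],
        "A": ve,
        "G": scores["AR"] + ve,
        "E": scores["GS"] + scores["AR"] + scores["MK"] + scores["EI"],
    }

example = {subtest: 50 for subtest in ASVAB_SUBTESTS}
result = mage_composites(example)
```

Each composite draws on a different subset of subtests, which is what lets the battery support classification across mechanical, administrative, general, and electronics specialties from a single sitting.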
While at the MEPS, a qualified applicant will be offered enlistment in either the Guaranteed Training Enlistment Program (GTEP) or the Aptitude Index (AI) Program. At the point of enlistment, GTEP enlistees are told the specific specialty in which they will be trained at the end of Basic Military Training (BMT). AI enlistees do not know their assigned specialty until the end of BMT, but at the MEPS they learn which of the MAGE composites (e.g., Mechanical, Administrative) their specialty will come from. These classification decisions at the MEPS (i.e., GTEP and a specialty, or AI and a designated MAGE composite) are based on the applicant’s qualifications, the jobs available on the day the applicant arrives at the MEPS, and the applicant’s preferences. The increasing role of individual preferences in initial classification is intended to help the Air Force compete for the most qualified enlisted recruits, who will also have job opportunities in the private sector among which they can exercise preferences.17
Until very recently, the standard had been that approximately 65–75 percent of applicants would be assigned a specialty (GTEP) at the MEPS. By the beginning of FY2021, the Air Force seeks to ensure that 94 percent of applicants enter the service with a GTEP specialty. The change was motivated by a desire to align the number of AI enlistees available to be assigned into a job with the BMT washout rate (approximately 6 percent), which leaves previously assigned jobs vacant.18
Under special circumstances, the Air Force has established procedures for changing the GTEP specialty or the AI composite during BMT based on Air Force needs and trainee agreement to the change. Similarly, some Airmen who complete BMT and begin a specialty training course (e.g., initial cyberspace skills training) fail to complete that course satisfactorily. The Air Force may discharge those Airmen (e.g., about 1 percent of those who enter cyberspace training); however, some Airmen are offered an alternate specialty.19 The needs of the Air Force at the time, the Airman’s qualifications, and the Airman’s desires all interact in that decision. Furthermore, recent updates and improvements to AF-WIN allow active duty Airmen to identify AFSCs that best fit their strengths and interests, in order to meet retraining goals and identify potential AFSC changes for those who may desire or require a career path other than their initial AFSC.20

16 “Roundtable Discussion to Revisit Key Force Management Considerations,” virtual committee meeting (March 18, 2020).

17 The topic of Airman preferences was discussed numerous times during the study’s data-gathering sessions including during the committee’s site visits to Air Force Headquarters at the Pentagon (September 26–27, 2019) and Air Education and Training Command at Joint Base San Antonio-Randolph (November 5–8, 2019) and during its discussion with CMSgt Kaleth O. Wright, USAF, Chief Master Sergeant of the Air Force (November 21, 2019).

18 The committee discussed this shift in strategy during its site visit to the Pentagon (September 26–27, 2019), 2nd Air Force at Keesler Air Force Base (January 9, 2020), and during the “Roundtable Discussion to Revisit Key Force Management Considerations,” a virtual committee meeting (March 18, 2020).
Classification into Special Warfare Specialties
Among the training courses for enlisted specialties in the Air Force, a very few have exceptionally high rates of attrition. These courses all involve training for special warfare specialties (e.g., pararescue, combat control team, tactical air control party, special reconnaissance, explosive ordnance disposal).21 In an attempt to reduce training attrition, the Air Force has implemented additional screening programs for applicants who desire to be classified into those specialties. In addition to meeting minimum scores on the ASVAB, applicants must also pass the Physical Ability and Stamina Test (PAST)22 and must take the TAPAS. Recruits who are identified as special warfare candidates are assigned to dedicated BMT flights that require more rigorous physical fitness, and the Air Force has recently implemented an 8-week preparatory/screening course that all special warfare applicants attend between graduation from BMT and starting their specialty-specific technical school.23
19 For example, during the committee site visit to Keesler Air Force Base, the committee learned that among initial cyberspace skills trainees, about 5 percent fail to complete and are completely removed from the career field, with about 4 percent of those being for academic/aptitude reasons that permit re-assignment and another 1 percent for reasons that result in discharge from the Air Force.
21 For example, in 2017, combat control team selection course attrition was 68 percent and the qualification course 43 percent (among an initial candidate pool of 134). Even higher attrition rates are found in the special operations weather (recently renamed as special reconnaissance) selection course at 82 percent and 67 percent in the qualification course (among an initial candidate pool of 35) (Pavelko, 2017).
23 The preparatory course addresses candidates’ behavior patterns, nutrition, resiliency, and physical fitness aimed at improving their chances of success in special warfare selection and assessment tests. This information was provided during the committee site visit to Joint Base San Antonio-Randolph (November 5–8, 2019), where the committee met with numerous representatives who provided information and perspectives on USAF Special Warfare recruiting and training (November 6, 2019).
This section reviews the entry decision points that make up the accessions process wherein officers are recruited, selected, and classified. Most USAF officers are commissioned through one of three sources: the Officer Training School (OTS), the Air Force Reserve Officer Training Corps (AFROTC), and the U.S. Air Force Academy (USAFA).24,25 As a result, officer recruiting, selection, and classification programs have more diverse entry points than comparable programs for enlisted personnel.
USAFRS manages recruiting for OTS, and it assists in recruiting for AFROTC and USAFA. AFROTC and USAFA each have a small cadre of full-time recruiters, along with part-time Air Force Reserve officers who assist in recruiting and interviewing applicants in their local communities. There is an important distinction in the applicant populations for these programs: OTS recruits college graduates for training and commissioning; AFROTC and USAFA recruit high school graduates eligible for college scholarships (2- and 4-year scholarships available through AFROTC) or a service academy education (USAFA).26
All applicants for commissioning must meet minimum standards that parallel those used for enlisted personnel (e.g., citizenship, moral character, age, education) (in accordance with USAF Manual 36-2032 [USAF, 2019c]). As with standards for enlistment, some of those officer standards
24 AFROTC and OTS combined provide approximately 80–85 percent of Air Force officers, and USAFA provides the remaining 15–20 percent (committee site visits to Keesler Air Force Base in Biloxi, MS, January 9, 2020, and USAFA in Colorado Springs, CO, January 17, 2020).
25 The discussion in this section addresses only line officers in the active duty Air Force. Medical officers, legal officers, and chaplains have unique processes for recruiting and selection that are beyond the scope of this study. Additionally, although outside the scope of this study, the National Defense Authorization Act of 2016 established a pilot program (through December 31, 2022) for direct commissions (lateral entry) of qualified cyber professionals (see National Defense Authorization Act for Fiscal Year 2017, Title 5, §508, available: https://www.congress.gov/bill/114th-congress/senate-bill/2943/text).
26 The committee met with several representatives of USAFRS during a committee meeting in Washington, DC (July 30, 2019) and in conjunction with the site visit to Joint Base San Antonio-Randolph (November 5–8, 2019); with representatives of OTS and AFROTC (and Junior ROTC and Civil Air Patrol) during the site visit to Air University at Maxwell Air Force Base in Montgomery, AL (January 15, 2020); and with representatives of USAFA admissions during the site visit to USAFA in Colorado Springs, CO (January 17, 2020).
are specified in law, while others are established by the Air Force. Beyond those common entrance standards, however, the selection systems for the various commissioning sources differ substantially, and there is no AFQT-like standard for all officers selected.
USAFRS manages the selection system for OTS. Applicants for OTS must take the Air Force Officer Qualifying Test (AFOQT),27 and USAFRS uses boards of officers to select candidates based on these scores, college transcripts, and recommendations and interviews.28
AFROTC directly manages its scholarship selection program, which primarily targets graduating high school seniors,29 using high school transcripts, college entrance exam scores, interview results, and high school activities records. AFROTC also manages a selection program for entry into its Professional Officer Course, available to those who are already enrolled in college; selection criteria include college record, performance in AFROTC courses and field training, AFOQT scores, and recommendations. AFROTC uses an algorithm (an “automated selection board”) to combine the selection components into a single score that rank-orders candidates.30
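An “automated selection board” of the kind described above can be illustrated as a weighted composite that rank-orders candidates. The component names, weights, and scores below are entirely invented for illustration; the text does not disclose how AFROTC actually weights or normalizes its selection components.

```python
# Hypothetical illustration of an automated selection board: normalized
# component scores are combined with fixed weights into a single score
# used to rank-order candidates. All names and weights are invented.

def board_score(components, weights):
    """Weighted sum of a candidate's normalized component scores."""
    return sum(weights[name] * value for name, value in components.items())

candidates = {
    "cand_a": {"college_record": 0.90, "field_training": 0.80,
               "afoqt": 0.70, "recommendations": 0.85},
    "cand_b": {"college_record": 0.75, "field_training": 0.95,
               "afoqt": 0.88, "recommendations": 0.70},
}
weights = {"college_record": 0.4, "field_training": 0.2,
           "afoqt": 0.3, "recommendations": 0.1}

ranked = sorted(candidates,
                key=lambda c: board_score(candidates[c], weights),
                reverse=True)
```

A fixed, pre-published weighting like this is what makes an automated board reproducible and auditable, in contrast to an in-person board whose members weigh the same components implicitly.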
USAFA manages its selection system much like any selective undergraduate institution, considering the same components that AFROTC uses: high school transcripts, college entrance exam scores, interview results, and high school activities record. To qualify for admission, applicants must also complete the Candidate Fitness Assessment.31 USAFA also uses an algorithm to combine these components into a single selection composite. However, the USAFA admissions system is complicated by the requirement that candidates obtain a personal nomination from at least one source;32 candidates may be admitted only if they are nominated. Most of the nominations come from Members of Congress; each Member of Congress can nominate up to 10 candidates for each vacancy that Member has available. (Members of Congress are authorized to have up to 5 cadets at USAFA simultaneously; vacancies can occur either through attrition or graduation.) To ensure representation from across the United States in the Academy population, USAFA must admit 1 of those 10 nominees as long as that person meets minimum qualifications, even if another Member’s nominees are better qualified for admission.

27 This standardized test is similar to the college admission qualifying test, the Scholastic Aptitude Test (SAT), developed by the College Board. It tests verbal, mathematical, and additional aptitudes relevant to the needs of the Air Force.

28 The committee met with representatives of OTS and AFROTC (and Junior ROTC and Civil Air Patrol) during the site visit to Air University at Maxwell Air Force Base (January 15, 2020).

29 A general education development (GED) certificate may be used in place of a high school diploma.

30 The committee met with representatives of AFROTC during the site visit to Air University at Maxwell Air Force Base (January 15, 2020).

32 Title 10 of the U.S. Code establishes congressional nominations and military service-connected nominations. Nominations for non-military candidates may come from a member of the U.S. House of Representatives, a U.S. Senator, or the Vice President of the United States. Presidential nominations are reserved for candidates whose parents are service connected.
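The nomination constraint described above can be sketched as a per-slate admission rule. This is a simplification under stated assumptions: the text says USAFA must admit one qualified nominee per Member's slate, and this sketch assumes the best-qualified nominee on each slate is the one admitted, which the text does not specify. Names and scores are invented.

```python
# Simplified sketch of the congressional nomination constraint: from each
# Member's slate, one nominee who meets the minimum must be admitted,
# even if a competing slate holds stronger candidates overall. Choosing
# the highest-scoring qualified nominee is an assumption for illustration.

def admit_from_slates(slates, minimum):
    """slates maps Member -> list of (nominee, composite score); returns
    one admitted nominee per Member, or None if no nominee qualifies."""
    admitted = {}
    for member, nominees in slates.items():
        qualified = [n for n in nominees if n[1] >= minimum]
        admitted[member] = (max(qualified, key=lambda n: n[1])[0]
                            if qualified else None)
    return admitted

slates = {
    "member_1": [("alice", 0.92), ("bob", 0.88)],
    "member_2": [("carol", 0.61), ("dan", 0.40)],
}
# carol is admitted for member_2 even though bob (member_1) scores higher.
admitted = admit_from_slates(slates, minimum=0.50)
```

The sketch makes the geographic-representation trade-off concrete: admission is optimized within each slate rather than across the whole applicant pool.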
As with recruiting and selection, officer classification varies across the commissioning sources. At the time of selection, candidates selected for OTS are classified as rated,33 technical, or non-technical, based primarily on candidate preference and college degree. Within those categories, the actual specialty is assigned by AFPC based on the officer candidate’s degree and the Air Force’s specific needs on the candidate’s expected graduation date. Because AFROTC and USAFA are long lead-time sources of commission (i.e., those selected for AFROTC or USAFA will not receive their commissions as officers until graduation 4 years later), classification into a specific specialty does not occur at the time of selection. As cadets approach graduation, the Air Force uses an algorithm to prioritize specialties for each AFROTC and USAFA cadet. The algorithm, originally developed by the RAND Corporation in collaboration with AFROTC and USAFA, is managed by AFPC. It is an integer program (see Appendix D) that includes the following factors as inputs, in accordance with Air Force military personnel classification instructions (USAF, 2013):
- Air Force requirements
- College degree
- Physical qualifications
- Class rank
- Commander’s recommendation
- Cadet preferences34
According to discussions with representatives from AFROTC, USAFA, and the analysis branch of AFPC (which serves as the unbiased, independent analytic cell that runs the model based on provided guidance and data), officer classifications are made according to the following priority ordering in matching officers to available specialties:
- Mandatory degree requirements and Air Force assignment target levels;
- Merit distribution based on class rank;
- Source of commission;
- Desired/preferred education requirements; and
- Cadet preference.
However, the descriptions provided to the committee are not sufficiently detailed to give precise information about how the algorithm makes assignments (see Appendix D).

33 The “Rated” category includes pilot, combat systems officer, airborne battle manager, and remotely piloted aircraft pilot specialties. For additional information, see AFI 11-402 (13 December 2010), Chapter 2, available: https://static.e-publishing.af.mil/production/1/aetc/publication/afi11-402_aetcsup_i/afi11-402_aetcsup_i.pdf.

34 The Air Force has recently taken steps to improve and expand its capability to collect and consider cadet preferences for initial classification (and at later career points for subsequent job assignments). The committee offers additional input to better understand and apply Airmen preferences into matching algorithms in Appendix D.
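The priority ordering listed above can be illustrated with a toy matcher. The actual system is an integer program managed by AFPC whose details were not available to the committee; this greedy stand-in only illustrates two of the priorities (mandatory degree requirements and slot targets first, with merit order determining who chooses earlier and cadet preference breaking ties among eligible slots). All cadet names, AFSC codes, and degree labels are invented.

```python
# Hedged, greedy stand-in for the classification match. The real system
# is an integer program; this sketch only illustrates the lexicographic
# priorities: mandatory degree/slot constraints, then merit order, then
# cadet preference among the slots that remain.

def classify(cadets, specialties):
    """cadets: list of dicts with 'name', 'rank' (1 = top merit),
    'degree', and 'prefs' (ordered list of AFSC codes).
    specialties: dict AFSC -> {'slots': int, 'required_degree': str or None}.
    Returns a dict mapping cadet name -> assigned AFSC (or None)."""
    remaining = {afsc: s["slots"] for afsc, s in specialties.items()}
    result = {}
    # Merit distribution: higher class rank chooses earlier.
    for cadet in sorted(cadets, key=lambda c: c["rank"]):
        choice = None
        for afsc in cadet["prefs"]:
            spec = specialties[afsc]
            degree_ok = (spec["required_degree"] is None
                         or spec["required_degree"] == cadet["degree"])
            if remaining.get(afsc, 0) > 0 and degree_ok:
                choice = afsc
                break
        if choice:
            remaining[choice] -= 1
        result[cadet["name"]] = choice
    return result

cadets = [
    {"name": "cadet_x", "rank": 2, "degree": "electrical_eng",
     "prefs": ["62E", "17X"]},
    {"name": "cadet_y", "rank": 1, "degree": "electrical_eng",
     "prefs": ["62E", "17X"]},
]
specialties = {
    "62E": {"slots": 1, "required_degree": "electrical_eng"},
    "17X": {"slots": 1, "required_degree": None},
}
assignments = classify(cadets, specialties)
```

A true integer program would optimize all five priorities jointly rather than greedily, which is one reason the committee sought more detail on how the production algorithm trades them off.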
Classification into Rated Officer Specialties
Training for rated officer specialties is intense and expensive; as a result, minimizing attrition in training is important, and graduates from training must serve longer in the Air Force than graduates from other officer training courses (10 years following completion of undergraduate pilot training rather than the typical commitment of 4 to 6 years, depending on career field and commission source) (USAF, 2018). Classification into these specialties is highly competitive. For example, the Air Force requires minimum scores on appropriate AFOQT composites (AFROTC and OTS), and medical standards are more stringent for these specialties (see Appendix C for more details on pathways and requirements to become a rated pilot). Some officers fail to graduate from these training courses for various reasons (e.g., loss of medical qualification, issues with anxiety or psychomotor skills, academic failure). However, unless they have medical issues that prevent further service, these officers will be re-classified into a technical or non-technical specialty, based on their degree, their preferences, and Air Force needs.
To ensure that Air Force entry point decisions are valid and evolve appropriately to meet strategic priorities and reflect new capabilities, accessions processes and assessments are informed, designed, and validated by internally and externally conducted research efforts. Research is needed to sustain the current HCM system’s performance, ensure continuous incremental improvements keep the system in line with best practices (Miller, 2019), and innovate new processes and assessments (Murphy, 2019; Yusko,
2019) for competitive advantage in selecting and assigning the best possible Airmen. Over the course of the study, the committee met with the offices responsible for Air Force accessions—specifically recruiting, selection, and classification processes—and asked each about the rationale behind existing accessions processes and assessments. The committee also asked about research being conducted or needed to evaluate the existing system’s effectiveness and efficiency (especially with regard to entry and classification standards and the process to obtain waivers to standards) and to improve the system to take advantage of recent and future advances in science and technology.
Additionally, the committee asked about the Air Force’s system for collecting and analyzing information about Air Force specialties, and about the criteria used to evaluate the system’s effectiveness. Good practice in HCM requires that at a minimum the rationale and assumptions behind decisions made in all of these areas be documented and routinely evaluated to validate processes and inform/update selection algorithms. An ideal system would incorporate data-driven decisions into policies in these areas and would subject these policies to routine periodic evaluation.35
In this section, the committee first gives an overview of the current Air Force research support system and then discusses the need for a human capital data superstructure to enable critical data collection, storage, and sharing across the USAF HCM system. The rest of this section is devoted to a detailed discussion of research needs in the following areas:
- Data Infrastructure for Research and Decision-Making;
- Job and Occupational Analysis;
- Selection and Classification Outcomes and Criterion Development;
- Entrance Standards;
- Classification Standards; and
- Waivers to Standards.
Based on the committee’s major findings or conclusions presented in boldface throughout the remainder of this chapter, these subsections describe research needed to improve Air Force accessions and inform implementation of the actions called for in this report’s Flight Plan given in the final chapter.

35 For example, in June 2020, as this report was being finalized, the Air Force Inspector General initiated a data-driven study to identify racial biases in the service’s discipline and professional development system. “The review will assess and capture existing racial disparities, assess Air Force-specific causal factors, like culture and policies, assimilate the analysis and conclusions of previous racial disparity studies by external organizations and make concrete recommendations resulting in impactful and lasting change.” For more information, see https://www.af.mil/News/Article-Display/Article/2213635/air-force-ig-directed-to-independently-examine-racial-disparities-in-services-d/.
Current Research Support
The Air Force currently uses multiple internal and external organizations to conduct research on its accession system. At AFPC and Air Education and Training Command (AETC), there are in-house staff with doctoral degrees in industrial and organizational psychology and other relevant areas. These staff members have missions that include conducting research and analyses on parts of the overall HCM system. For example, the AETC Studies and Analysis Squadron includes three divisions that address occupational analysis, Airmen advancement, and force development analytics and assessments to impact Air Force-wide missions such as training, promotions, culture, career field development, and career field matching (Dillenburger et al., 2019).
Although the Air Force Research Laboratory no longer has a designated research stream supporting manpower and personnel research, it makes staff available for consultation and allows the personnel community to use its contract vehicles. In addition to meeting with each of those organizations,36 the committee met with staff at the Air University and at USAFA who possess many of the skills required to perform research and analyses in this area. To extend its in-house capability, the Air Force also uses independent contractors and the RAND Corporation’s Project Air Force, a federally funded research and development center.37 During its site visits and plenary sessions, the committee met with many of these research support organizations and asked representatives questions such as:
- Are the individual studies undertaken by these organizations part of a systematic program of research?
- How are these efforts coordinated?
- What processes are used to minimize duplication of effort?
- What processes are available for developing and implementing recommendations that might result from studies or programs of studies?
- Is there a strategic view for long-term support (e.g., staff, resources, availability of Airmen) for the overall system?
36 The committee regrets that key representatives from the USAF School of Aerospace Medicine declined to brief the committee on the full spectrum of their human capital research activities during the committee’s site visit to Wright-Patterson Air Force Base (November 12–13, 2019) or in response to subsequent invitations.
37 The committee met with representatives of the RAND Corporation’s Project Air Force on September 27, 2019, receiving information regarding recently completed and ongoing studies related to recruitment, accessions, classification and reclassification, retention, and Airman attributes.
These questions are not new. RAND has previously reported on the human capital research support available to the Air Force and was critical of the uncoordinated research efforts being undertaken at the time (Sims et al., 2014). Years later, while addressing the attrition problem in special warfare training courses, RAND concluded, “Part of the challenge to addressing attrition has been the lack of integration in efforts within and on behalf of the Air Force” (Lytell et al., 2018, p. 18). Raising these questions regularly across all of its research initiatives—including initiatives beyond initial selection and classification processes, such as clinical, operational, leadership, or teamwork studies (e.g., those conducted by units of the 711th Human Performance Wing)—would promote better coordination and alignment of research efforts across the USAF HCM system. In turn, better coordination and alignment could significantly improve the design and implementation of those efforts and the dissemination of their results into the human capital ecosystem. These questions are also important to consider in relation to each of the research areas described later in this section, since coordinated and integrated efforts in each area will be critical for efficient and effective research program design and implementation.
Data Infrastructure for Research and Decision-Making
Across an individual Airman’s career, there are many points at which the Air Force makes decisions about that individual (e.g., at the point of accession and throughout the years an Airman remains in the service post-accession). The amount of data and information that informs these decisions depends upon where in the career path the individual is at the time of the decision, as well as their enlisted or officer AFSC. Initially, decisions may be based on an AFQT score, a MAGE composite, the PAST, or the TAPAS at the enlisted level, or AFOQT scores for officers. Further downstream, decisions are often made using more subjective and qualitative information, such as performance ratings and commanding officer recommendations.
These kinds of personnel decisions occur daily throughout the Air Force, but they are often not as well informed as they could be (as described in Chapter 3, being “accurately informed and informative” is a key attribute of an ideal system). Across every data-gathering session and site visit, the committee consistently heard of the need for a comprehensive data infrastructure: one that supports the analysis that guides the enterprise, reduces time to decision, increases the effectiveness of decisions, and increases the coordination of enterprise operations. In some cases, the data are available but may not be known or accessible to those who would
want to use them. One specific problem reported to the committee was the challenge of making important data accessible across the Air Force when collection methods do not enable digital sharing (e.g., the committee was told about the time-consuming and costly effort required to digitize 11,000 lines of handwritten evaluation data generated by Air Force formal training units that also needed to be normalized and cleaned38). Furthermore, representatives from AFPC reported minimal data literacy across their own organization and an absence of a data inventory, catalog, or management process—conditions that are inefficient and detrimental to AFPC’s goal of becoming a data-driven organization. To modernize Air Force HCM, especially by leveraging emerging capabilities such as machine learning, the Air Force will need to be ready with large amounts of clean and accessible data to train, develop, and validate new algorithms that could impact recruitment, selection, assignment, and other human capital functions. This sort of capability will enable the Air Force to more accurately model the readiness of the force today and develop future forecasts.39 For example, vast amounts of data are collected about Airmen who are selected for and enter special operations training programs, and the Air Force needs a way to access that information—especially for those who attrit—to help those Airmen (and the Air Force) understand where they might be a better fit.40 The Air Force Special Operations Command has taken the first steps to leverage 30 years of data to develop artificial intelligence programs that identify attributes associated with success, in order to improve its recruitment and selection processes (Myers, 2020).
Consequently, there is a need for a data superstructure that would allow those making human capital decisions, regardless of where or when they occur in the Air Force, to have access to an appropriate history of the individual in question, including, for example, test scores and performance ratings in concert with “fit indices” of past positions. It is critical that such a data superstructure be initiated at the point that accession personnel decisions are made in order to provide an accurate and thorough composite of the individual across a career trajectory. Such a system would also create a meaningful opportunity for later analyses of career success and other personnel decisions to feed into improvements in selection and classification processes (Miller, 2019).
38 This information was provided by Dr. Jerry R. Coats during the committee site visit to Joint Base San Antonio-Randolph (November 5–8, 2019).
39 During the committee site visit to Joint Base San Antonio-Randolph (November 5–8, 2019) Maj. Gen. Andrew Toth questioned the current information technology limitations preventing the Air Force from better leveraging historical data to understand the promotions and career paths that would lead to the force of the future.
40 This information was provided by representatives of USAF special warfare recruiting and training during the committee site visit to Joint Base San Antonio-Randolph (November 5–8, 2019).
Two recent initiatives have begun to capture some of these data and could provide important initial input for a complete human capital data superstructure. First, AFPC’s Enlisted Applicant Master File tracks individual Airmen from applicant status (having taken the ASVAB) through their first Air Force specialty-awarding course.41 Second, AETC’s comprehensive Airman Learning Record documents all learning (including specialized, on-the-job, and off-duty training) over the course of an Airman’s career (Roberson and Stafford, 2017).
Through discussions with key Air Force stakeholders, it was noted that any initiative to create a centralized capability to collect and store longitudinal data should be informed by careful consideration of the data and collection methods to be included. For example, metadata and other validity information may be essential to ensure that data collected at a single point in time for an explicit validated purpose (e.g., an ASVAB score for entry into service and as a predictor of success in technical training) are only applied to subsequent personnel decisions if appropriate (e.g., not judging a mid-career Airman to be superior or inferior to another based on comparative ASVAB scores collected many years prior).42
Ultimately, the value of implementing a human capital data superstructure will be realized across several areas. First, it would make data accessible without a time-consuming, labor-intensive process to collect or clean data that may have already been collected or cleaned. Second, centralization would make data available to the offices that need them more quickly: rather than having to request a data extract from one or more systems, wait for the data to arrive, and then clean and input the data into a local system, the data could be accessed immediately. Further, access to data through a centralized process is more easily monitored and audited, providing more effective security insight into data usage. Finally, centralization will support research and analysis that can identify opportunities for efficiencies that result in systemic cost savings.
Job and Occupational Analysis
As shown in the ecosystem model in Chapter 2 (see Figure 2-2), achieving good person-job fit is a keystone element of a functional HCM system (as described in Chapter 3). Key to enabling this are accurate and effective
job and occupational analyses (Miller, 2019; Murphy, 2019) and subsequent competency modeling efforts. They are essential to understanding critical workforce competencies, requirements for success, insights regarding personnel skill shortages, and future force needs. In fact, according to professional standards, the development of any recruiting, selection, or classification system begins with an analysis of each job/specialty that is being filled by that system (SIOP, 2018). The occupational structure of the Air Force is defined in the Air Force Enlisted Classification Directory and the Air Force Officer Classification Directory. These publications include descriptions of each specialty in the Air Force, including the minimum standards for entry. In addition, the Air Force routinely conducts occupational surveys of specialties, primarily for use in developing training for those specialties. As discussed below, beyond their current uses, the results of job/occupational analyses have additional potential applications to inform and improve Air Force entry point decisions.
41 The program is administered by AFPC’s Strategic Research and Assessment Division (AFPC/DSYX), and information was provided to the committee through electronic correspondence with Dr. Johnny J. Weissmuller, senior personnel research psychologist (April 2020).
42 “Roundtable Discussion to Revisit Key Force Management Considerations,” committee meeting, Washington, DC, March 18, 2020.
Job/occupational analysis information is essential for the development, refinement, and evaluation of assessments used for selection and classification: it is used to determine what should be assessed, how it should be assessed, what gaps exist in current processes, and how to evaluate whether those processes are having the desired effects. Hence, research in support of improving selection and classification requires a high level of attention to job/occupational analysis. For example, over the course of the study, Air Force representatives questioned whether the ASVAB was sufficient as a tool for predicting success in today’s Air Force, and more importantly, tomorrow’s Air Force.43 The ASVAB does indeed have grounding in job analytic work and is supported by validation evidence showing connections of scores to success in training and on the job (Ree and Earles, 1991), and the committee supports its continued use as a selection tool. However, the underlying questions are important ones to continually raise regarding selection and classification, especially now with the creation of new jobs to support the recently established U.S. Space Force within the Department of the Air Force:
- How are jobs/occupations changing?
- Is there emphasis on the critical aspects of the job in classification decisions?
- Are the routine occupational surveys that the Air Force conducts to regularly update training also sufficiently integrated into updating selection and classification systems?
43 The topic of ASVAB validity was raised during committee data-gathering sessions with Air Force Headquarters at the Pentagon (September 26–27, 2019), Air Education and Training Command at Joint Base San Antonio-Randolph (November 5–8, 2019), and with CMSgt Kaleth O. Wright, USAF, Chief Master Sergeant of the Air Force (November 21, 2019).
Furthermore, prediction equations (and the ability to produce effective matches between individuals and jobs) require valid information not only for the predictors but also for the criterion measures (e.g., job performance). To appropriately account for changes in Air Force occupations, especially changes driven by accelerated technology innovation or changing demands to ensure air superiority in the joint environment, required knowledge, skills, abilities, and other characteristics (KSAOs) and performance rating criteria need to be updated regularly. Although there is value in traditional approaches to analysis of the job-specific KSAOs and other requirements to ensure proper selection, classification, training, and performance management, many successful organizations, including the Air Force, have shifted to a greater reliance on competency models developed from job/occupational analysis.
Current Air Force Competency Modeling
Air Force doctrine defines competencies as “attributes an individual possesses to successfully and consistently perform a given task, under specified conditions, or meet a defined standard of performance” (USAF, 2015a, p. 1-1). In general, competencies are combinations or categories of behaviors necessary to perform a job effectively, rooted in knowledge, skills, and abilities (Campion et al., 2019; Roland, 2019). Such an approach allows for greater agility in identifying the requirements of emerging jobs and enhances the identification of individuals with the best fit for those positions. Competency models can be used to effectively translate an organization’s strategy into the behavior of members in an iterative fashion (Campion et al., 2019). For the Air Force, this means a competency model can reflect changes reported by Airmen doing the job as well as strategic priorities pushed from the top down (as described in Chapter 3, being “mission responsive” is a key attribute of an ideal system).
Recently, offices within the Air Force began efforts to develop competency models for the Air Force in general, and for some specific AFSCs. Guidance on developing competency models was released by the Air Force in April 2019 (USAF, 2019b), and the committee was told about a growing use of competency models in various ways throughout the Air Force. Instances include a framework that involves 22 foundational competencies (see Appendix E) (Barelka et al., 2019; Barron, 2019; Coggins, 2019); a 12-competency framework of qualities for application to the officer evaluation system (Campbell, 2019); a competency model used in developing
training curriculum (Bennett, 2019); occupational competencies for certain career fields;44 and a competency framework used in selection decision-making for high-risk positions (Picano et al., 2019; Roland, 2019). As the committee consulted with multiple stakeholders during site visits to Air Force installations across the United States, it found little alignment between competency models developed and used by different groups across the Air Force, which results in discrepancies and redundancies among the competencies identified, a point that has been noted in prior reviews of the USAF HCM system.45 The committee heard individuals express hope that those engaged in competency modeling efforts will follow AETC’s process for developing and validating competencies. Specifically, stakeholders recognized that it would be ideal to use those competencies to design and inform entry point decisions (as well as later career decision points such as promotion, performance management, and training), but the committee found no evidence that this is occurring. In fact, there did not appear to be a clear mandate to leverage the many varied uses of competency modeling, nor any clear integration of efforts into an overarching human capital strategy. Because job/occupational analysis plays a core role in the design of selection, classification, training, and performance management systems (Miller, 2019; Murphy, 2019; Roland, 2019), the lack of a single unified approach to competency modeling hinders the effectiveness and efficiency of each separate effort and of the system as a whole. Competency models across the Air Force that are not aligned in labeling, scope, specificity, associated levels of proficiency, or other aspects are problematic for multiple reasons. Specifically, the committee identified two key problems:
- Discrepancies in competency definition and operationalization risk confusion and distrust; and
- Redundancies result from unclear relationships among models developed for a single purpose as well as those developed for different but interrelated purposes.
44 The committee initially learned about the Air Force’s recent research initiatives to identify occupational competencies during discussions with Air Force Headquarters staff at the Pentagon (September 26–27, 2019), and specific details about civil engineering occupational competencies were provided by Col. Don Ohlemacher, USAF, during the committee’s site visit to the Air Force Institute of Technology, Wright-Patterson Air Force Base, Dayton, OH (November 13, 2019).
45 Job analysis data are collected and/or competency research conducted by AETC’s Occupational Analysis Division, Air Force Manpower Analysis Agency, AFPC, some AF/A1 divisions (including Force Development [AF/A1D] and Military Force Management Policy [AF/A1P]), and the Airmen Systems Directorate (RH) of the 711th Human Performance Wing (see Table 4.1 of Sims et al., 2014 for further details).
First, a given competency (e.g., interpersonal skill) may be defined and operationalized differently for different human capital activities and decision points, potentially resulting in confusion and distrust of the tools developed. For example, the 12-competency framework for use in the officer evaluation system has several competencies that overlap with the framework of 22 foundational competencies, yet each framework gives these overlapping competencies different labels and definitions. The reverse is also true: as occupational competencies are defined by career field, different career fields may use the same label for a competency but define it differently (Barron, 2019). For selection and classification purposes, this can be problematic: for example, an Airman might be selected for a critical occupation based on possession of competency X, but the definition of competency X used for selection differs from the definition of competency X used in evaluating success on the job. This misalignment can contribute to poor selection and classification decisions that result in poor person-job fit, performance challenges, and, ultimately, Airman dissatisfaction and attrition.
Second, problems arise due to a diffusion of purpose. Competency modeling efforts typically recognize that the level of detail may differ for different purposes (e.g., for training versus selection), but most also classify competencies as universal competencies (e.g., what is required of all Airmen in all jobs), functional competencies (e.g., what is required of all maintainers), and job-specific competencies (e.g., what is required to maintain a specific aircraft system). This type of layered consideration of competencies underpins the Air Force approach to competency modeling in training applications, where competencies are defined as fundamental competencies (for training before the field unit); initial competencies (for field unit to combat readiness); and mission-essential competencies (required for combat operations). These competencies are used in assessing the need for training as well as in developing training content and assessing the success of that training (Bennett, 2019). However, it was unclear whether the use of different models was consistent within a single purpose (e.g., only one model for developing assessments), whether some models were designed and used across various human capital activities, or where accountability for competency modeling efforts resides.46 Ultimately, this results in redundancies in research and the potential for adoption of competency models that are inappropriate for their planned purposes and/or that establish inconsistent competency expectations across a career trajectory—making it difficult or impossible to achieve them through systematic selection, training, and other career development efforts.
46 This information was provided during the committee site visit to Joint Base San Antonio-Randolph (November 5–8, 2019).
Need for Integrated and Aligned Competency Models
To ensure the Air Force realizes the maximum benefit of its investments in job/occupational analyses and competency modeling, there are two clear steps to be taken to integrate and align current competency models as well as to prepare for the future as efforts mature. First, the various competency modeling efforts across the Air Force could benefit from being analyzed and evaluated regularly to identify discrepancies and redundancies. Without regular system-wide comparative evaluation, with clearly assigned responsibility for doing so, situations like those mentioned above are likely to arise. Specifically, discrepancies in competency definitions between, for example, selection and classification versus performance management, can result in unidentified or misidentified talent that results in poor performance or attrition. Furthermore, creating better alignment of competency models within the Air Force would result in better communication with Airmen regarding what it takes to be successful in career paths.
Second, better-aligned job/occupational analysis and competency modeling efforts are necessary for the USAF HCM system as a whole to benefit from new research. Currently, there are indications that the lack of an overarching approach to job analysis and competency modeling leaves some Air Force research efforts untethered: potentially ungrounded in job requirements, unconnected to important work outcomes, and/or unsupported and not maintained over time. For example, the Air Force has pockets of research on alternative and emerging assessment methods for evaluating resilience,47 but no consistent definition of resilience (Landers, 2019); no indication of whether resilience is a universal competency or whether it has occupation-specific manifestations; and no indication of what levels of proficiency might be required for different purposes. Although multiple streams of research effort have value, the lack of a clear connection to a broader framework means the research’s applicability to a larger swath of jobs and occupations, especially for manpower planning, is not clear, and the research can remain generally unknown, inaccessible, or designed in ways that do not promote broader use.
The lack of consistency in competency modeling affects manpower planning, and the committee heard about multiple examples at several of its site visits. For example, there are indications that training in cyber is currently concentrated at the task level rather than emphasizing structural competencies.48 While this may be a very appropriate and necessary level of detail, a focused approach to training for a certain task has the potential to lead to structural obsolescence as technology changes (as it rapidly does in the cyber world) (Hernandez, 2019). At some point, aligning such training with competencies that connect to other human capital systems, particularly selection and classification, might help with supply and demand concerns in this area. Another example is the identification of humility and self-awareness as competencies important for success operating in a joint environment, yet neither is included in the 22 Air Force foundational competencies.49 A final example was raised several times over the course of the study: Air Force representatives questioned whether doctrine descriptions of enlisted and non-commissioned officer responsibilities (and the resulting training and developmental pipelines) remain appropriate, given that they largely reflect Air Force needs of the mid-20th century (emphasizing limited trade skills to be developed and utilized over 4 years of enlisted service and non-commissioned officer leadership roles promoting the morale, health, and welfare of the force) rather than current job realities and future force needs that are increasingly technical and specialized over longer periods of service. Representatives of Air University specifically raised this issue to the committee as a critical data and analysis gap that warrants an occupational study of the entire enlisted force to redefine 21st-century job descriptions and restructure force development50 (discussed in Chapter 5).
47 The topic of “resilience” was discussed with USAF representatives during site visits to Joint Base San Antonio-Randolph (November 5–8, 2019), Wright-Patterson Air Force Base, Dayton, OH (November 13, 2019), and the U.S. Air Force Academy, Colorado Springs, CO (January 17, 2020).
These examples point to how the lack of a consistent framework undermines efforts to keep pace with a changing operational and educational environment, which in turn affects the supply of and demand for certain competencies (as described in Chapter 3, being “accurately informed and informative” and “collaborative” are key attributes of an ideal system).
Finally, an additional point deserves mention: the Air Force has a keen awareness of the need to identify future skill, aptitude, and knowledge needs as the nature of warfare changes, including future needs associated with the stand-up of the U.S. Space Force.51 In identifying these needs, it is also important to consider the Air Force mission within the context of the larger national security strategy (DoD, 2018) and the current and future expectations of Airmen, in order to ensure air superiority in the all-domain fight of a joint team.52 General Brown’s vision of the Air Force of the future is instructive here: “The United States needs an Air Force that can fly, fight, and win in the air domain as a member of the joint team . . . it is easy to forget that the Joint Force loses without access to the air and the ability to deny that access to [American] enemies” (Brown, 2020, pp. 11–12).
48 Information provided to the committee by Lt. Col. Andrew Miller, USAF, and Lt. Col. Jill Heliker, USAF, during site visit to Keesler Air Force Base, Biloxi, MS (January 9, 2020).
49 Information provided in briefing, “Airmen Attributes for Recruiting & Accessions,” during committee site visit to RAND Corporation’s Project Air Force, Arlington, VA (September 27, 2019).
50 Information primarily provided to the committee during site visit to Maxwell Air Force Base, Montgomery, AL (January 15, 2020). This topic also came up during committee site visit to Keesler Air Force Base in Biloxi, MS (January 9, 2020) and with CMSgt Kaleth O. Wright, USAF, Chief Master Sergeant of the Air Force (November 21, 2019).
51 Information provided during committee data-gathering sessions with Air Force Headquarters at the Pentagon (September 26–27, 2019).
The Air Force is already considering changing competency requirements in career fields such as pilot (e.g., moving away from mechanical skills and physics knowledge and toward systems-thinking requirements and multi-tasking skills)53 and maintainer (e.g., moving toward greater analytics skills)54 (Atkins, 2020). However, if efforts to capture emerging needs and future job/occupational analyses are not aligned with other human capital systems (e.g., current training systems), there will be mismatches between the skills Airmen are acquiring, the career fields they are pursuing, the efforts that are being rewarded, and so on. Oversight of competency modeling at an Air Force-wide level, including clear designation of accountabilities, will create the means of aligning future workforce planning with current accessions (and other human capital processes) and will also allow for greater agility in setting up the research and infrastructure needed to understand critical competencies (e.g., cybersecurity knowledge, resilience) in a manner that reflects the future of the force. Further, the committee learned of situations where the Air Force has found itself misaligned on particular critical skills (e.g., cyber warriors) and could only be reactive in addressing the misalignment. Aligned competency models can help track Total Force capacities in different areas, allowing for greater proactivity in recognizing talent misalignment and adapting training quickly.
Ultimately, the committee did not attempt to decide the skills, aptitudes, and knowledge future Airmen will need, or the standards or minimum qualifications for classification into particular careers; this falls to the Air Force to determine based on strategy, potential futures, job analysis, competency modeling, and related efforts. However, as this section explains, without better integration and designated oversight of all job analysis and competency modeling efforts, the Air Force may be creating misalignment among its human capital systems, creating misunderstanding and confusion regarding what competencies mean and what the requirements for success are, missing out on potential insights regarding personnel skill shortages, and inadequately identifying future force needs. As shown in the ecosystem model (see Figure 2-2 in Chapter 2), research into the future needs of the Air Force affects who, how many, and what type of personnel are recruited into the service, all of which has long-term implications for the overall Air Force competency and readiness to successfully meet its mission. Developing accurate projections of future needs is the key to having the Total Force that is needed today as well as 10 and 20 years into the future (as described in Chapter 3, being “mission responsive” is a key attribute of an ideal system).
52 The Air Force Strategic Plan specifically identifies that “The Air Force needs capability options to execute missions in support of national defense and joint and combined operations under a wide array of contingencies. These capabilities must be responsive to changing needs” (USAF, 2015b, p. 14).
53 Information provided during committee data-gathering session, “Perspectives on Pilot Career Field” with Maj. Gen. James A. Jacobson, USAF, and Col. Timothy L. Hyer, USAF, at Air Force Headquarters at the Pentagon (September 26–27, 2019).
54 Information provided during committee data-gathering session, “Perspectives on Maintenance Career Field” with Lt. Col. Jennifer L. Gurganus, USAF, CMSgt John W. Jordan, USAF, and CMSgt Robert W. Rafferty II, USAF, at Air Force Headquarters at the Pentagon (September 26–27, 2019).
Selection and Classification Outcomes and Criterion Development
Fundamental to the decision to set standards for selection and classification is the purpose for which those standards exist. Throughout discussions with Air Force personnel during this study, the committee heard that the Air Force makes entry point decisions (through selection and classification) to positively impact several individual Airman outcomes, such as:
- how an Airman will perform in training;
- whether an Airman will complete training;
- how an Airman will ultimately perform in their AFSC;
- whether an Airman will complete their enlistment term or active duty service commitment (ADSC); and
- whether an Airman will re-enlist or remain with the Air Force after their ADSC.
The Air Force is not alone in valuing such performance- and retention-related outcomes; they are common in the private sector as well (e.g., Miller, 2019; Murphy, 2019). Indeed, professional standards for validating personnel decisions presume that the primary outcomes of interest are performance- or retention-related (AERA, APA, and NCME, 2014; SIOP, 2018). Note that all of these are outcomes for individual Airmen, and measures of them can be used as criteria when evaluating Air Force selection and classification processes, as described in the professional standards cited above. Though the focus here is on outcomes for individual Airmen, the idea is that if the Air Force designs its selection and classification processes to positively impact Airmen outcomes, those Airmen will in turn positively impact the performance of
Air Force units and the Air Force’s accomplishment of its strategic goals and mission (as described in Chapter 3, being “mission responsive” is a key attribute of an ideal system).
What was not clear from the committee’s discussions with Air Force personnel during this study is the relative importance the Air Force places on the outcomes above when evaluating selection and classification decisions on either the enlisted or officer side (e.g., whether completion of training is given more weight than reenlistment probability). Having this clarity is critical for evaluating the effectiveness of selection and classification systems, because the assessments and evaluation processes the Air Force uses to make selection and classification decisions are certain to have different relationships with the outcomes above. Without a clear delineation of the relative importance that the Air Force places on different outcomes, there is no clear, consistent standard for judging the effectiveness of Air Force selection and classification systems, nor for evaluating potential enterprise-wide improvements to them. Thus, careful consideration is warranted to (a) identify or confirm the individual Airman outcomes that the Air Force aims to impact through selection and classification and (b) clearly specify the relative importance of those outcomes when making selection and classification decisions. Such information is critical for any groups within the Air Force that may be tasked with evaluating and tracking the effectiveness of Airmen selection and classification decisions at scale (as described in Chapter 3, being “mission responsive” is a key attribute of an ideal system).
Beyond the traditional outcomes mentioned above, the committee would be remiss not to mention the potential role of Airmen “fit” (e.g., finding people that are a good fit for a career field or the Air Force in general, finding career fields that are a good fit for an Airman or recruit; see Kristof-Brown et al., 2005 for a meta-analysis) and how it is viewed by the Air Force with respect to selection and classification. Throughout the study, the importance of fit came up on numerous occasions. Notions of person-job fit, person-occupation fit, and person-organization fit have been extensively studied in the industrial and organizational psychology literature, but it is not clear how the Air Force conceptualizes and operationalizes fit within its selection and classification processes. For example, one possibility is to view an Airman’s perceived fit with their AFSC (e.g., with respect to their knowledge, skills, or abilities for that career) or the Air Force in general (e.g., with respect to their personality, interests, or values compared to the “culture” and mission of the Air Force) as a valued outcome in and of itself that can only be assessed while an Airman is in service. The framing here conceptualizes perceived fit as an important precursor of an Airman’s performance or retention in an AFSC (or other outcomes of interest to the Air Force that can only be assessed in conjunction with job performance while in the service or through an Airman’s decision to remain or separate
from the service), and a variable that may be predicted by selection and classification processes.
Additionally (or alternatively), the Air Force may frame fit as something that can be assessed at the time of the selection and classification decision by more objectively comparing cognitive, personality, and interest profiles of recruits (e.g., as revealed through ASVAB, TAPAS, or AF-WIN scores) for similarity to the cognitive, personality, and interest requirements of various AFSCs and the Air Force in general. To some extent, the Air Force already uses the ASVAB in this manner by establishing different ASVAB-related subtest score requirements for different AFSCs, but there is potential to expand this to non-cognitive assessments such as TAPAS and interest assessments, which are not currently used to help evaluate fit to various AFSCs when making selection and classification decisions for either enlisted or officer candidates.
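The profile-comparison idea described above can be made concrete with a small sketch. This is an illustrative example only: the trait dimensions, score values, and AFSC names are invented, and operational fit indices would be grounded in validated instruments rather than these placeholders.

```python
import math

# Hypothetical standardized trait profiles (z-scores); the dimension names
# and values are illustrative, not actual ASVAB/TAPAS/AF-WIN constructs.
recruit = {"cognitive": 0.8, "conscientiousness": 0.3, "technical_interest": 1.1}

afsc_profiles = {
    "maintenance": {"cognitive": 0.4, "conscientiousness": 0.9, "technical_interest": 1.0},
    "cyber":       {"cognitive": 1.2, "conscientiousness": 0.5, "technical_interest": 0.8},
    "logistics":   {"cognitive": 0.1, "conscientiousness": 0.7, "technical_interest": -0.2},
}

def profile_distance(person, requirement):
    """Euclidean distance between a person's profile and an AFSC's
    requirement profile; a smaller distance indicates closer objective fit."""
    return math.sqrt(sum((person[k] - requirement[k]) ** 2 for k in requirement))

# Rank AFSCs from best fit (smallest distance) to worst.
ranked = sorted(afsc_profiles, key=lambda a: profile_distance(recruit, afsc_profiles[a]))
for afsc in ranked:
    print(f"{afsc}: distance = {profile_distance(recruit, afsc_profiles[afsc]):.2f}")
```

In practice, a distance metric is only one of several possible congruence indices (profile correlations and polynomial regression are common alternatives), and any such index would need validation against downstream performance and retention outcomes before informing classification decisions.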
Developing Criterion Measures to Assess Valued Outcomes
As noted above, stakeholders identified several outcomes that the Air Force values: performance in training, successful completion of training, performance on the job, completion of an enlistment term, and re-enlistment after the current term. (This list includes the criteria the committee heard about; it may not be exhaustive.) Once valued Airmen outcomes are identified (including consideration of outcomes the Air Force wishes to avoid, such as person-job misfit or undesired behaviors that undermine team performance), a clear specification of the nature and scope of those outcomes is necessary to enable their measurement; measures of outcomes are typically called criterion measures. For job-performance-related outcomes, this means developing clear definitions of performance for each AFSC (and each rank within each AFSC) that are grounded in competency models or job analyses for AFSCs, yet also reflect the extensive research literature on major dimensions of job performance (e.g., Campbell and Wilmot, 2018). Once such definitions are developed, the Air Force would be in a position to establish a research infrastructure to routinely evaluate shifts in how performance is defined within AFSCs and at different ranks over time.
Depending on the Air Force’s strategy for validating its selection and classification processes, it may not be necessary to develop or maintain actual measures of Airmen job performance for each AFSC/rank.55 Under older professional standards, there was a tendency to view what is known as “criterion-related evidence of validity” as the gold standard for evaluating whether selection and classification processes relate to subsequent valued outcomes. In this context, “criterion-related evidence of validity” means establishing an empirical association (e.g., a correlation) between scores on a selection/classification assessment, or more generally a given selection/classification decision, and scores on a measure of subsequent job performance. However, the latest professional guidelines recognize that there are numerous ways of establishing evidence of the efficacy of a selection/classification process without actually having incumbent scores on a measure of job performance (e.g., see sections on content-based evidence and transportability in SIOP, 2018; see also Cronbach, 1980; McPhail, 2007).
These developments are particularly pertinent to the Air Force for two reasons. First, the process of developing, maintaining, and collecting data on current job performance measures at a meaningful level of detail for several hundred AFSCs would be a time- and cost-intensive endeavor. Second, even if high-quality measures were universally available to support validation work Air Force wide, some AFSC/rank combinations are so small that reliable empirical associations between scores on a selection/classification assessment and scores on a measure of subsequent job performance could not be established. Consequently, the Air Force may benefit from alternative, modern approaches to evaluating validity evidence for its selection and classification processes (e.g., leveraging modern developments in synthetic validation,56 conducting transportability studies,57 leveraging meta-analytic evidence, combining samples with the U.S. Navy in similar career fields, and developing content-oriented validation arguments; see McPhail, 2007 for further examples). These approaches would not require development and ongoing maintenance of job performance measures for a large number of AFSC/ranks simply to support selection and classification research and evaluation. If the Air Force considers using administrative data as measures of Airmen job performance, such measures should meet professional best practices for quality (e.g., exhibit meaningful, reliable variance that is free from contamination, as outlined in SIOP, 2018 and AERA, APA, and NCME, 2014).
55 To be clear, the committee is not suggesting that efforts to develop or maintain job performance measures for each AFSC/rank would be unbeneficial for administrative purposes, such as Airmen performance appraisal or providing Airmen performance feedback. The suggestion here pertains to job performance measures specifically to be used in a research capacity for evaluating selection and classification assessments and decision making. This distinction matters because job performance measures collected for administrative purposes often exhibit characteristics that make them problematic to use as criteria when validating selection and classification processes; the issues with using administrative performance appraisals as criteria in validation research have been widely recognized and discussed in the research literature (e.g., SIOP, 2018).
56 According to the American Psychological Association dictionary, synthetic validity “involves systematically analyzing a job into its elements, estimating the validity of the test or predictor in predicting performance on each of these elements, and then combining the validities for each element to form an estimate of the validity of the test or predictor for the job as a whole. Synthetic validity can be useful in estimating the validity of selection procedures in small organizations where the larger samples required in concurrent validity and predictive validity are not available” (see https://dictionary.apa.org/synthetic-validity). For a review of synthetic validity, see McPhail, 2007.
57 According to the Uniform Guidelines (Section 7B) of the Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, and Department of Justice (1978), the process of “transportability” allows “use of criterion-related validity evidence from other sources” to be borrowed or “transported” to another setting under certain conditions.
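The core arithmetic of synthetic validation can be illustrated with a minimal sketch. All numbers and job-element names below are hypothetical, and operational synthetic-validation models (see McPhail, 2007) are considerably more elaborate than this importance-weighted average.

```python
# Illustrative synthetic-validity sketch. Job-element names, importance
# weights, and predictor-element validity estimates are invented, not
# actual Air Force figures.

# Estimated validity of a single predictor (e.g., a mechanical subtest) for
# each job element, derived from larger pooled samples where direct
# criterion-related validation is feasible.
element_validities = {
    "equipment_operation": 0.45,
    "record_keeping": 0.20,
    "teamwork": 0.10,
}

def synthetic_validity(importance_weights, validities):
    """Importance-weighted average of element validities: a rough estimate
    of overall predictor validity for a job too small to study directly."""
    total = sum(importance_weights.values())
    return sum(importance_weights[e] * validities[e] for e in validities) / total

# A small AFSC described by how much each job element matters to the job.
small_afsc_weights = {"equipment_operation": 0.6, "record_keeping": 0.3, "teamwork": 0.1}
print(round(synthetic_validity(small_afsc_weights, element_validities), 3))
```

The appeal for small AFSC/rank cells is that the element-level validities are estimated once from adequately sized samples, while only the (job-analytic) importance weights must be determined for each small specialty.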
With respect to retention-related outcomes (e.g., training completion, attrition, re-enlistment, post-ADSC continuance), clearly specifying and defining their nature and scope involves multiple considerations. First, it means getting specific about the “maturity” of the retention measure the Air Force wants to impact through its selection and classification processes (e.g., initial entry training attrition, in-unit attrition, attrition through a given time point in an initial term of service or ADSC). Second, it involves specifying the type of retention it aims to impact (e.g., attrition due to behavioral/performance reasons or lack of satisfaction with one’s AFSC).
An infrastructure that accurately captures when and why Airmen leave active duty is necessary to support the use of retention-related outcomes for evaluating validity evidence for selection and classification processes. In putting this infrastructure in place, it is important to note that this is more than simply an information technology or database management issue. The Air Force currently has systems that warehouse data on whether and when Airmen leave the force, but not in the detail required for longitudinal research on retention strategies. For example, exit interviews are conducted sporadically, and the contextual data surrounding exit decisions are not collected. As such, the quality with which the Air Force maintains accurate indicators of why Airmen leave the force is unclear. Furthermore, a centralized human capital data superstructure that includes separation and retention data would facilitate analysis of the quality of Airmen who choose to remain or separate (e.g., are high performers or low performers in a particular career field more likely to leave?).
Based on past research, there appears to be a non-trivial amount of “noise” in the formal separation codes used across the U.S. military, which are often maintained to some degree in formal human capital information systems (e.g., see Strickland, 2005). Those codes may or may not reflect the real reason (or combination of reasons) an Airman has left the service. Therefore, the Air Force could benefit from evaluating the accuracy of the separation codes it uses to document Airmen departures and the consistency with which those codes are applied. Such an evaluation could also draw on qualitative data, which may be reflected in textual explanations that resist keyword-type coding. Recent improvements to the Air Force exit survey to more effectively assess the factors that influence decisions to leave the Air Force, along with efforts to capture longitudinal changes in those influences, provide important insight and opportunity for further research. However, in contrast to separation codes, which are assigned to all exiting Airmen, the 2019 exit survey was completed by only 24 percent (5,749 total responses) of those with a date of separation. Of those respondents, 72 percent were enlisted Airmen (E-1 to E-9) and 27 percent were officers (O-1 to O-6) (Mitchell, 2020). The low response rate leaves clear room for increased participation. Analysis of the detailed data captured in the exit survey is linked to A1 policies and programs, but the data could also be used to improve the accuracy and relevance of separation codes and other analyses if they were accessible to researchers through a human capital data superstructure.
Such research could inform improvements as to how the Air Force documents separation reasons and the data that result. An accurate picture of reasons Airmen separate or attrit (including attention to dissatisfaction with assignments) can also help to reveal those types of separations that could meaningfully be predicted by selection and classification decision processes (e.g., attrition due to behavioral/performance reasons or lack of satisfaction with one’s AFSC or specific job assignments and/or better work opportunities externally) and those types of separations that cannot be readily predicted by such processes (e.g., attrition due to reasons beyond an Airman’s direct control, such as injuries not tied to pre-existing conditions, working with a hostile superior [Hanges, 2019] or peers, or working in a poor environment). Ideally, when evaluating the efficacy of selection and classification for impacting retention-related outcomes such as attrition, the Air Force would base such evaluation on types of separation decisions that can be used to effectively update selection and classification processes. A lack of clear data on why Airmen are leaving the force diminishes the Air Force’s ability to do such evaluations or establish an “early warning system” to identify recruits at risk of early attrition. Increased use of assignment tools like the Talent Marketplace will allow relevant longitudinal data to be collected on the extent to which Airmen’s separation decisions may be related to their preferences over the positions to which they could have been assigned as compared to the one to which they were assigned (see Appendix D).
Current entry-point testing and other assessment programs in use by the Air Force and described above raise several questions regarding the future of entrance standards and potential improvements to assessment processes.58 Below, the committee reviews four central questions:
58 Entrance standards also include medical and physical standards. The committee’s data collection (and this set of research questions) did not explore the research base behind those standards, but questions about cognitive standards also apply to those other standards (see discussion of waivers to standards below).
- Can modern computational capabilities be leveraged to produce a more comprehensive, weighted composite of information available during the recruiting process that would improve upon the current AFQT composite for initial eligibility screening?
In particular, Assembling Objects, which replaced Numerical Operations and Coding Speed in the ASVAB in 2002, measures spatial abilities that are relevant in STEM fields and Air Force specialties, such as pilots, engineers, and perhaps unmanned vehicle operators (e.g., Humphreys et al., 1993; Kell et al., 2013; Robertson et al., 2010). Over the course of its study, the committee heard from numerous stakeholders (particularly trainers and career field managers of pilots, including those of remotely piloted vehicles, cyber warriors, and aircraft maintainers) who expressed concern about whether entry standard composites accurately reflect (sometimes rapidly) changing service and occupational needs.59 Depending on the flow of applicants and the accumulation of data for a given AFSC, modern computing power would allow entry standard composite algorithms to be re-estimated and updated at regular intervals.
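One simple way to picture such periodic re-estimation is as a regression of a training outcome on subtest scores, rerun as new cohorts accumulate. The sketch below uses simulated data; the subtest names, sample size, and weights are invented for illustration and do not represent actual Air Force composites.

```python
# Hypothetical sketch: re-estimating subtest weights for an AFSC entry
# composite by regressing a training outcome on subtest scores.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Simulated applicant subtest scores (standardized); columns might represent
# e.g. arithmetic, mechanical, and spatial subtests.
X = rng.normal(size=(n, 3))
true_w = np.array([0.5, 0.2, 0.4])              # unknown "true" subtest importance
y = X @ true_w + rng.normal(scale=0.7, size=n)  # simulated training grade

# Ordinary least squares recovers updated composite weights from current
# data; rerunning this on a regular schedule keeps the composite aligned
# with changing occupational demands.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
composite = X @ w_hat                            # updated weighted composite score
print(np.round(w_hat, 2))
```

An operational system would of course add safeguards this sketch omits: adequate sample sizes per AFSC, criterion quality checks, adverse-impact analysis, and review before any cut score tied to the composite is changed.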
- How can non-cognitive and biographical data measures that predict citizenship (individual behaviors in groups) and counterproductive components of job performance (Rotundo and Sackett, 2002) be used for eligibility screening?
General cognitive ability tests, such as the ASVAB, primarily predict task performance (i.e., core technical job proficiency) (Campbell, 1990; McHenry et al., 1990) and retention (Strickland, 2005), but this leaves large assessment gaps in areas important for a successful Air Force career, especially for success in leadership positions. Over the course of the study, the committee repeatedly heard of a desire for early identification of candidates at risk of toxic leadership, sexual harassment, or other counterproductive behaviors, followed by non-selection or counseling, to avoid negative consequences for individual Airmen, teams, and larger units.60 Expanding the types of assessments conducted can enable better identification and “screening out” of candidates demonstrating negative and toxic behavioral attributes.
59 Important discussions on this topic occurred during committee data-gathering sessions with Air Force Headquarters at the Pentagon (September 26–27, 2019), Air Education and Training Command at Joint Base San Antonio-Randolph (November 5–8, 2019), and 2nd Air Force at Keesler Air Force Base in Biloxi, MS (January 9, 2020), and with CMSgt Kaleth O. Wright, USAF, Chief Master Sergeant of the Air Force (November 21, 2019).
60 Concerns over toxic leadership and other counterproductive behaviors were raised in one form or another at almost every site visit or briefing by USAF representatives.
- What technological advances or administration procedures (e.g., secure data transfer) are necessary to assess applicants (in whole or in part) via the internet and to use modern security/proctoring and aberrance detection methods to verify scores and prevent cheating, in order to reduce or expedite MEPS testing?
Although some assessments are already web-based (e.g., the AF-WIN job interest survey and, for officers, the Test of Basic Aviation Skills (TBAS), which has undergone trials for remote proctoring and mobile testing), there is room and need for expanded capabilities, especially for assessments where cheating or faking may be more likely. This question becomes especially important when considering potential additional entrance assessments, because any expansion would likely require additional time that could disrupt the flow of applicants through the MEPS. In particular, USAFRS representatives conveyed that applicants’ time at the MEPS (typically two full days) is already fully scheduled, and allocating additional time for individual testing is not feasible. Further, there is a higher-level desire to allocate MEPS time to administering tests to more applicants, rather than more tests to individual applicants.61
- How can emerging assessment technologies be used to improve accessions decisions, specifically selection and classification processes?
Downstream research in selected subpopulations of Airmen working in assigned careers and specific jobs may offer important data on successful and unsuccessful person-job matches that can be combined with emerging assessment technologies to improve entrance standards into particular career fields. Those emerging technologies, such as serious gaming (games designed for a primary purpose other than entertainment), virtual reality simulation, and sensor-based measurement, are not useful for large-scale screening currently (Landers, 2019; Patrick, 2019), but they may be beneficial for improved selection into a particular AFSC from among a subset of AFSCs identified through traditional processes (the Air Force is already using some of these for recruitment and training purposes, so their expanded use should be relatively easy to implement62).
61 The committee met with several representatives of USAFRS during a committee meeting (July 30, 2019) and in conjunction with the site visit to Joint Base San Antonio-Randolph (November 5–8, 2019).
62 The use of such technologies for recruitment was described by representatives of USAFRS during a committee meeting in Washington, DC (July 30, 2019) and in conjunction with the site visit to Joint Base San Antonio-Randolph (November 5–8, 2019); their use in training was discussed numerous times over the course of the study, including during committee site visits to Air Education and Training Command at Joint Base San Antonio-Randolph (November 5–8, 2019) and Wright-Patterson Air Force Base (November 12–13, 2019).
Similar to the individually adaptive training approach of the Pilot Training Next initiative,63 emerging assessment technologies have the potential to enable applicants to more effectively and efficiently demonstrate their aptitudes, strengths, skills, and potential for success in a particular career field (as described in Chapter 3, being “innovative yet disciplined” is a key attribute of an ideal system).
All military services confront the challenge of filling a vast array of jobs of varying complexity with people of varying ability who have only minimal familiarity with most of those jobs (with consequences across the human capital system, including training and person-job fit, as shown in Figure 2-2). To identify which jobs a recruit is qualified to perform, each AFSC requires minimum MAGE qualification scores in areas relevant to the career field (e.g., Flight Engineer, G57; Intelligence Applications, A64). Consistent with testing standards, routine job analyses are essential to ensure that test results are indeed valid predictors of subsequent job performance (AERA, APA, and NCME, 2014).
Specific to the needs of the Air Force, it is essential that AFQT and MAGE cut scores for more than 100 enlisted AFSCs64 remain effective; that potential impacts of adjusting AFQT and qualification area cut scores are fully understood; and that the potential benefits of using additional measures for person-job matching are considered. Furthermore, consideration should be given to better understanding the work experience of incoming recruits and better assessing the extent to which those experiences have predictive power for Air Force career performance, and so determining which experiences might usefully be considered in selection and classification processes. To collect information on incoming recruits, self-reporting of experience provides a preliminary starting point, but ultimately, skills and competency levels require verification through testing or other means of assessment.
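One basic input to understanding the impact of adjusting a cut score is how the share of qualifying applicants shifts as the cut moves. The sketch below is purely illustrative: the score distribution and cut values are invented, not actual AFQT or MAGE parameters.

```python
# Illustrative sketch of one input to a cut-score review: how the share of
# applicants qualifying for an AFSC changes as a composite cut score moves.
import random

random.seed(1)
# Simulated composite scores for an applicant pool (hypothetical distribution).
applicant_scores = [random.gauss(50, 10) for _ in range(10_000)]

def qualification_rate(scores, cut):
    """Fraction of applicants at or above the cut score."""
    return sum(s >= cut for s in scores) / len(scores)

for cut in (45, 50, 55, 60):
    print(f"cut {cut}: {qualification_rate(applicant_scores, cut):.1%} qualify")
```

A full analysis would weigh these supply effects against predicted training success and job performance at each candidate cut score, along with adverse-impact considerations, before any adjustment.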
In addition to exploring different composites and cut scores and facilitating feedback mechanisms to improve selection and classification processes, research into whether “fit” with one’s assigned or desired AFSCs is related to performance and retention could benefit the Air Force, particularly as a secondary mechanism for classification beyond minimum qualification standards. In the industrial and organizational psychology lit-
63 For more information, see: https://www.aetc.af.mil/About-Us/Pilot-Training-Next/.
64 Enlisted AFSC classifications are available at https://www.af.mil/About-Us/Fact-Sheets/Display/Article/104609/enlisted-afsc-classifications/. Note that some codes are not available at initial classification and others are used to administratively classify Airmen in special circumstances (e.g., awaiting retraining or a patient).
erature, several methods have been explored for assessing person-environment fit (Edwards, 2008; Edwards et al., 2006; Kristof-Brown et al., 2005). In general, fit is assessed either by asking individuals how well they match a particular environment or job (perceived fit), or by separately measuring person and environment attributes and using a statistical method, such as polynomial regression, to assess congruence (Edwards, 2007; Nye et al., 2019). It is an open question whether congruence between individual and occupational needs is more important than uniformly high marks on measures of relevant KSAOs. Furthermore, expressed occupational preferences, to the extent they can be accurately collected (see Appendix D), may also play an important role in effective person-job fit (there is a long history of labor research in this area; see, for example, Freeman, 1978 and Green, 2010).
Conversely, it is also unknown whether it is more beneficial to minimize misfit rather than to maximize fit, as gross misfit may lead to poor performance, low job satisfaction, and early attrition. There is also the question of whether machine learning methods add value in these regards, especially in developing algorithms capable of making decisions across a much larger number of variables and outcomes than could be handled through simple matching. For example, such methods could allow the concept of fit (or misfit) to be extended from individuals to team composition and role assignments (DeChurch, 2019), and could provide guidance to Airmen and position owners about what assignments they might prefer. These issues are important to understand because the vast majority of Air Force careers require individuals to work in teams. Especially as automation (e.g., robots and AI) is increasingly integrated into teams, a comprehensive research agenda on team dynamics would be useful to inform the assembly and organization of teams (DeChurch, 2019).
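The polynomial regression approach to congruence mentioned above can be sketched briefly. Following the Edwards-style specification, an outcome is regressed on person (P) and environment (E) scores plus their quadratic terms, rather than on a single difference score. The data here are simulated for illustration only.

```python
# Minimal sketch of polynomial regression for person-environment fit:
# regress an outcome on P, E, P^2, P*E, and E^2 (plus an intercept).
import numpy as np

rng = np.random.default_rng(42)
n = 1000
P = rng.normal(size=n)   # person attribute (e.g., measured skill level)
E = rng.normal(size=n)   # environment attribute (e.g., job demand level)
# Simulated outcome: satisfaction falls as squared misfit (P - E)^2 grows.
y = 1.0 - 0.5 * (P - E) ** 2 + rng.normal(scale=0.3, size=n)

# Design matrix: intercept, P, E, P^2, P*E, E^2.
X = np.column_stack([np.ones(n), P, E, P**2, P * E, E**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
# For a pure (P - E)^2 misfit effect, the quadratic coefficients should be
# roughly b[P^2] = -0.5, b[P*E] = +1.0, b[E^2] = -0.5.
print(np.round(b, 2))
```

The advantage over difference scores is that the five-term model lets the data reveal whether the surplus (P above E) and deficit (P below E) sides of misfit matter symmetrically, a question a single difference score cannot answer.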
The foregoing discussion is not meant to imply that the Air Force has not considered fit when it comes to classification and future assignments. Although it can be difficult for potential recruits to develop informed preferences about AFSCs, there are resources available to help. Specifically, recruiters provide a primary source of information and official and unofficial online videos provide realistic previews. Later in an officer’s career, the recently implemented Talent Marketplace provides an innovative online means for matching Airmen with assignments based on expressed preferences of Airmen and position owners for post-accession job assignments. However, the usefulness of expressed preferences depends a good deal on how much information is available to Airmen about positions and to position owners about Airmen (through the Talent Marketplace), given that such information profoundly shapes preferences. Consequently, the most effective version of the Talent Marketplace would also serve as
an information marketplace that allows position owners and Airmen to make appropriate information available to each other to form informed, accurate preferences (e.g., realistic job previews, using video and written descriptions). The key point is that matching, and the overall functioning of the USAF HCM system, may be improved by developing new methods of sharing preferences as well as new algorithms for taking preferences into account (see Appendix D) along with the KSAOs, composites of KSAOs, and fit/misfit operationalized in different ways. Such research has impacts across the USAF HCM system, including person-job fit and training effectiveness (as shown in Figure 2-2).
Waivers to Standards
There are provisions in Air Force policy for applicants to request a waiver to most standards (USAF, 2019c). As with the standards themselves, good practice in HCM requires that at a minimum the rationale behind waiver decisions be documented. An ideal system would incorporate data-driven decisions into waiver policies. An ideal system would also subject its waiver policies to routine re-evaluation (as described in Chapter 3, being “accurately informed and informative” and “understood and trusted” are key attributes of an ideal system).
The Air Force could strengthen its waiver process by systematically documenting and examining the types of information authorities use when approving/denying waiver requests. The committee found no evidence during the course of the study that this has been done, nor that there is a system in place for tracking and periodically evaluating how such decisions are made. Examining the types of information authorities gather and how they use that information to make a decision will give the Air Force a critical baseline from which to start evaluating and improving the process of approving or denying waiver requests. Ideally, this would in turn positively impact downstream outcomes such as Airman performance, retention, and diversity.
For example, with the baseline described above, the Air Force would be in a position to analyze whether types of information being used to evaluate waiver requests actually relate to downstream outcomes such as Airman job performance, retention, and diversity. It is important for the Air Force to realize that the waiver approval/denial decisions are outcomes of assessments of individuals and, as such, they are personnel decisions that should be subject to the same scrutiny as any test or decision-making process an employer might use to hire people for a job. There are clear professional guidelines and standards that are pertinent for evaluating personnel decision-making processes and should be considered as the Air Force examines its waiver approval process (e.g., AERA, APA, and NCME, 2014; SIOP, 2018). A
critical component of these guidelines and standards is that the information used to make personnel decisions be linked to outcomes of value for the employer (e.g., employee performance and retention). In other words, unaligned entrance standards burden the system with waivers and make the recruiting enterprise unnecessarily hard for little or no valid reason.
As another example, with the baseline described above, the Air Force would be in a position to evaluate how consistently waiver approval/denial decisions are made (e.g., across individual decision-making authorities, and across different types of standards—conduct, medical, physical, cognitive, and so on). Consistency is an important characteristic of effective personnel decision making. For example, if some waiver approval authorities are using information or processes that have little relation to subsequent Airman performance or retention while others are using more valid information, that reflects an inconsistency in how personnel decisions are being made.
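One simple way to screen for the inconsistency described above is to compare approval rates across authorities on broadly comparable cases. The sketch below is hypothetical: the decision records, authority labels, and the idea of flagging a large spread in rates are all fabricated for illustration, not drawn from Air Force practice.

```python
# Hypothetical sketch: flagging potential inconsistency in waiver decisions by
# comparing approval rates across decision-making authorities on comparable
# cases. All records are fabricated for illustration.
from collections import defaultdict

decisions = [  # (authority, approved?)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
    ("C", True), ("C", True), ("C", False), ("C", False),
]

counts = defaultdict(lambda: [0, 0])  # authority -> [approved, total]
for auth, approved in decisions:
    counts[auth][0] += approved
    counts[auth][1] += 1

rates = {a: ok / n for a, (ok, n) in counts.items()}
spread = max(rates.values()) - min(rates.values())
print(rates, f"spread = {spread:.2f}")
# A large spread on comparable cases would prompt a closer look at how each
# authority weighs the available information.
```

A rate comparison like this is only a first-pass screen; a real analysis would need to adjust for genuine differences in the case mix each authority sees.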
Without systematic examination of the issues above, authorities in charge of waiver approval may be systematically (albeit inadvertently) disadvantaging groups of applicants (e.g., by gender or race/ethnicity), thus potentially harming force diversity. Without systematic reviews of the system as a whole, the Air Force has no way to determine whether its current practice is any better or worse than randomly determining whose waiver gets approved and whose gets denied. By pursuing the research into the waiver approval process described above, the Air Force would be better positioned to clarify the types of information that waiver approval authorities should consider and how that information should be combined to make waiver approval/denial decisions; this also has the advantage of aligning more closely with professional standards and guidelines.
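A conventional first check for the group-level disadvantage described above is the four-fifths rule from the Uniform Guidelines (Equal Employment Opportunity Commission et al., 1978), applied here to waiver approval rates. The sketch below is hypothetical: the group labels and counts are fabricated for illustration only.

```python
# Hypothetical sketch: a four-fifths-rule check on waiver approval rates by
# applicant group. Group names and counts are fabricated for illustration.
approved = {"group_1": 80, "group_2": 30}
requested = {"group_1": 100, "group_2": 60}

rates = {g: approved[g] / requested[g] for g in approved}
impact_ratio = min(rates.values()) / max(rates.values())
print(f"approval rates: {rates}, impact ratio = {impact_ratio:.2f}")

# Under the Uniform Guidelines convention, a ratio below 0.80 is treated as
# evidence of potential adverse impact warranting further investigation.
flag = impact_ratio < 0.80
```

A flagged ratio does not by itself establish unfairness, but it identifies where the deeper validity and consistency analyses described above should be focused.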
How waiver approval/denial decisions are made is a basic question. A second, more fundamental question underpinning the decision process is whether a given standard is necessary at all. At present the Air Force, driven in part by Department of Defense policy, maintains clear thresholds for its entrance standards, including the medical, conduct, physical, and cognitive standards described in the first part of this chapter. Though the thresholds are clearly specified in policy documents (e.g., USAF Manual 36-2032 [USAF, 2019c]), the rationale or research foundation behind them often is not.65 As with research into the waiver approval/denial decision process described above, the Air Force would benefit from a systematic examination of how these standards were established, what evidence (if any) exists that the criteria actually relate to subsequent Airman performance, retention, and diversity, and whether the standards are equally defensible across AFSCs. To facilitate this research, especially for AFSCs with small numbers of Airmen and/or standards that receive few waivers, the Air Force may be able to partner with the other services to increase sample sizes for service-similar occupational specialties or waivers.

65 Two notable exceptions are repeated analyses demonstrating (1) significantly higher rates of completing a normal term of service among cohorts with a high school diploma than among those without one and (2) performance improvements associated with higher AFQT scores (Asch et al., 2005).
As an example, throughout this study the committee heard about how changes in mission and technology are shifting the nature of the work required of Airmen in various AFSCs (e.g., in the rapidly evolving cyber domain and among pilots coordinating multiple, increasingly automated systems66) (Atkins, 2020). Based on what the committee heard, it is not clear whether the basis (i.e., the rationale and research foundation) for all medical, conduct, physical, and cognitive standards is understood. Nor is it clear what processes are in place to ensure standards remain valid given variation across AFSCs and changes in the nature of Air Force work (as described in Chapter 3, being “mission responsive” is a key attribute of an ideal system). For example, there are numerous types of waivers for various medical conditions. Has research evaluated the relationship between those medical conditions and performance in each AFSC (Baumgartner, 2019)? The standards themselves may not be tailored enough to reflect the needs of individual AFSCs, yet they have the potential to bar promising recruits from an AFSC in which the medically or physically disqualifying condition is not pertinent to performing well (e.g., imposing overly stringent medical/physical standards on cyber-focused AFSCs).67 However, the committee emphasizes that changes to standards should be considered in light of the ecosystem model (see Figure 2-2) to understand potential long-term consequences in other areas.
In considering potential changes to standards, a critical examination of the thresholds the Air Force uses for requiring waivers in the first place (at the level of the individual AFSC) would provide important information upon which to base human capital policy and individual human capital decisions. Numerous examples of research into waiver thresholds can be found in the past military personnel research literature (some of it done by the Air Force, e.g., Putka and Allen, 2008; see also Strickland, 2005, and Putka et al., 2003), but such studies have generally focused on force-wide policy rather than on individual occupations.

66 The issue of technology and changing work requirements was raised during multiple committee data-gathering sessions, including sessions with Air Force Headquarters at the Pentagon (September 26–27, 2019); Air Education and Training Command at Joint Base San Antonio-Randolph (November 5–8, 2019); the Air Force Institute of Technology at Wright-Patterson Air Force Base, Dayton, OH (November 13, 2019); 2nd Air Force at Keesler Air Force Base, Biloxi, MS (January 9, 2020); and USAFA, Colorado Springs, CO (January 17, 2020).

67 Information provided to the committee during data-gathering sessions at Air Education and Training Command at Joint Base San Antonio-Randolph (November 5–8, 2019) and Air University, Maxwell Air Force Base, Montgomery, AL (January 15, 2020).
The expected benefit to the Air Force of research into the thresholds that trigger the requirement of waivers by AFSC is twofold. First, for hard-to-fill AFSCs (e.g., cyber), the Air Force may stand to benefit from relaxing certain standards (e.g., physical fitness or chronic but controllable conditions) or from extending the time allowed to meet physical fitness standards from 8 weeks to 18 months, if those standards have little or no relation to subsequent Airman performance or retention in those AFSCs (as described in Chapter 3, being “agile and flexible” is a key attribute of an ideal system, although consideration should be given to long-term consequences across the human capital ecosystem, such as medical care, especially consequences that may arise if Air Force standards differ from DoD standards). Thus, by revisiting the standards, particularly for hard-to-fill AFSCs, the Air Force could potentially increase the pool of eligible Airmen with no detriment to subsequent performance or retention.
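The pool-expansion side of this argument is straightforward to estimate once applicant data are in hand. The sketch below is hypothetical: the applicant scores and both cutoffs are fabricated to illustrate how relaxing an entry threshold for a hard-to-fill AFSC changes the size of the eligible pool.

```python
# Hypothetical sketch: estimating how much a relaxed entry threshold would
# expand the eligible applicant pool for a hard-to-fill AFSC. Scores and
# cutoffs are fabricated for illustration.
applicant_scores = [48, 52, 55, 58, 61, 63, 67, 70, 74, 78, 81, 85]

def eligible(scores, cutoff):
    """Count applicants at or above the cutoff."""
    return sum(s >= cutoff for s in scores)

current_cutoff, relaxed_cutoff = 65, 55
n_now = eligible(applicant_scores, current_cutoff)
n_relaxed = eligible(applicant_scores, relaxed_cutoff)
print(f"eligible at {current_cutoff}: {n_now}; at {relaxed_cutoff}: {n_relaxed}")
# Pool size is only half the analysis: whether the original cutoff actually
# predicts performance or retention must be examined before relaxing it.
```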
Second, imposing standards that bear little relation to subsequent performance or retention clearly goes against professional standards and guidelines for personnel decision making (e.g., AERA, APA, and NCME, 2014; SIOP, 2018). Thus, from the perspective of professional standards and best practices, it is imperative that the Air Force systematically and periodically examine this issue in order to adjust, as appropriate, entry standards and the waiver requirements attached to them. However, any changes to standards should be considered across the entire human capital ecosystem (see Figure 2-2) to identify how single-point changes could affect distal system components (e.g., retirement and medical benefits and costs).
The Air Force can be justly proud of its recruiting, selection, and classification systems, which produce widely admired results on a scale matched only by the other military services. But the challenges ahead require that it correct the failings and weaknesses its leaders acknowledge—that it commit to continuous improvement. This chapter has identified the issues the committee believes most deserve attention to improve the Air Force accessions processes of recruitment, selection, and classification.
This chapter began by discussing the importance of clarity about desired outcomes. Without a clear delineation of the relative importance the Air Force places on different outcomes, there is no clear, consistent standard for judging system effectiveness or for prioritizing improvements. One set of outcomes concerns fit: of persons with jobs, with occupations, and with organizations. Is fit more (or less) important than high marks on KSAOs? Is the real issue perhaps misfit, which leads to poor performance, low satisfaction, and early attrition? The Air Force’s currently fragmented set of competency models may be holding it back from developing and leveraging the full potential of its Airmen. The committee found little alignment among the competency models used across the Air Force, with the risk that the models may establish inconsistent competency expectations across a career trajectory. Aligned competency models can help track Total Force capacities across skill areas, making it easier to recognize talent misalignment and adapt training quickly.
Whatever outcomes the Air Force selects, it must choose the criteria that define success and develop a deep understanding of how its HCM system affects those criteria. For example, an infrastructure that accurately captures when and why Airmen leave active duty is essential to using retention-related outcomes to evaluate validity evidence for selection and classification processes. Downstream metrics on Airmen working in assigned careers and specific jobs may help the Air Force understand success in person-job matches, which can in turn be used to improve entrance standards, using contemporary computing power to create and continuously refine entry algorithms. Better entry algorithms will improve Air Force screening—including, potentially, screening out those with negative or toxic behavioral attributes.
Addressing these issues requires data—data that are now scattered across Air Force organizations. The Air Force needs a “Data Superstructure” that provides appropriate access to individual histories, including test scores, permanent ratings, and fit indices. And it must commit to exploiting those data in a purposeful way: starting with the job and occupational analysis that is the foundation of selection and classification algorithms; eventually embracing a program of research that looks beyond initial selection and classification to questions of why the Air Force is sometimes willing to waive its standards; and ultimately addressing questions of leadership and teamwork that may improve the way the Air Force approaches its accessions processes.
AERA (American Educational Research Association), APA (American Psychological Association), & NCME (National Council on Measurement in Education). (2014). Standards for educational and psychological testing. Washington, DC: Authors.
Asch, B.J., J.A. Romley, & M.E. Totten. (2005). The Quality of Personnel in the Enlisted Ranks. Santa Monica, CA: RAND Corporation. Available: https://www.rand.org/content/dam/rand/pubs/monographs/2005/RAND_MG324.sum.pdf.
Atkins, E. (2020, February 20). Future DoD Workforce: Seamless Human-Systems Integration. Virtual presentation to the committee. Presentation materials available by request through the study’s public access file.
Barelka, A., L. Barron, M. Coggins, S. Hernandez, & P. Kulpa. (2019). Development and Validation of Air Force Foundational Competency Model. Technical Report No. 1. Air Education and Training Command. Available by request through the study’s public access file.
Barron, L. (2019, July 30). Force Development Competencies: Implications for Personnel Selection and Classification. Air Education and Training Command. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.
Baumgartner, N. (2019, November 7). Air Force Physical Fitness. Presentation to the committee, Joint Base San Antonio-Randolph, San Antonio, TX. Presentation materials available by request through the study’s public access file.
Bennett, W. (2019, November 13). Competency-Based Methods for Current and Future Ops Training Overview. Presentation to the committee, Wright-Patterson Air Force Base, Dayton, OH. Presentation materials available by request through the study’s public access file.
Brown, C.Q. Jr. (2020). Senate Armed Service Committee Advance Policy Questions for General Charles Q. Brown, Jr., U.S. Air Force Nominee for Appointment to be Chief of Staff of the Air Force. Available: https://www.armed-services.senate.gov/imo/media/doc/Brown_APQs_05-07-20.pdf.
Campbell, J.P. (1990). An overview of the Army selection and classification project (Project A). Personnel Psychology, 43(2), 231–239.
Campbell, J.P., & M.P. Wilmot. (2018). The functioning of theory in industrial, work and organizational psychology (IWOP). In D.S. Ones, N. Anderson, C. Viswesvaran, & H.K. Sinangil (Eds.), The SAGE handbook of industrial, work and organizational psychology: Personnel psychology and employee performance (pp. 3–37). London, UK: Sage.
Campbell, S. (2019, September 26). Field Testing Synthesis. Presentation to the committee, Pentagon, Arlington, VA. Presentation materials available by request through the study’s public access file.
Campion, M.C., D.J. Schepker, M.A. Campion, & J.I. Sanchez. (2019). Competency modeling: A theoretical and empirical examination of the strategy dissemination process. Human Resource Management, 59(3), 291–306.
Coggins, M. (2019). Bullet Background Paper on Foundational and Occupational Competency Development. Air Education and Training Command. Available by request through the study’s public access file.
Constable, S., & B. Palmer, Eds. (2000). The Process of Physical Fitness Standards Development. Dayton, OH: Human Systems Information Analysis Center. Available: https://pdfs.semanticscholar.org/dfd7/57ea798f7b8e50b775a6b4dffbe1e667b798.pdf.
Cronbach, L.J. (1980). Validity on parole: How can we go straight? In W.B. Schrader (Ed.), Measuring achievement: Progress over a decade. New directions for testing and measurement, no. 5 (pp. 99–108). San Francisco, CA: Jossey-Bass.
DeChurch, L.A. (2019, November 22). Leading Teams. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.
Dillenburger, S., J. Caussade, D. Gaertner, & M. Davis. (2019, November 6). AETC Studies and Analysis Squadron. Presentation to the committee, Joint Base San Antonio-Randolph, San Antonio, TX. Presentation materials available by request through the study’s public access file.
DoD (Department of Defense). (2013). Qualitative Distribution of Military Manpower. DoD Instruction 1145.01. Available: https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/114501p.pdf.
DoD. (2018). Summary of the 2018 National Defense Strategy of the United States of America. Available: https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf.
Edwards, J.R. (2007). Polynomial regression and response surface methodology. In C. Ostroff & T.A. Judge (Eds.), Perspectives on organizational fit (pp. 361–372). San Francisco, CA: Jossey-Bass.
Edwards, J.R. (2008). Person-environment fit in organizations: An assessment of theoretical progress. Academy of Management Annals, 2(1), 167–230.
Edwards, J.R., D.M. Cable, I.O. Williamson, L.S. Lambert, & A.J. Shipp. (2006). The phenomenology of fit: Linking the person and environment to the subjective experience of person-environment fit. Journal of Applied Psychology, 91(4), 802–827.
Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor & Department of Justice. (1978). Uniform guidelines on employee selection procedures. Federal Register, 43(166), 38290–38315.
Freeman, R.B. (1978). Job satisfaction as an economic variable. American Economic Review, 68(2), 135–141.
Green, F. (2010). Well-being, job satisfaction and labour mobility. Labour Economics, 17(6), 897–903.
Hanges, P.J. (2019, November 22). Consequences of Abusive and Toxic Leadership. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.
Hernandez, S. (2019, November 22). Cybersecurity Workforce Development. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.
Humphreys, L.G., D. Lubinski, & G. Yao. (1993). Utility of predicting group membership and the role of spatial visualization in becoming an engineer, physical scientist, or artist. Journal of Applied Psychology, 78(2), 250–261.
Kell, H.J., D. Lubinski, C.P. Benbow, & J.H. Steiger. (2013). Creativity and technical innovation: Spatial ability’s unique role. Psychological Science, 24(9), 1831–1836.
Kristof-Brown, A.L., R.D. Zimmerman, & E.C. Johnson. (2005). Consequences of individuals’ fit at work: A meta-analysis of person–job, person–organization, person–group, and person–supervisor fit. Personnel Psychology, 58(2), 281–342.
Landers, R.N. (2019, November 22). Modern Technologies for High Quality Psychometric Assessment. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.
Lecznar, W.B., & J.K. Klesch. (1965). Development and preliminary validation of the Electronic Data Processing Test-63. Lackland Air Force Base, TX: Personnel Research Laboratory.
Lytell, M.C., S. Robson, D. Schulker, T.C. McCausland, M. Matthews, L.T. Mariano, & A.A. Robbert. (2018). Training Success for U.S. Air Force Special Operations and Combat Support Specialties: An Analysis of Recruiting, Screening, and Development Processes. Santa Monica, CA: RAND Corporation. Available: https://www.rand.org/content/dam/rand/pubs/research_reports/RR2000/RR2002/RAND_RR2002.pdf.
McHenry, J.J., L.M. Hough, J.T. Toquam, M.A. Hanson, & S. Ashworth. (1990). Project A validity results: The relationship between predictor and criterion domains. Personnel Psychology, 43(2), 335–354.
McPhail, S.M. (2007). Alternative validation strategies: Developing new and leveraging existing validity evidence. San Francisco, CA: Jossey-Bass.
Miller, B. (2019, November 22). Selection and Assessment Practices, Ford Motor Company. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.
Mitchell, M. (2020, July 8). 2019 Exit Survey: Summary. Virtual presentation to the committee. Presentation materials available by request through the study’s public access file.
Murphy, P. (2019, November 22). Corporate Human Resources and Selection Practices: UPS Airlines. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.
Myers, M. (2020). Special operations using Artificial Intelligence, personality traits to recruit and select. Military Times. Available: https://www.militarytimes.com/news/your-military/2020/05/13/special-operations-using-artificial-intelligence-personality-traits-to-recruit-and-select.
NRC (National Research Council). (2015). Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession. Washington, DC: National Academies Press.
Nye, C.D., J. Prasad, J. Bradburn, & F. Elizondo. (2019). Improving the operationalization of interest congruence using polynomial regression. Journal of Vocational Behavior, 104, 154–169.
Patrick, C.J. (2019, November 22). Neurobehavioral Traits, Mental Health, and Adaptive Performance: A Multi-Modal Assessment Framework. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.
Pavelko, J. (2017). USSOCOM Implementation Plan Progress. U.S. Special Operations Command. Available: https://dacowits.defense.gov/Portals/48/Documents/General%20Documents/RFI%20Docs/June2017/SOCOM%20RFI%202.pdf.
Picano, J.J., R.R. Roland, T.J. Williams, & P.T. Bartone. (2019). Assessment and Selection of High-Risk Operational Personnel: Processes, Procedures, and Underlying Theoretical Constructs. Document provided to the committee by Robert Roland. Available by request through the study’s public access file.
Putka, D.J., & M.T. Allen. (2008). An empirical evaluation of the United States Air Force’s enlistment waiver policy (FR-08-21). Alexandria, VA: Human Resources Research Organization.
Putka, D.J., C.L. Noble, D.E. Becker, & P.F. Ramsberger. (2003). Evaluating moral character waiver policy against servicemember attrition and in-service deviance through the first 18 months of service (FR-03-96). Alexandria, VA: Human Resources Research Organization.
Ree, M.J., & J.A. Earles. (1991). Predicting training success: Not much more than g. Personnel Psychology, 44(2), 321–332.
Roberson, D.L., & M.C. Stafford. (2017). The Re-Designed Air Force Continuum of Learning: Rethinking Force Development of the Future. LeMay Paper Number 1. Air University, Maxwell Air Force Base, Montgomery, AL. Available: https://media.defense.gov/2017/Dec/05/2001852390/-1/-1/0/LP_0001_ROBERSON_STAFFORD_REDESIGNED_AIR_FORCE.PDF.
Robertson, K.F., S. Smeets, D. Lubinski, & C.P. Benbow. (2010). Beyond the threshold hypothesis: Even among the gifted and top math/science graduate students, cognitive abilities, vocational interests, and lifestyle preferences matter for career choice, performance, and persistence. Current Directions in Psychological Science, 19(6), 346–351.
Roland, R. (2019, November 22). Selection and Assessment of High-Risk Operational (or Any) Personnel: Defining a Model. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.
Rotundo, M., & P.R. Sackett. (2002). The relative importance of task, citizenship, and counterproductive performance to global ratings of job performance: A policy-capturing approach. Journal of Applied Psychology, 87(1), 66–80.
Rumsey, M.G., & J.M. Arabian. (2014). Military enlistment selection and classification: Moving forward. Military Psychology, 26(3), 221–251.
Sands, W.A., B.K. Waters, & J.R. McBride (Eds.). (1999). CATBOOK Computerized Adaptive Testing: From Inquiry to Operation (Report no. HUMRRO-FR-EADD-96-26). Alexandria, VA: Human Resources Research Organization.
Sims, C.S., C.M. Hardison, K.M. Keller, & A. Robyn. (2014). Air Force Personnel Research: Recommendations for Improved Alignment. Santa Monica, CA: RAND Corporation. Available: https://www.rand.org/pubs/research_reports/RR814.html.
SIOP (Society for Industrial and Organizational Psychology). (2018). 2018 Principles for the Validation and Use of Personnel Selection Procedures. Available: https://www.apa.org/ed/accreditation/about/policies/personnel-selection-procedures.pdf.
Strickland, W.J. (Ed.). (2005). A Longitudinal Examination of First Term Attrition and Reenlistment among FY 1999 Enlisted Accessions (Technical Report 1172). Arlington, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Thompson, N. (2007). Enlisted Selection and Classification Tests: Precursors of the ASVAB (Report no. AFCAPS-FR-2012-0005). Air Force Personnel Center Strategic Research and Assessment, HQ AFPC/DSYX. Available: https://apps.dtic.mil/dtic/tr/fulltext/u2/a594291.pdf.
USAF (U.S. Air Force). (2013). Classifying Military Personnel (Officer and Enlisted). Air Force Instruction 36-2101. Available: https://static.e-publishing.af.mil/production/1/af_a1/publication/afi36-2101/afi36-2101.pdf.
USAF. (2015a). Human Capital Annex to the USAF Strategic Master Plan. Available: https://www.af.mil/Portals/1/documents/Force%20Management/Human_Capital_Annex.pdf.
USAF. (2015b). USAF Strategic Master Plan. Available: https://www.af.mil/Portals/1/documents/Force%20Management/Strategic_Master_Plan.pdf.
USAF. (2018). Active Duty Service Commitments. Air Force Instruction 36-2107. Available: https://static.e-publishing.af.mil/production/1/af_a1/publication/afi36-2107/afi36-2107.pdf.
USAF. (2019a). Personnel Assessment Program. Air Force Manual 36-2664. Available: https://static.e-publishing.af.mil/production/1/af_a1/publication/afman36-2664/afman36-2664.pdf.
USAF. (2019b). Competency Modeling. Air Force Handbook 36-2647. Available: https://static.e-publishing.af.mil/production/1/af_a1/publication/afh36-2647/afh36-2647.pdf.
USAF. (2019c). Military Recruiting and Accessions. Air Force Manual 36-2032. Available: https://static.e-publishing.af.mil/production/1/af_a1/publication/afman36-2032/afman36-2032.pdf.
USAFRS (United States Air Force Recruiting Service). (2019a). Perspectives on Recruiting. Presentation to the committee, Joint Base San Antonio-Randolph, San Antonio, TX. Presentation materials available by request through the study’s public access file.
USAFRS. (2019b, November 6). Datasheets provided to the committee by Angelo Haygood. Available by request through the study’s public access file.
Yusko, K.P. (2019, November 22). Assessing and Developing Talent in Demanding Environments: Lessons Learned from the NFL Combine. Presentation to the committee, Washington, DC. Presentation materials available by request through the study’s public access file.