4 Characteristics of a High-Quality Process for Determining Disability Resulting from Traumatic Brain Injury

This chapter discusses general issues related to the accuracy of the disability determination process; in particular, it examines the characteristics that define a high-quality process for determining disability resulting from traumatic brain injury (TBI), including the reliability and validity of the assessments themselves. It also discusses the characteristics of good process indicators and approaches to reducing variability in the disability determination of TBI residuals. The chapter begins with a discussion of quality domains, including reliability, validity, burden, transparency, and credibility. It explains the importance of differentiating those domains in understanding process and outcome quality, and it comments on the many other factors that the Veterans Benefits Administration (VBA) must take into consideration when applying those concepts. Building on the descriptions in Chapter 3 of VBA's current adjudication process for residuals of TBI and the quality indicators that VBA currently measures, this chapter evaluates those indicators and offers considerations for additional indicators that could be used to assess and improve the disability rating process. The committee has included this chapter in its report because in discussions with VBA officials, great emphasis was placed on the consistency of the rating process itself (as noted in Chapter 3), rather than on the outcome of the disability determinations. Consistency of process was presented as an end in and of itself, with less of a focus on ensuring the reliability and validity of the assessments, i.e., the characteristics of the process needed to ensure that the veteran had been given an accurate disability rating.
VBA has taken great pains to train its raters so that they might accurately and reliably rate a disability; however, the emphasis on consistency of process does not actually ensure the reliability or the validity of the rating. Furthermore, and just as importantly, a lack of consistency in process does not necessarily mean there is a lack of reliability or validity. It is plausible that those factors are related to assessment performance, but that is not guaranteed to be true.

DEFINITIONS OF QUALITY DOMAINS

The determination of disability can be conceptualized as an assessment or measurement process whose components include all the steps in the diagnosis, evaluation, and disability rating of the residuals of TBI, resulting in an overall disability assessment for the veteran. The overall
quality of the disability determination process is multifactorial and includes aspects of process (e.g., the transparency of the process, the burden to the veteran) and outcome (e.g., the reliability and validity of disability determinations). Validity, in this context, is the degree to which the disability determination process results in the correct quantitative result for each veteran evaluated, over a wide range of injury severity, veteran characteristics, and geographic locations (Price, 2016). The committee's review of the Department of Veterans Affairs' (VA's) quality assurance measures found that the VA's quality measures focus on consistency within the disability rating step of the process (see Chapter 3). The VA assesses the quality of its disability ratings through regional quality review teams (QRTs) and the national Systematic Technical Accuracy Review (STAR). The QRTs identify individual rater-level errors and facility-level error trends. A member of the QRT notes critical errors, such as an incorrect effective date or an application that was approved that should have been denied. The STAR review uses a checklist to measure how consistently claims comply with VBA's policies and procedures. Thus, the committee concluded that the VA defines quality primarily based on adherence to its policies and procedures. The committee considered metrics of quality or quality domains that would be useful in determining the adequacy of the adjudication process for residuals of TBI. There is no single metric that captures the overall quality or performance of the disability determination process; instead, there are multiple domains that must be considered. These include the burden to the veteran associated with the evaluation, the transparency and credibility of the process, and the reliability and validity of the determinations (see Table 4-1).
TABLE 4-1 Examples of Domains of Quality Related to Disability Determinations After TBI

Burden
General definition: The effort required to complete a task or process.
Description related to TBI examinations: The time, cost, and inconvenience to the veteran associated with completing the disability determination process.

Transparency
General definition: The degree to which rules and process are provided to the public in a comprehensible, accessible, and timely manner.
Description related to TBI examinations: The degree to which the inner workings of the disability determination process are made known to the veteran, including details about the process and progress of the veteran's individual disability determination.

Credibility
General definition: The degree to which the process inspires belief or faith.
Description related to TBI examinations: The degree to which the disability determination process is viewed as trustworthy and appropriate by key stakeholders, including veterans, and thus likely to yield a result that is trusted. High consistency of process can result in greater credibility.
Reliability
General definition: The extent to which an instrument or process yields the same results over multiple trials.
Description related to TBI examinations: The degree to which repeated evaluations of the same service member would result in the same disability determination outcome. A high degree of reliability implies low variability from assessment to assessment. For example, inter-rater reliability measures the consistency of the result when a different rater completes a separate, independent assessment.

Validity
General definition: The extent to which the instrument measures what it was designed to measure.
Description related to TBI examinations: The degree to which the results of the disability determination process accurately reflect the disability resulting from service-connected TBI. There are multiple subtypes of validity, including content, construct, and criterion validity. A high degree of validity implies a lack of systematic bias.

SOURCES: OECD, 2018; Price, 2016.

Ideally, the disability determination process should excel in each of those domains simultaneously. Many of the domains are related to each other. For example, a reliable and transparent process is more likely to be credible. A determination process cannot be valid without also being reliable. A valid disability evaluation process is one that would yield the "right answer," i.e., accurately identify and quantify the service-connected TBI-related disability for each veteran evaluated over a wide range of injury severity, veteran characteristics, and geographic locations. Validity requires reliability, but a highly reliable process does not guarantee validity, i.e., it might consistently yield the same incorrect results.
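As an illustration of how inter-rater reliability might be quantified, the sketch below computes percent agreement and Cohen's kappa for duplicate, independent ratings of the same veterans. The rating values are hypothetical, invented for illustration only; they are not drawn from VBA data.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: sum over categories of the product of each
    # rater's marginal proportion for that category.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical disability ratings (percent) assigned independently by two
# raters to the same ten veterans.
rater_1 = [0, 10, 10, 40, 70, 40, 10, 0, 70, 40]
rater_2 = [0, 10, 40, 40, 70, 40, 10, 10, 70, 40]

agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(f"percent agreement: {agreement:.2f}")                   # 0.80
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")  # 0.73
```

Kappa discounts the agreement two raters would reach by chance alone, so it is a stricter summary of reliability than raw percent agreement.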
The most highly valid processes are generally built on standardized procedures and use personnel with standardized training and qualifications; however, consistency of process is neither necessary nor sufficient to ensure validity (Sajdak et al., 2013; Wilbur et al., 2018). The reliability of the evaluation process is defined by the consistency of outcome, i.e., the disability determinations themselves. Reliability can be measured for the entire evaluation process or separately for each stage or component, including the diagnosis of TBI, the determination of service connection, and the disability rating. Estimating the reliability of the evaluation process might require a subset of veterans to be evaluated more than once, for example, by different practitioners or at different geographic locations. In estimating the reliability of each stage in the evaluation process and the degree to which each stage of the process supports or compromises overall reliability, it might be informative to replicate each step in the evaluation independently and compare results. The concept of validity can be further divided into three subtypes: content validity, construct validity, and criterion validity (Price, 2016). In the context of a TBI disability evaluation, content validity is the degree to which the determination process appears to measure or incorporate all characteristics of the veteran, the injury and its sequelae, or other factors that would reasonably influence the disability arising from TBI. For example, the Disability Benefits Questionnaire (DBQ) for residuals of TBI requires the examiner to assess 10
facets of TBI-attributed cognitive impairment and subjective symptoms and select one answer for each facet that best represents the veteran's functional status. Do those criteria incorporate all characteristics of the veteran's injury-related functional status? A process that fails to assess key neurologic functions, such as memory or motor skills, for instance, would lack content validity. Construct validity is the degree to which the results of the disability determination process are consistent with accepted theoretical constructs regarding TBI, its sequelae, and resulting disability (Price, 2016). For example, a process that yields disparate results for veterans with different sequelae of TBI but a similar overall impact on their lives would lack construct validity, as the construct of disability is closely related to the impact that symptoms and deficits have on the lives of affected service members. Another example of poor construct validity would be a rating process that assumed that all sequelae of TBI were immediately apparent, when current knowledge of TBI indicates that manifestations are often delayed. Finally, criterion validity, a subset of construct validity, is the degree to which the results of the disability determination process match a criterion or "gold standard" that is assumed to define the degree of disability incurred by the veteran (Price, 2016). As there is no clear criterion standard available for disability, the criterion validity of the disability determination process cannot be assessed directly. However, enhanced assessment methods (e.g., evaluations by particularly well trained or experienced evaluators, or the incorporation of additional evaluation modalities such as formal neuropsychiatric testing) might yield disability determinations that could serve as criterion standards for evaluating the usual assessment process. A variety of approaches can be used to ensure quality.
In considering the different approaches, it is useful to separate approaches that focus on consistency in the process of evaluation (e.g., in the qualifications of personnel, standardized training, defined and consistent workflows) from those that focus on the outcome of evaluation. An example of the latter would be an approach based on assessments of the accuracy of disability determination outcomes against a criterion standard, with feedback to evaluating personnel and other stakeholders, with the goal of reducing variability or the frequency of errors. The appeals process, which allows service members to request a re-evaluation of their disability, might be considered a feedback-based system for enhancing quality. In seeking to obtain the same outcome each time a veteran with a stable disability is evaluated, it is important to keep in mind that perfect reliability is an unrealistic goal. Given the nature of the human condition, applicants can vary in their responses to an assessment from day to day or even from minute to minute, depending on internal factors (e.g., having a headache) or external factors (e.g., multiple distractions while the evaluation is being performed). Thus, the results of even the most reliable test or evaluation process can vary in response to internal or external factors affecting the applicant. In addition, the examiners and raters may themselves add variability to the process due to internal or external factors to which they may be subject. To address those limitations, the process should be made as independent of subjective judgment as possible in order to reduce variability and allow for the most reliable outcome possible. Different approaches to completing assessments may be taken to get the same result and achieve reliability.
While requiring all examiners from all specialties to follow the same standardized process for TBI diagnosis (or identifying residuals of TBI) may not be necessary to ensure reliability of the process, there needs to be a sufficient foundation of accurate information to support the accuracy of the assessments and therefore their quality, i.e., the ability to measure all relevant symptoms and deficits related to TBI. Similarly, it is important that the examiner follow the most recent evidence-based assessment procedures and be aware of areas that are
most likely to be challenging. Nonetheless, if the outcomes of assessments are consistent, regardless of the standardization (or not) of the process, then reliability has been achieved. Still, as previously mentioned, a reliable process is not necessarily a valid one. Even if each clinician making a TBI diagnosis, each examiner completing a DBQ, and each rater making a disability determination follows the same processes in their respective spheres with reliable and reproducible results each time, that does not guarantee that the results of those determinations represent the "truth" and are thus valid. It is important to acknowledge the overarching challenge that there is presently no clear consensus as to what accuracy means with respect to the diagnosis of TBI, the set of all possible sequelae, or the best way to quantify the effect of each on disability. Further work is needed to develop and improve criterion standards that can be used to evaluate the validity of each step in the TBI disability evaluation process. Careful consideration should be given to the methods used to evaluate the processes of diagnosis and disability assessment with regard to content, construct, and criterion validity. These processes include the diagnosis of TBI, the determination of service connection, the detection and characterization of sequelae of the TBI (e.g., as documented in the DBQ), and the assessment and quantification of the resulting disability by raters. The goal of the overall process is to yield an evaluation that accurately captures the effects of service-related TBI on disability in veterans.

APPROACHES FOR ENSURING QUALITY

Broadly speaking, systems have two areas in which quality can be measured: the quality of the systems' processes and the quality of their outcomes.
Process quality includes the domains of burden, transparency, and credibility, whereas outcome quality includes reliability and validity in its various forms. The variables or metrics that are used to measure quality in each of those domains are called indicators or quality indicators. The indicators of process quality and the indicators of outcome quality are distinct from each other (Mant, 2001). Process quality includes how efficiently the system functions and how well it obeys its own rules. For instance, in the veterans' benefit determination system, process quality can be measured by ease of access, timeliness of examinations, qualifications of reviewers, and timeliness of appeals. Outcome quality, on the other hand, includes both reliability (e.g., how frequently different examiners reach the same conclusions) and validity (i.e., how accurately the system arrives at the correct answer). A system can be timely and easily accessed, can adhere to its own rules, and can have few detected errors, yet consistently fail to produce the right outcome; process quality does not guarantee outcome quality. To ensure and maintain high outcome quality, systems need to measure the quality of outcomes, incorporate feedback, correct themselves, and measure the outcomes after such a correction.

Approaches Based on Consistency of Process: Process Quality

There are two steps in the rating of service-connected TBI disability, and each should be explicitly structured and continually reassessed to assure quality. The first step is the examination by a clinician whose results populate the DBQ, and the second is the rating process that produces the final disability rating from the DBQ and other supporting information. Each of the steps should have distinct process quality indicators. For the examination stage, process indicators could include the qualification of examiners, the ease and timeliness of access, the
completeness of the DBQ (e.g., no missing data), the timeliness of DBQ filing, and the transparency and credibility of this step in the evaluation. For the rating process, indicators could include time to initial disability determination, the accuracy of the rating as determined by higher-level review, the credibility and transparency of the rating system, and the timeliness of the appeals process. Ideally, process quality indicators should represent characteristics inherently valuable to veterans and to the process owner (the Department of Veterans Affairs). In contrast with determinations of outcome quality, which require judgments against external or internal assessment standards and focus on deviations from those results, the use of process indicators is relatively straightforward. VBA already measures a number of process quality indicators, as detailed in Chapter 3 and summarized in Table 4-2.

TABLE 4-2 Examples of VA Quality Indicators and Measurements

Adherence to VA claim rating policies (described by the VA as "accuracy")
VA quality measurement: The STAR review compares the documentation and outcomes of randomly selected completed claims to a checklist to determine the "accuracy" of the claims, meaning their adherence to the VA policies (M-21, Chapter 3). Data are produced on a monthly basis by each regional office and published on a public-facing website. Claims-based and issue-level accuracy is reported for 3-month and 12-month periods. The claims-based accuracy rate is determined by dividing the total number of error-free cases by the total number of cases reviewed. Issue-level accuracy is a measure of individual medical issues contained within a compensation claim.
Domain(s) addressed: Credibility, via consistency of process; reliability

Consistency in rater decision making
VA quality measurement: Questionnaires are administered to all raters 3–24 times per year (GAO, 2014). The questionnaires include a brief scenario on a specific medical condition for which raters must answer several multiple-choice questions. These tools are not validated.
Domain(s) addressed: Credibility, via consistency of process

Consistency in qualifications for examiners
VA quality measurement: Percentage of TBI diagnoses made by clinicians with the appropriate specialty and proper certification (VA OIG, 2018).
Domain(s) addressed: Credibility, via consistency of process

Access to VA facilities and information about disability benefits
VA quality measurement: Proportion of veterans who submit a disability claim who are seen, measured from administrative databases. The question of how difficult it is for veterans to arrange an appointment might require additional investigation, for instance, from client satisfaction surveys.
Domain(s) addressed: Burden, transparency

Timeliness
VA quality measurement: Measures of timeliness between any points in the process of disability determination, from initial filing to initial disability examination, from initial disability examination to initial disability determination, and, if appealed, from initial appeal filing to final determination (GAO, 2002, 2018).
Domain(s) addressed: Burden
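The claims-based and issue-level accuracy calculations described in the table reduce to simple proportions. The sketch below illustrates both with invented review results; the claim and issue counts are hypothetical, not actual STAR data.

```python
# Hypothetical STAR-style review results: one flag per reviewed claim,
# True if the review found the claim error free.
reviewed_claims = [True, True, False, True, True,
                   True, False, True, True, True]

# Claims-based accuracy rate: error-free cases divided by cases reviewed.
claims_based_accuracy = sum(reviewed_claims) / len(reviewed_claims)
print(f"claims-based accuracy: {claims_based_accuracy:.0%}")  # 80%

# Issue-level accuracy looks inside each claim: one flag per medical issue
# contained in a compensation claim (True = issue handled correctly).
issues_by_claim = {
    "claim-1": [True, True],
    "claim-2": [True, False, True],
    "claim-3": [True],
}
all_issues = [ok for issues in issues_by_claim.values() for ok in issues]
issue_level_accuracy = sum(all_issues) / len(all_issues)
print(f"issue-level accuracy: {issue_level_accuracy:.0%}")  # 83%
```

Because a single claim can contain several issues, the two rates can diverge: one mishandled issue makes the whole claim an error at the claim level but counts as only one of several issues at the issue level.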
The first of these is the qualifications of the examiners. Under the M21-1 Adjudication Procedures Manual (see Appendix I), if the original diagnosis of TBI is made within the Veterans Health Administration (VHA) or by a VBA contractor, it needs to be made by a physician who is board certified in one of four specialties: neurology, neurosurgery, physical medicine and rehabilitation, or psychiatry. One measure of quality is the percentage of TBI diagnoses that have been made by physicians with those qualifications. Another measure of process quality is the percentage of examiners completing the DBQ who have the specialized training required for this role. It is important to note that there is often little or no evidence linking common process indicators (e.g., qualifications of personnel) to outcome quality (e.g., accuracy of disability determination against an accepted criterion standard). A second process quality indicator is access to VA facilities and information about disability benefits. The proportion of veterans who submit a disability claim and who are then seen at a VA facility can be measured from administrative databases. The question of how difficult it is for them to arrange an appointment might require additional investigation, for instance, from client satisfaction surveys. Furthermore, there may be injured veterans who fail to submit a disability claim due to the perceived difficulty of the process or to a belief that a favorable disability assessment is unlikely. The improvement goal here should be to remove barriers to access, whether real or perceived. A third process indicator is timeliness. Timeliness can be measured between any points in the process of disability determination, from initial filing to initial disability examination, from initial disability examination to initial disability determination, or, if the rating is appealed, from initial appeal filing to final determination.
Measures of timeliness should be possible with data from administrative databases, but they could also be assessed from client satisfaction surveys and standard patient encounters. The transparency of the adjudication process is a key quality characteristic that does not appear to be explicitly addressed by existing VA quality indicators. Transparency is often a requirement for credibility and should be considered from the points of view of both the individual veteran and the system. Transparency from the point of view of the individual veteran would include, for example, access to the details of his or her individual application (e.g., results of the veteranâs examination as documented on the DBQ, details regarding additional materials that have been requested by VBA). Transparency from a system-wide point of view would include easy access to and widespread distribution of data on the system performance, including performance with respect to both process quality measures (e.g., timeliness of and access to VHA examinations, percent of examinations conducted by contracted examiners) and outcome quality measures (e.g., the consistency of outcomes across geographic regions, the accuracy of disability determinations evaluated using standardized patients, the inter-rater reliability of determinations as assessed through independent examinations and ratings of random cases). The committee found that transparency was inadequately appreciated as a goal by both VHA and VBA personnel. 
The committee recommends that the Veterans Health Administration (VHA) and Veterans Benefits Administration take specific actions to increase transparency at both individual and system-wide levels, including but not limited to providing veterans with full access to the details of their examinations and ratings and providing public access to detailed system-wide data, with separation by geographic location and examination type (e.g., VHA versus contracted examiner), on the outcomes of evaluations and outcome quality.
Concerning who can make the diagnosis of TBI, it might seem reassuring to restrict this role to physicians who are board certified in neurology, neurosurgery, physical medicine and rehabilitation, or psychiatry, as their training in brain pathophysiology and clinical care implies that they should have a good understanding of TBI. However, having had the basic training in the recent or remote past does not necessarily mean that those clinicians are currently capable of making the TBI determination accurately or that they have the time and motivation required to accurately make or rule out the diagnosis. Those clinicians might have chosen to focus on another aspect of their vast fields and subspecialized in an area of greater interest to them, whereas practitioners who are board certified in another specialty may have a keen interest in TBI and thus be more likely to be abreast of the current controversies and the latest evidence-based diagnostic and treatment practices and be in a better position to make such a determination. Additionally, the requirement for board certification increases the burden on veterans by limiting the supply of individuals qualified to perform their assessments. The specialty of a practitioner does not necessarily ensure the accuracy of the assessment. Not just specialization but also knowledge, training, experience, and interest should be taken into account. A well-outlined and detailed process in the hands of an inexperienced examiner might result in less reliable determinations than a less detailed process executed by an experienced physician who has diagnosed and treated a large number of TBI cases in his or her professional career. On the other hand, an inexperienced examiner with an intense interest in the topic may provide more reliable determinations than an experienced examiner with less interest and less time in which to complete the evaluation.
A basic understanding of the pathophysiology of TBI and of the proximal and distal signs and symptoms associated with this diagnosis is necessary for an accurate diagnosis. However, the committee is unaware of any data supporting the current emphasis placed on the specialty of the examiner or on using the same consistent process among all examiners within the same discipline or among disciplines. Thus, the committee recommended in Chapter 2 that the VA reconsider its decision and allow any clinician with specialized training in TBI to make the diagnosis.

Approaches Based on Repetition or Comparison to Other Standards: Outcome Quality

Validity and reliability must be defined in terms of the outcome of the assessment rather than just by the consistency of the process; consistency of process is neither necessary nor sufficient to ensure reliability or validity. In other words, the key question is whether the approaches, whether all the same or different, lead to an assessment that is repeatable and that accurately reflects the disability associated with TBI. Emphasizing consistency of process and qualifications of practitioners does not ensure the reliability or the validity of the assessments, and, just as important, a lack of consistency of process and qualifications of practitioners does not mean there is a lack of validity. It is likely that those factors are indeed related to assessment performance, but that is not guaranteed to be true. If different assessment paths (providers, tools, locations) all lead to the same final disability assessments, then the fact that the processes are different should be of little concern. Admittedly, variability in assessment processes may negatively affect the credibility of the process; however, if the validity of the outcome is consistently high, then the process is of lesser importance.
The biggest challenge is determining what "accuracy" means within this context and providing a practical and widely accepted criterion standard assessment against which the disability rating system can be judged.
Conceptually, measurement or assessment error can be separated into two types: random variability and systematic error or bias (Bhattacherjee, 2012). Random variability that leads to a reduction in measurement reliability is generally addressed through ensuring that the process is consistent, assessing the sources of random variability, and emphasizing process modification and improvement activities. Systematic error or bias refers to consistent differences between individual disability determinations and the corresponding criterion standard for each determination, i.e., a general under- or over-estimate of the TBI-associated disability. Bias can be quantified in terms of an average or a median difference from the criterion standard assessment. The presence of consistent bias (e.g., substantively lower disability scores than the national median from one examiner or in one center) suggests a target for quality improvement; this presumes, however, that the national median is consistent between locations. Finally, the committee notes that the VA's approach is designed to favor veterans if "a reasonable doubt arises regarding service origin, the degree of disability, or any other point" (VA, 2001).1 Random assessment error will always exist to some extent in the disability determination system. Sources of this variability can include examiners, instruments, record availability, raters, and veterans' understanding of what is being asked of them and why. One way to think of random assessment error is to consider the hypothetical distribution of disability scores obtained if the same veteran presented 100 times with the same underlying disability, undergoing evaluation by 100 different unbiased examiners with evaluations that were then rated by 100 different raters. That process would yield 100 completely independent evaluations of the same underlying disability.
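The hypothetical 100-evaluation experiment can be sketched as a small simulation. The true rating, noise levels, and bias magnitude below are arbitrary illustrative assumptions, not empirical estimates of actual examiner or rater behavior.

```python
import random
import statistics

random.seed(7)  # fixed seed so the illustration is repeatable

TRUE_DISABILITY = 40.0  # assumed criterion-standard rating for one veteran

def one_evaluation(examiner_bias=0.0):
    """One independent examiner-plus-rater pass over the same disability.

    Gaussian noise models random day-to-day variability; a nonzero
    examiner_bias models systematic error (consistent under- or over-rating).
    """
    examiner_noise = random.gauss(0, 5)  # assumed examiner-to-examiner spread
    rater_noise = random.gauss(0, 3)     # assumed rater-to-rater spread
    return TRUE_DISABILITY + examiner_bias + examiner_noise + rater_noise

# 100 completely independent evaluations of the same underlying disability.
unbiased = [one_evaluation() for _ in range(100)]
biased = [one_evaluation(examiner_bias=-10.0) for _ in range(100)]

# Random variability appears as spread (standard deviation); systematic
# error appears as a median shifted away from the criterion standard.
print(f"unbiased: median={statistics.median(unbiased):.1f}, "
      f"sd={statistics.stdev(unbiased):.1f}")
print(f"biased:   median={statistics.median(biased):.1f}, "
      f"sd={statistics.stdev(biased):.1f}")
```

In this framing, quality improvement that narrows the distribution targets random variability, while a median shifted away from the criterion standard (roughly 10 points for the biased examiners here) points to a correctable systematic error.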
Different examiners are likely to have slightly different findings, and certain raters might rate the disability below the median and others above the median. Reducing this random variability, that is, narrowing the distribution, is one goal of quality improvement. Practical assessments of random measurement error can be made with many fewer assessments (e.g., duplicate evaluations).

1 For that reason, the committee did not address the issue of malingering or falsely or grossly exaggerated patient reporting of symptoms, which could affect the validity of the assessment.

The committee noted that existing quality indicators and processes for ensuring quality within both VHA and VBA do not address the quality of outcomes as defined by either the reliability of the outcomes, as assessed by independent evaluations, or the validity of outcomes. The committee recommends that the Department of Veterans Affairs institute processes and programs to measure the reliability and validity of the adjudication process, identify opportunities for improvement in the quality of outcomes, and implement modifications of the adjudication process as needed to optimize the quality of both the adjudication process and the reliability and validity of the outcomes. The committee further recommends that the VA take the following initial, specific actions to evaluate the reliability and validity of disability determinations:

1. The VA should implement a program using "standard patients" to evaluate the existing examination system and the completion of the DBQ. Specifically, standard patients would be professional actors or people portraying veterans with disability claims who have a history of injury and subsequent disability and who are coached to give standard answers and to present with a specific history and physical findings. The purpose of using standard patients is to determine how much variability there is
92 TRAUMATIC BRAIN INJURY IN VETERANS

between the correct, criterion standard outcome that has been determined a priori and what an examiner records on the DBQ. That could be accomplished with a taped interview and examination, which could then be viewed and rated by physicians who perform disability exams. Standard patient examinations may be used to identify random variability or systematic errors associated with individual examiners or offices, to measure the overall quality of the system, and to determine the settings in which the rating system is most likely to yield invalid disability ratings (Beullens et al., 1997). The committee most strongly endorses the use of this method.

2. The committee believes the VA should have experienced second-level reviewers independently repeat a random sample of disability determinations and provide disability determinations to be used as criterion standards. The repeat evaluations should include both the disability examination and the rating step. The differences between what an individual examiner and rater, or a group of examiners and raters, determined and the criterion standard then represent variation from the assumed accurate answer. This approach is fundamentally different from, and extends, existing VBA programs, both at the regional office level through the quality review teams and at the national level through the STAR system (VA, 2018). In both of those review systems the evaluation is an audit rather than an independent repetition of the entire process; even for the rating step, the second rater knows what the first rater found and that rater's reasons for assigning the rating. Because the second-level reviewer is aware of the initial reviewer's findings, this has the potential to introduce confirmation bias into the second determination (Karanicolas et al., 2010).
To provide the least biased estimates of outcome quality, the second examiner and rater providing the criterion standard should replicate the entire process independently and be unaware of the first determination and its reasoning. In short, there is a need for blind examinations and ratings that can be compared with the initial results, rather than having the second examinations and ratings be biased by those results.

3. The committee believes the VA should institute a system through which veterans themselves rate the quality of the outcome. Are they satisfied or dissatisfied, and, if dissatisfied, how would they suggest the system be improved? This method is used extensively in a variety of customer-service industries, from medicine to travel, but it requires high rates of reporting to ensure that system problems are not overestimated because of dissatisfied clients with bad experiences or unrealistic expectations (Crow et al., 2002). A variant on this approach would be to examine the rate of applications for appeals, an administrative mechanism that represents extreme dissatisfaction. Examining appeals over time (to note improvements) and by geographic area (to identify clusters of dissatisfaction with the outcome) might provide some insight into the quality of the disability determinations, that is, the quality of the system.

4. Finally, the VA should collect data on and examine the consistency of outcome determinations across the population of veterans filing claims in a given year. Because service members are drawn from the nation as a whole, one might assume that those who have service-related disabilities would be evenly distributed throughout the population. Alternatively, however, more severely disabled veterans may be geographically clustered (e.g., near military bases), and variation by geographic
location should be carefully considered. The proportion of claims receiving each percentage disability rating can be examined as a whole to understand the variability in the process and then examined for subpopulations of examining centers to see where disability ratings may be less likely to be granted or may be systematically given lower scores.

SUMMARY AND RECOMMENDATIONS

The committee was tasked to evaluate the "adequacy" or quality of the adjudication process for impairments resulting from TBI. Building on the descriptions in Chapter 3 of the VA's adjudication process for the residuals of TBI and its quality assurance measures, this chapter described desirable characteristics of quality indicators that would be beneficial for the VA to monitor and to use to drive improvements in the adjudication process. In Chapter 3 the committee examined the structures that the VA has in place for assuring the quality of its adjudication process and found that although VBA has systems in place to review the consistency of the process, the VA does not measure reliability or validity. The committee noted that in 2007 the Institute of Medicine recommended that VHA establish a recurring assessment of the substantive quality and consistency, or inter-rater reliability, of examinations performed with the templates and, if the assessment finds problems, take steps to improve quality and consistency, for example, by revising the templates, changing the training, or adjusting the performance standards for examiners; it also recommended that VBA establish "built-in checks or periodic evaluations to ensure inter-rater reliability as well as the accuracy and validity of rating across impairment categories, ratings, and regions" (IOM, 2007). The committee supports those recommendations and believes that they have not been adequately addressed.
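As one concrete form of the cross-population comparison described in item 4 above, assigned ratings could be tabulated per examining center and screened against the national median. The centers, counts, and flagging rule below are invented for illustration; they are not VA data or an official method.

```python
import statistics

# Invented counts of assigned ratings (percent) at three hypothetical centers.
ratings_by_center = {
    "Center A": [0] * 10 + [10] * 20 + [40] * 40 + [70] * 25 + [100] * 5,
    "Center B": [0] * 12 + [10] * 22 + [40] * 38 + [70] * 23 + [100] * 5,
    "Center C": [0] * 30 + [10] * 35 + [40] * 25 + [70] * 9 + [100] * 1,  # shifted low
}

national = [r for rs in ratings_by_center.values() for r in rs]
national_median = statistics.median(national)

for center, rs in sorted(ratings_by_center.items()):
    med = statistics.median(rs)
    share_low = sum(r <= 10 for r in rs) / len(rs)
    flag = "review" if med < national_median else "ok"
    print(f"{center}: median={med}, rated <=10%: {share_low:.0%} [{flag}]")
```

A center flagged this way is not necessarily in error; as the text notes, geographic clustering of more severely disabled veterans would have to be ruled out before concluding that a center systematically under-rates disability.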
The committee discussed several major domains of quality and how they relate to the adjudication process for veteran disability claims, including reliability and validity. A process with high reliability is one in which repeated evaluations of the same service member would result in the same disability rating. An adjudication process with high validity is one in which the disability rating reflects the true degree of service-connected disability. Ideally, a high-quality adjudication process would excel in both of these quality domains while also being transparent, timely, and credible and minimizing the burden to the veteran. To ensure and maintain high quality, systems need to measure both process and outcome quality, incorporate feedback, correct themselves, and measure outcomes after such a correction.

The committee's review of the VA's quality assurance measures found that they focus on consistency in the disability rating step of the process. As described in Chapter 3, VBA has implemented measures to ensure the consistency of the rating process. One example of a VA quality measure that focuses on consistency of process, but with unclear effect on reliability or validity, is the measurement of the fraction of diagnoses of TBI that are made by a physician who is board certified in one of four specialties: neurology, neurosurgery, physical medicine and rehabilitation, or psychiatry. As noted in Chapter 2, while the committee appreciates that an understanding of the pathophysiology of TBI and of the proximal and distal signs and symptoms associated with this diagnosis is necessary for an accurate diagnosis, there need not be an inordinate amount of emphasis placed on the specialty of
the examiner or on adherence to this policy if there is no evidence that doing so will lead to more accurate evaluations of disability.

The transparency of the adjudication process is another key quality characteristic. Transparency should be considered from the points of view of both the individual veteran and the system. Transparency from the point of view of the individual veteran would include, for example, access to the details of his or her individual application (e.g., the results of the examination as documented on the DBQ and details regarding additional materials that have been requested by VBA). Transparency from a system-wide point of view would include easy access to and widespread distribution of data on system performance, including both process quality measures (e.g., the timeliness of and access to VHA examinations and the percentage of examinations conducted by contracted examiners) and outcome quality measures (e.g., the consistency of outcomes across geographic regions, the accuracy of disability determinations evaluated using standardized patients, and the inter-rater reliability of determinations as assessed through independent examinations and ratings of random cases). The committee found that transparency was inadequately appreciated as a goal by both VHA and VBA personnel. The committee recommends that the Veterans Health Administration (VHA) and the Veterans Benefits Administration take specific actions to increase transparency at both the individual and system-wide levels, including but not limited to providing veterans with full access to the details of their examinations and ratings and providing public access to detailed system-wide data, separated by geographic location and examination type (e.g., VHA versus contracted examiner), on the outcomes of evaluations and on outcome quality.
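The inter-rater reliability measure mentioned above, independent duplicate examinations and ratings of randomly sampled cases, could be summarized with simple agreement statistics. The paired ratings below are invented for illustration, and Cohen's kappa is offered as one standard chance-corrected agreement index, not as a method the VA currently uses.

```python
from collections import Counter

# Hypothetical pairs: (first rating, blind second rating), percent scale in 10-point steps.
pairs = [(40, 40), (70, 70), (10, 20), (40, 40), (0, 0),
         (70, 50), (40, 30), (100, 100), (20, 20), (40, 40)]

first = [a for a, _ in pairs]
second = [b for _, b in pairs]
n = len(pairs)

# Observed exact agreement between the two independent determinations.
p_obs = sum(a == b for a, b in pairs) / n

# Agreement expected by chance from each rater's marginal rating distribution.
c1, c2 = Counter(first), Counter(second)
p_exp = sum(c1[k] * c2[k] for k in c1) / (n * n)

# Cohen's kappa: chance-corrected exact agreement (1.0 = perfect, 0.0 = chance level).
kappa = (p_obs - p_exp) / (1 - p_exp)

# Mean absolute disagreement, in rating points, as a complementary measure.
mean_abs_diff = sum(abs(a - b) for a, b in pairs) / n

print(f"exact agreement: {p_obs:.0%}")
print(f"Cohen's kappa:   {kappa:.2f}")
print(f"mean |diff|:     {mean_abs_diff:.0f} rating points")
```

Because ratings are ordered, a weighted variant of kappa that penalizes large disagreements more heavily would arguably fit better; the unweighted form is shown only because it is the simplest to state.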
Careful consideration should be given to the methods that the VA uses to evaluate the processes of diagnosis and disability assessment, including not only the disability rating step but also the diagnosis of TBI, the determination of service connection, and the detection and characterization of sequelae of the TBI, e.g., as documented in the DBQ. The overall goal of the evaluation is to ensure that the approaches taken by the examiner result in an evaluation that accurately captures the effects of TBI on disability in veterans. The committee recommends that the Department of Veterans Affairs institute processes and programs to measure the reliability and validity of the adjudication process, identify opportunities for improvement in the quality of outcomes, and implement modifications of the adjudication process as needed to optimize the quality of both the adjudication process and the reliability and validity of the outcomes. Four specific recommendations for initial steps are (1) instituting a program of standard patients to directly measure the reliability and validity of the examination and rating processes for such patients; (2) using experienced second-level reviewers to conduct fully independent evaluations in order to evaluate the criterion validity of actual veterans' evaluations; (3) creating a system by which veterans may rate the quality of their own evaluations; and (4) instituting the systematic and transparent collection and comparison of disability outcome data across geographic regions. Implementing the recommendations contained within this chapter will produce a fundamental enhancement in the methods used by the VA to ensure the quality of disability
evaluations for TBI. This shift, from a focus on the consistency of the process (e.g., for the rating step in disability determination) and on practitioner qualifications to a focus on the accuracy of the outcome of the evaluation, is intended and expected to identify steps or components in the disability evaluation process that warrant improvement. Indeed, the identification of such opportunities for improvement should be read as a key indicator of the success and positive impact of these recommendations in improving the system rather than as a criticism of the current system or the personnel who work within it. Furthermore, by adopting an explicit learning structure in which the reliability and validity of disability determinations are directly assessed, the VA will be able to devote its resources to the modifications and enhancements of the disability evaluation system that will have the greatest impact in improving the service provided to injured veterans.

REFERENCES

Beullens, J., J. J. Rethans, J. Goedhuys, and F. Buntinx. 1997. The use of standardized patients in research in general practice. Family Practice 14(1):58–62.

Bhattacherjee, A. 2012. Social science research: Principles, methods, and practices. Tampa, FL: University of South Florida.

Crow, R., H. Gage, S. Hampson, J. Hart, A. Kimber, L. Storey, and H. Thomas. 2002. The measurement of satisfaction with healthcare: Implications for practice from a systematic review of the literature. Health Technology Assessment 6(32):1–244.

GAO (Government Accountability Office). 2002. Claims processing timeliness performance measures could be improved. Washington, DC. https://www.gao.gov/assets/240/236589.pdf (accessed December 7, 2018).

GAO. 2014. Improvements could further enhance quality assurance efforts. Washington, DC. https://www.gao.gov/assets/670/667027.pdf (accessed May 18, 2018).

GAO. 2018.
Improved performance analysis and training oversight needed for contracted exams. Washington, DC. https://www.gao.gov/assets/700/694986.pdf (accessed December 7, 2018).

Karanicolas, P. J., F. Farrokhyar, and M. Bhandari. 2010. Blinding: Who, what, when, why, how? Canadian Journal of Surgery 53(5):345–348.

Mant, J. 2001. Process versus outcome indicators in the assessment of quality of health care. International Journal for Quality in Health Care 13(6):475–480.

OECD (Organisation for Economic Co-operation and Development) Statistics Directorate. 2018. OECD glossary of statistical terms: Transparency definition. https://stats.oecd.org/glossary/detail.asp?ID=4474 (accessed November 2, 2018).

Price, L. R. 2016. Psychometric methods: Theory into practice. New York: Guilford Publications.

Sajdak, R., L. A. Trembath, and K. S. Thomas. 2013. The importance of standard operating procedures in clinical trials. Journal of Nuclear Medicine Technology 41(3):231–233.

VA (Department of Veterans Affairs). 2001. 38 CFR 3.102: Reasonable doubt. https://www.gpo.gov/fdsys/granule/CFR-2009-title38-vol1/CFR-2009-title38-vol1-sec3-102

VA. 2018. M21-1 adjudication procedures manual, Chapter 6: Quality review team. https://www.knowva.ebenefits.va.gov/system/templates/selfservice/va_ssnew/help/customer/locale/en-US/portal/554400000001018/topic/554400000004049/M21-1-Adjudication-Procedures-Manual (accessed May 11, 2018).

VA OIG (Department of Veterans Affairs Office of Inspector General). 2018. Review of Montana Board of Psychologists complaint and assessment of VA protocols for traumatic brain injury compensation and pension examinations. Washington, DC. https://www.va.gov/oig/pubs/VAOIG-15-01580-108.pdf (accessed December 7, 2018).
Wilbur, K., A. Elmubark, and S. Shabana. 2018. Systematic review of standardized patient use in continuing medical education. Journal of Continuing Education in the Health Professions 38(1):3–10.