Elevated troponin levels indicate that a person is having a heart attack (Thygesen et al., 2010), but giving troponin to people does not cause a heart attack, according to Quinn. Diagnostic tests, for their part, are useful because they indicate a reliable correlation with a clinical condition, not because they play a causal role.

Analytic validity, clinical validity, and clinical utility have limited usefulness, Quinn said. He used a book as an analogy. At one level, a book consists of ink, paper, and glue. At the next level, it consists of words, grammar, and a language. At the next level, it has content, meaning, and some measure of usefulness. But the usefulness of a book cannot be determined by studying its ink, paper, and glue. In the same way, a gene test with lower analytic validity may correlate better with a clinical outcome than another test for the same gene with higher analytic validity. The same is true for clinical validity, said Quinn, citing the difference in usefulness between hypothetically similar results from PSA testing and from the Oncotype DX assay in predicting cancer recurrence. There is only a distant relationship among analytic validity, clinical validity, and clinical utility.

Tests often transform a question that cannot be answered into one that can be answered, said Quinn. For example, the question “Do we need to switch your HIV drug?” is transformed into “Is your HIV RNA count rising?” The key is the correlation between the answer the test provides and the question that needs to be answered.

There are two kinds of true statements, Quinn observed. The first are statements about things, such as “this is a rock” or “you have leukemia.” The second are statements about relationships, such as “there are 10 dimes in a dollar” or “high troponin levels are associated with heart attacks.” Clinical decision making deals with both kinds of statements. There are general medical rules consisting of principles, facts, and conclusions drawn from evidence, and there are specific statements about an individual patient. Evidence-based medicine provides the backing for certain conclusions. The problem, said Quinn, is that medical science is very hard and requires considerable thought and expertise. Some evidence-based medicine may not add value when it is applied in an unthinking or brute-force way.

An alternative model that Quinn mentioned is critical reasoning medicine, which combines the idea “we can believe this” with the idea “we should do this.” Specific facts about a patient are combined with clinical rules and knowledge. In turn, this reasoning can be used to support coverage decisions, which separately take into account funds, priorities, and available alternatives, even though complete data are never available when a coverage decision must be made.


