assumptions about the cost of treating disease—that lead to conflicting findings. Those differences confuse decision makers, who must grapple with the underlying question of “which is the best strategy?”
Models are like maps. Maps are useful when they serve as guides to underlying territories. A map that is too vague is useless; one that is completely accurate merges with the territory itself and is also useless as a guide. The participants spent considerable time discussing the optimal balance for models along the continuum from rough guide to complete accuracy. They struggled with questions of how detailed CRC models should be if they are to be useful to decision makers and how detailed they can be, given the available information.
Richard Lilford provided valuable perspective with his observation that “modeling is a way of having a conversation.” That is precisely what occurred during the day and a half when modeling teams and experts came together to compare assumptions, results, and the underlying evidence base for modeling. Many participants commented on the value of the conversation for further refinement of their models (in the case of the research teams) and for research ideas (in the case of clinical and epidemiological researchers). The pre-workshop modeling collaboration demonstrated that too many lives and dollars are at stake not to continue working to understand and communicate both the strengths and the weaknesses of cost-effectiveness models.