well. If the apprehensions data from DHS had also been available, the panel might have been able, at least to some extent, to validate (or fail to validate) EMIF-N. The panel would also have been able to evaluate the impact of violating standard model assumptions (e.g., the assumption of a constant population size) on the performance of capture-recapture approaches. More generally, using administrative data collected over several time periods, the panel might have been able to fit models using earlier information and evaluate them by comparing their predictions to observed data from later periods. This out-of-sample validation approach would have allowed the panel to compare the predictive ability of different models and explore the importance of the various assumptions underpinning those models.
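To make the capture-recapture idea concrete, the sketch below uses the classic Lincoln-Petersen estimator with invented apprehension counts (the panel did not have access to real DHS counts, and none are reproduced here). The estimator's closed-population and equal-capture-probability requirements are exactly the kinds of assumptions the text notes are often violated in practice.

```python
def lincoln_petersen(n1, n2, m):
    """Estimate total population size N from two capture occasions.

    n1 -- individuals observed in period 1 (e.g., first apprehensions)
    n2 -- individuals observed in period 2
    m  -- individuals observed in both periods (re-apprehensions)

    Assumes a closed population (no entry/exit between periods) and
    equal capture probability across individuals.
    """
    if m == 0:
        raise ValueError("no recaptures; estimator is undefined")
    return n1 * n2 / m


# Illustrative, entirely invented counts:
estimate = lincoln_petersen(n1=1000, n2=800, m=200)
print(estimate)  # 4000.0
```

An out-of-sample validation of the kind described above would fit such a model on earlier periods and compare its predictions against held-out later-period administrative data, rather than judging the model only on in-sample fit.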
Although the panel was aware that DHS has been considering specific modeling approaches (e.g., capture-recapture methods using apprehensions data), it could not get access to the relevant technical reports commissioned by DHS. Because the broader scientific community has not hitherto been engaged with DHS in developing, applying, and continually refining specific modeling approaches, the evidentiary base to which the panel could refer was also limited. For all of these reasons, much of the discussion in this chapter was general in nature. As was discussed in Chapter 5, DHS would benefit from making the administrative data in its enforcement databases publicly available to the research community, even if it were necessary to protect potentially sensitive information through data masking, aggregation, and other such procedures.
• Conclusion 6.1: Modeling approaches, and the assumptions underlying them, must keep pace with changing mechanisms of migration and be continually validated against historical trends and data. Since all modeling approaches have their limitations, there is also much that could be learned by comparing estimates from multiple methods.