outlets send their monthly reports to sponsors (or directly to state agencies in the case of self-sponsored centers), after sponsors send their reports to state agencies, and then again after state agencies send their reports to USDA, making it difficult to analyze anything but state-level and national trends. Glantz opined that it would be tremendously helpful if a nationally representative study of CACFP could access some of those raw child-level, outlet-level, and sponsor-level administrative data.
Drawing on lessons learned from a series of studies on the Child Care and Development Fund (CCDF) voucher program, Gina Adams and Monica Rohacek discussed key factors likely to shape provider participation (e.g., individual provider characteristics and CACFP policies and implementation practices) and ways to measure those factors. Past research by the Urban Institute on the child care voucher system has shown that a similar set of factors affects both participation (“Are you in?”) and the quality of participation (“If you are in, can you do what you are supposed to be doing?”). As many speakers did throughout the day, Rohacek emphasized the importance of keeping the end in mind, that is, knowing the outcome(s) of interest. For example, is the goal simply to measure participation rates, or also the quality of participation? Other considerations include the value of both quantitative and qualitative methods (each serves an important role), the importance of knowing whom to survey (i.e., the respondent population), and the reality of heterogeneity (i.e., there is no single child care system but rather a range of diverse systems).
Arguably one of the most important factors to consider when designing a national study of CACFP is the comparison group, that is, the group of eligible but nonparticipating providers (or participants) to whom the CACFP representative sample of providers (or participants) will be compared. Rupa Datta explained the important role that comparison group data serve in two key quantitative measures of program access and participation: saturation and participation rates. Based on work she has done with the National Survey of Early Care and Education (NSECE), she discussed the anticipated challenge of collecting data not just for the comparison group, but also for CACFP providers. Because of the variable nature of child care providers (centers, licensed homes, unlicensed homes, etc.) and state variability in licensing regulations, the greatest challenge for NSECE has been building a database of providers.
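Under one common set of definitions, which are assumed here for illustration rather than taken from the discussion, the two measures could be computed from a provider list roughly as follows; the record fields and the capacity-weighted definition of saturation are hypothetical:

```python
# Hypothetical sketch of the two quantitative measures mentioned above.
# Assumptions: a "participation rate" is the share of eligible providers
# that participate, and a "saturation rate" is the share of eligible
# capacity (child slots) served by participating providers. Field names
# are invented for this example.

def participation_rate(providers):
    """Share of eligible providers that participate in the program."""
    eligible = [p for p in providers if p["eligible"]]
    if not eligible:
        return 0.0
    return sum(p["participates"] for p in eligible) / len(eligible)

def saturation_rate(providers):
    """Share of eligible capacity served by participating providers."""
    eligible_slots = sum(p["capacity"] for p in providers if p["eligible"])
    if not eligible_slots:
        return 0.0
    served = sum(
        p["capacity"]
        for p in providers
        if p["eligible"] and p["participates"]
    )
    return served / eligible_slots

# Toy data: two eligible providers (one participating) and one ineligible.
providers = [
    {"eligible": True, "participates": True, "capacity": 40},
    {"eligible": True, "participates": False, "capacity": 60},
    {"eligible": False, "participates": False, "capacity": 25},
]
print(participation_rate(providers))  # 0.5
print(saturation_rate(providers))     # 0.4
```

Both denominators depend on identifying the eligible-but-nonparticipating comparison group, which is why building a complete provider database is described above as the greatest challenge.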
Again, a major theme of not just this session but the workshop at large was the potential relevance of existing data. Susan Jekielek discussed the usefulness of existing data from two Administration for Children and Families (ACF) early childhood programs that overlap with CACFP: Head Start (and Early Head Start) and the Child Care Subsidy Program. Neither program collects CACFP-specific data, but both collect data that might inform a nationally representative study of CACFP.