Site Visit

  • Is the project being implemented as advertised?

  • What is the intervention to be evaluated?

  • What outcomes could be assessed? By what measures?

  • Are there valid comparison groups?

  • Is random assignment possible?

  • What threats to a sound evaluation are most likely to occur?

  • Are there hidden strengths in the project?

  • What are the sizes and characteristics of the target populations?

  • How is the target population identified (i.e., what are the eligibility criteria)? Who or what gets excluded as a target?

  • Have the characteristics of the target population changed over time?

  • How large would target and comparison samples be after one year of observation?

  • What would members of the target population receive if placed in a comparison sample?

  • What are the shortcomings/gaps in delivering the intervention?

  • What do recipients of the intervention think the project does?

  • How do they assess the services received?

  • What kinds of data elements are available from existing data sources?

  • What specific input, process, and outcome measures would they support?

  • How complete are data records? Can you get samples?

  • What routine reports are produced?

  • Can target populations be followed over time?

  • Can services delivered be identified?

  • Can the data systems help diagnose implementation problems?

  • Do staff members tell consistent stories about the project?

  • Are their backgrounds appropriate for the project’s activities?

  • What do partners provide/receive?

  • How integral to project success are the partners?

  • What changes is the director willing to make to support the evaluation?
