increase the use of health care facilities.
The difficulty of measuring quality of care was mentioned during the discussion. It was pointed out that differing client expectations would lead to different assessments of identical services. Also, in the broader context, concern was expressed that the framework of quality of care is defined not by clients but by researchers. It was suggested that the latter find out what clients view as quality service and develop the measurements around that. The importance of assessing the usefulness of quality of care data was also mentioned. This type of data is relatively new and needs to be studied in depth before more focus groups and follow-up interviews are conducted and analyzed for evaluations. Only when the quality of care data are understood can decisions be made as to whether to fund improvements in the quality of services as opposed to the quantity of services.
INTERVENTION FOLLOW-UP
Marvin Eisen stated that follow-up data to evaluate the long-term effectiveness of interventions are important for two reasons: (1) to ascertain whether a program intervention has met the explicit objectives set out at the beginning of the program and (2) to evaluate why successful interventions have worked, particularly in a context of funding limitations and frustration with social issues that do not seem to be improving. To do the latter most effectively, interventions need to target particular groups, and data need to be collected on a range of outcome variables that relate to a spectrum of program-input variables. An example demonstrates why a range of data is important. If the birth rate decreases, it could hypothetically be associated with better access to contraception or better access to abortion; a range of data on contraception, abortion, and child wantedness would be able to illuminate a likely cause. Although it is a great leap to link individual-level outcome data to aggregate rate data, Eisen argued that this should be done and also that program intervention variables should be linked to individual behavioral changes.
Rex Warland discussed some lessons learned concerning the use of follow-up data to assess the impact of program interventions. During the 1980s and early 1990s several surveys--panel and cross-sectional--detailing agricultural production practices were undertaken in Swaziland. Two agricultural interventions were also introduced during this time. The postintervention surveys revealed two major lessons. First, anticipated changes in the target population, due to program interventions, may take place over an extended period of time, and follow-up surveys undertaken too early may fail to measure the full effect of the intervention. A solution may be to conduct several follow-up surveys at different times following implementation in order to analyze the evolution of the changes. Second, analysis of the characteristics of those who do not adopt--or who adopt and subsequently abandon--the implemented practices can be extremely valuable for future program implementation strategies. Finally, Warland noted that if results from these follow-up surveys are analyzed and disseminated in a timely fashion, they can serve as early inputs to policy and help to improve the overall effectiveness of the program.
During the discussion the issue of experimentation was raised. Several participants mentioned obstacles to conducting experiments to evaluate family planning program