3
The Elements of Effective Research

One of the most applied sessions of the workshop featured a series of presentations on planning and conducting research on the effectiveness of interventions. Describing these methods in a single session is "taking on the impossible," acknowledged committee co-chair and session moderator Larry V. Hedges of Northwestern University. Students spend a significant portion of their time in graduate school studying these issues. Nevertheless, the organizers of the workshop hoped to at least introduce the major topics that researchers might consider before undertaking this work.

To begin the session, Shiva P. Singh, program director in the Division of Minority Opportunities in Research (MORE) in the National Institute of General Medical Sciences at the National Institutes of Health, gave an overview of the historical context under which the 2003 Request for Applications (RFA) was developed, namely the continued underrepresentation of minorities in biomedical and behavioral sciences. He then outlined some of the major questions that the RFA was meant to address, such as the following examples:

• Can specific forms of teaching, styles of pedagogy, and mentoring be identified that prompt patterns of student engagement that lead to a biomedical or behavioral research career?
• Are some characteristics of a student more determinative in career choice? Are some characteristics more subject to intervention?
• Can an optimum window for intervention be identified either by student age or level of maturity?
• Can behavior patterns critical for a successful biomedical or behavioral research career be taught effectively?
• Can the influence of mentors or other role models be measured, linked to outcomes, and modified?
• Do research experiences (including collaborations at majority institutions) positively affect career choice, and what are the principal components of these experiences and effects?
• With respect to the decision to enter (or remain in) a research career, can the influence of peers, family, community, and economics be distinguished, measured, linked to outcomes, and modified?

Singh provided data on the research community's response since the RFA's 2003 inception, including the number of applications received and funded. He discussed changes that the division has identified since 2003 designed to improve the program. One is to be clearer about what the RFA is designed to produce. "We are interested in empirical, rather than evaluative, research that produces generalizable lessons that may be useful in promoting greater participation of underrepresented minority students in biomedical and behavioral research," Singh said. He also underscored the importance of future applications' incorporating a sound theoretical basis for the hypothesis to be tested; a sample with sufficient statistical power; appropriate comparison or control groups; and rigorous statistical methods.

The division also has come to emphasize the importance of a team approach. As Singh explained, "you need people who know how to run a program, people who know how to ask questions, and people who know how to design an experiment and analyze the data. . . .
So a team approach [is necessary]: a collaborative effort among researchers, program administrators, educators, psychologists, sociologists, statisticians, and economists."

The intention of the RFA was to test the assumptions on which the division's grants were based, said Barry R. Komisaruk, associate dean of the graduate school, professor of psychology, and Board of Governors Distinguished Service Professor at Rutgers University and, in addition, a program director in the MORE Division when the RFA was being developed. Do laboratory experiences, mentoring, academic enrichment, and other interventions really stimulate students to enter careers in biomedical and behavioral research? If so, how do these interventions exert their effects? "What we hoped and we continue to hope is that this research will provide insights
and modifications in program practices that will increase the entry of students into biomedical and behavioral research careers," Komisaruk said.

Komisaruk offered examples of a number of questions that he described as "fundable," in that they attracted the attention of reviewers and program officers in previous rounds of competition:

• What were the critical motivating factors, both positive and negative, among those who pursued biomedical research careers as well as those who did not, despite participating in intervention programs?
• Among recent undergraduates, which factors and experiences affected their decision to enter or avoid a biomedical career, such as the nature of the interactions with their mentors or research experiences?
• Among graduate students in the biomedical and behavioral sciences, what were the optimal times of their entry into a research laboratory experience, and what are the characteristics of these students and their experiences that may have contributed to their pursuit of graduate study?
• How are career decisions influenced by providing information to students on the skills necessary for success, such as formulating research questions, laboratory management, bioethics, publishing, grant writing, and scientific presentations?
• Do hands-on laboratory experiences and laboratory skills acquired as undergraduates affect entry into graduate school?
• How do students' perceptions of the social culture of a research-intensive university versus a university that is more balanced between research and teaching affect their career choice?

Komisaruk also described some of the major questions reviewers asked of these applications:

• Is the proposed program research, or is it an assessment or description of a program?
• Is there a clear rationale for the study? For example, is there a testable hypothesis, or is it just observation?
• What is the likelihood that the proposed intervention will have a measurable effect? For example, is the duration of the intervention that is proposed so short (minutes, a day, or a brief summer session) that it is unlikely to have a measurable effect on the outcome?
• Are the outcome measures a valid indicator of whether the student will eventually go into a biomedical research career? For example, if the student is given a summer experience or a week-long experience to increase interest in the field, does this produce a long-term effect six months later on the career interest expressed by the student? If it does produce increased interest, does that result in an increase in entry into graduate school or a career?
• Are the comparison groups appropriate and ethical? If you apply an intervention to some and you don't apply it to others, are the latter being deprived of a beneficial treatment? Those who want to go into a program may differ in motivation from those who do not actively seek out and choose a program. Which are the appropriate comparison groups: those who are accepted into a program but decline, those who are accepted but cannot participate because of space limitations, or those who are not accepted?
• Is the research sensitive to the unique social, cultural, economic, and other issues of the groups being studied?
• Are women and minorities being lumped into the same categories, even though the issues affecting them may be significantly different?
• Is the design of the questionnaires and interviews appropriate? Are the questionnaires validated? Are the statistical analyses and other analytic techniques appropriate?
• Are the conceptual basis and the relevant literature for the proposed research made explicit?
• If it is a multicomponent intervention, how is a critical element identified? For example, how do you differentiate the effects of mentoring versus social support versus research?
• How does students' involvement in other programs and activities affect their responses to the program being studied?
• In focus groups, how do investigators address the possible social pressure against revealing what one doesn't like?
Students may not want to say what they don't like about the program if they are in the focus group.
• Are the research findings generalizable to other programs?
• Will the data obtained from the research program be manageable? For example, a study with 500 hour-and-a-half interviews would generate an enormous amount of qualitative and quantitative research data.
• Are the interview questions unrealistic? For example, the veracity of recall for adults asked about their elementary school experiences could be questioned.
• Does the principal investigator have a track record with this type of research? If not, does the research team have the necessary expertise?
• Is the measured outcome relevant? For example, does the number of publications really relate to successful career entry into the field?
• Has the principal investigator responded adequately to an initial critique of the grant request?
• Does the proposed study compromise confidentiality?
• Is the principal investigator sufficiently involved in the research?
• Is the application a strategy to fund a program rather than a proposal to do research?

Formulating a Research Question

Formulating a good research question is a topic "that you could say with truth is never taught, and you could say with truth that it is constantly taught," said Martin M. Chemers, professor of psychology at the University of California, Santa Cruz. Given the "presumptuousness" of trying to speak for all researchers in addressing this topic, Chemers generalized from his own experiences in developing a research project to study minorities in the fields of biomedical and behavioral research. In particular, he emphasized three things that research needs: focus, theories, and competencies.

Educational interventions are exceedingly complex. They involve activities associated with the intervention, things that might be measured to see if the intervention is working during the intervention or shortly after, intermediate outcomes, long-term effects, and so on. "You can't study all of this," said Chemers. "You have to focus, you have to pick some piece of it to study." In choosing how to focus a study, researchers almost inevitably peer through the lens of their own expertise. In Chemers' case, his past work had been focused on leadership (specifically, on people's beliefs about their ability to be a leader), so he brought this focus to his research.
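Part of choosing a focus is checking, before any data are collected, whether the planned sample is large enough to detect the effect of interest, the "sufficient statistical power" that Singh emphasized earlier. The sketch below is a hypothetical illustration, not material from the workshop: it uses the standard normal approximation to a two-sided, two-sample comparison of means, and all of the numbers are invented.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided test comparing two group means.
    d is the standardized effect size (Cohen's d: difference in means
    divided by the pooled standard deviation); z_crit is the normal
    critical value for the chosen alpha (1.96 for alpha = 0.05)."""
    noncentrality = d * sqrt(n_per_group / 2.0)
    return normal_cdf(noncentrality - z_crit)

# A medium effect (d = 0.5) with 64 students per group gives roughly
# 80 percent power; with only 20 per group, power drops to about
# 35 percent, so a null result would be largely uninformative.
print(round(two_sample_power(0.5, 64), 2))
print(round(two_sample_power(0.5, 20), 2))
```

The same arithmetic run in reverse (fixing the desired power and solving for n) is how proposal budgets for recruitment are usually justified.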
In a study of first-year college students at UC Santa Cruz, he and his colleagues focused on the role that academic self-efficacy played in the students' performance, health, and adjustment (M.M. Chemers, L. Hu, and B.F. Garcia. 2001. Academic self-efficacy and first-year college student performance and adjustment. Journal of Educational Psychology 93(1): 55-64).

The second point Chemers emphasized is that "without a theory, you don't know what to study among all the things that you could study." Kurt Lewin, the father of modern social psychology, once said that "there is nothing so practical as a good theory." For Chemers and his colleagues, this meant developing a framework, or rubric, describing the central features of programs designed to affect the decisions of minorities to enter or not to enter biomedical and behavioral research. In their research, the central theory was that the psychological drivers of these outcomes are related to a student's belief in his or her ability to do research, which the researchers called "inquiry self-efficacy." Later, they also sought to measure the extent to which a student felt a sense of belonging and had an identity that was compatible with being a scientist.

They also hypothesized that the role of these factors varies by ethnicity and gender, and they initiated six to eight studies to examine this question. "Each one of them," said Chemers, "had a weakness that couldn't be escaped." In one study it may have been difficult to find controls, in another the short-term outcomes were difficult to measure, and so on. Their approach was to have the studies overlap, with the methodology of one study at least partly correcting for a problem in another area. "If the results held true over and over again in all these different methodologies, it increased our confidence that what we were finding was real and valid," Chemers said. They used interviews, case studies, surveys, longitudinal studies, and other research techniques. For example, they longitudinally followed two cohorts that spent four weeks on campus each year as part of a high school science program. They also looked across 14 different programs on campus that support underrepresented students in science careers. They also sought to probe the abilities students acquired as part of their education.
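Before a survey scale such as "inquiry self-efficacy" can be related to outcomes, researchers typically check that its items hang together, often with Cronbach's alpha. The sketch below is a hypothetical illustration, not Chemers' actual instrument or data: the items and ratings are invented, and the computation follows the standard alpha formula.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a survey scale.
    items: list of equal-length lists, one list per survey item,
    holding each respondent's rating on that item."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1.0 - sum_item_vars / variance(totals))

# Hypothetical 1-5 ratings from six students on three invented
# "inquiry self-efficacy" items (e.g., "I can design a study").
# The items move together across students, so alpha comes out high.
items = [
    [4, 5, 2, 3, 5, 1],
    [4, 4, 2, 3, 5, 2],
    [5, 4, 1, 3, 4, 2],
]
print(round(cronbach_alpha(items), 2))
```

By convention, values above roughly 0.7 are taken to indicate acceptable internal consistency; low values suggest the items are not measuring a single underlying construct.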
In one set of measures, they had students engage in simulations that measured their ability to take a set of data, analyze those data, draw conclusions, and recognize the assumptions and limitations underlying those conclusions.

The breadth of Chemers' research highlights the third point he made: "Unless you already know everything, which is rare among many of us, bring the relevant expertise to your team." When he received the RFA from the MORE Division, said Chemers,

[I]t rang a bell. It had a common overlap with things I had already done. But I recognized that [my previous research] was only one piece of it, and there were a lot of other areas that fit in, that were either related to efficacy or were outcomes of efficacy, where I wasn't an expert. So I brought in people. I elicited help from people in the natural sciences to help me identify the nature of scientific
inquiry in the natural sciences. I brought in people who were expert in scientific learning assessment. . . . I brought in a specialist in mentoring. Finally, I enlisted the support of a statistical consultant, a faculty member in my department who was very good at this. So even though I have been a social psychologist for 39 years and done many, many studies, it was clear to me that I didn't have the range of skills for myself that would do this.

The research team was divided into subgroups with overlapping memberships that met more frequently than the complete team. "It is like leading an army," Chemers said. "We have faculty from psychology, education, chemistry, and economics. We have graduate students from psychology, education, chemistry, and earth sciences. You want to talk about cultural differences: you have some vast cultural differences between the social sciences and the natural sciences."

Nevertheless, Chemers stated that he believes there is no difference in the basic scientific method between the social sciences and the natural sciences. In both areas, "rigor means that there can't be competing explanations for what you find. You have to design a study so that at the end you can say, 'this is what we found,' and when people say 'it might have been this, it might have been that,' you say, 'no, we controlled for this, we measured that, it can't be those things.'" Even though the social sciences and the natural sciences may use different methods, "the point still holds that controls help you know whether what you found is accurate."

"One of the most valuable pieces to this entire study was that we developed an atmosphere of mutual respect" among scientists from different disciplines, Chemers said. "We could ask questions about each other's work.
We could say 'I don't see how that works,' and we were open to hearing."

This research "has been one of the most complex and challenging projects that I have ever been involved with, and also one of the most exciting and most rewarding," Chemers concluded.

Designing Research Procedures

Research design has two integrated components, said Hedges, who gave the presentation at the workshop on designing research procedures: (1) a strategy for data collection and (2) a coordinated strategy for data analysis and interpretation that is designed to answer research questions. In that regard, research design needs to be tightly coupled with the formulation of the research question. "In fact, one of the great weaknesses of research proposals that I have
seen, not only in this program but in others as well, is the failure to tightly couple problem formulation and research design and analysis," Hedges said. "When you bring together a team of people and have one person write each of those three sections, you very often get a proposal in which those don't articulate very well."

Research design requires creating a "logic of inquiry," according to Hedges, that explicates how the empirical evidence being collected implies an answer to a research question. This logic of inquiry needs to be situated within a knowledge base, since, as he noted, "you have to start from somewhere" and make explicit why the collection of a particular set of data is relevant to the question. "The logic of inquiry provides a kind of argument about how empirical evidence is going to be used to shed light on the research question," said Hedges.

The logic of inquiry can rely on qualitative or quantitative measures and often involves a mixture of the two. It can rely either on intensive designs that try to capture a lot of empirical evidence about a relatively small number of people, or on extensive designs that collect a smaller amount of data about a larger number of people or a larger number of programs. As also described by Chemers (above), effective research designs often combine elements of different approaches to make up for the weaknesses of each approach.

Research design needs to adhere to several fundamental principles, said Hedges, ideas that are "so simple in some ways that I wouldn't mention them except that I have seen proposals blunder in each of the areas that I am going to mention":

• First, variation is essential in order to obtain empirical evidence that relations exist. If researchers study only effective programs, they cannot be sure which features of effective programs do not also exist in ineffective programs.
Some variation occurs naturally, while other research designs create variation, as when experiments or quasi-experiments are conducted. Many designs, Hedges pointed out, are hybrids that involve some naturally occurring and some artificially created variation.
• Second, not all relations are equally sized. "To understand whether or not an effect which might lead to designing an intervention is worth paying attention to," Hedges observed, "you need to know how big it is." The size of an effect needs to be compared to other effects or measures to gauge its importance. "Without knowing that, it is hard to say whether that so-called treatment effect is big enough to take seriously or so small as to be unimportant," Hedges said.
• Third, according to Hedges, when studying developmental processes or the effects of those processes, longitudinal studies are almost always more revealing than cross-sectional studies. "Studying the same people over time, and not different groups of people who happen to be different ages, has been incredibly important in various areas of social research, and revealed things that weren't known before," Hedges said. A classic example is the study of poverty in the 1960s, where cross-sectional comparisons largely overlooked the fact that many people cycle in and out of poverty, which leads to quite different understandings of what poverty is and how to address it.

In looking for the causes of particular effects, Hedges pointed out that Thomas Cook and Donald Campbell of Northwestern University and William Shadish of the University of California, Merced, have developed a framework for thinking about research design. Their framework involves four classes of validity: statistical conclusion validity, internal validity, external validity, and construct validity of cause.

• Statistical conclusion validity focuses on whether the relation between variables observed in a study is accurate. For example, are the measures being used reliable enough to permit the relation to be observed in the first place, are the analytic methods appropriate for the kind of data that were collected, and were the assumptions made by the analytic procedures met in the data collection process?
• Internal validity asks whether a relation between variables is causal or just an association. The classic example is the relation between ice cream sales and the monthly homicide rate in major cities. The two are not causally related, but they both increase as temperature increases.
"In the warm months, people eat a lot of ice cream and they also commit a lot of crimes, but that doesn't mean that the relationship between ice cream sales and crime is causal," said Hedges. Another way in which the internal validity of a design can be compromised is when different treatment groups have different kinds of students. "If the best students wind up selecting themselves into an intervention, the intervention is going to look better than it deserves to look in a certain sense, unless you find a way to take that into account," Hedges said.

(W.R. Shadish, T.D. Cook, and D.T. Campbell. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.)
• External validity involves generalizability. If an intervention is identified as causally related to an outcome, would that relation generalize to other settings and other individuals? For example, if researchers work in a setting that is very unusual or with participants who are highly unusual, the results may not generalize.
• Construct validity of cause asks whether the "active ingredient" in an intervention has been correctly identified. "Since most treatments that we have been talking about today are not one thing but a bundle of things, the problem of trying to sort out which of the things in the bundle, including things that you might not even have intended to put into the bundle but are just incidental features of the bundle, are the actual ingredients that produce the effect is the problem of sorting out construct validity of cause," said Hedges. Randomized experiments can help sort out these factors, but they don't necessarily protect against misattributing cause. For example, people who know they are in a control group may try harder just because they are in a control group. Or a control group may be demoralized by having been denied something that they thought was valuable.

Different study designs have different strengths, Hedges pointed out. For example, observational studies that take advantage of naturally occurring variation can be subject to confounding variables that threaten their internal validity. Researchers can try to control for this, but, as Hedges pointed out, "how can you know that you controlled for all of the possible confounding variables?"

In contrast, a randomized experiment can control for confounders that even the researchers haven't identified. "So the big strength of randomized experiments is that they have high internal validity," said Hedges. "Their big weakness is they are usually only performable . . .
with oddly selected samples that make it somewhat more difficult to claim that there is ready generalizability." Similarly, ethnographic designs can offer insight about known mechanisms, uncover new mechanisms, and test many hypotheses in a single investigation. But sometimes their internal validity is not high, and they can be difficult to generalize since they often involve small and unusual samples.

"No research design is perfect," Hedges said. "You need to know that yourself. [And] if you are planning to get funding for your research, it is probably wise to admit it to others as well. Reviewers and other sophisticated critics know that no design is perfect, and their question to you in evaluating your design is usually whether you know it is imperfect and [whether] you have a strategy
for dealing with the imperfections, like a series of studies that each have somewhat different flaws."

Hedges also discussed the pros and cons of using research design consultants. No one person is likely to have all of the skills needed to develop an optimal research design, so a team could involve a research design consultant. "I have been in that role quite a few times in my life, and I would argue that people who do that kind of thing can be helpful," said Hedges. "But based on my own experience with this, and the experience of others who have played this role, you have to be involved early in the planning and research project to be most helpful. The worst thing in the world you can do is hire a very good person or engage a very good person to join you so late in the project that he or she can't really help you very much in planning the design and thinking through various aspects of the project." Similarly, research design consultants have to be able to learn a lot about a research project to be helpful, even though they will never know as much about the research as the original investigators.

Analyzing the Data

The most important thing about the statistical analysis of data, said Kenneth I. Maton, professor of psychology at the University of Maryland, Baltimore County (UMBC), is that statistical methods need to be built into a research project from the beginning. They need "to flow directly from the research questions that you are asking. That is the number-one rule," Maton said.
"The techniques that you apply to analyze your data should be those that are appropriate to answer the questions you are asking."

Maton used as an example the analyses he and his students have conducted using data gathered from research focused on the Meyerhoff Scholarship Program at UMBC, which is a comprehensive program for high-achieving high school students who are interested in pursuing doctoral study in the sciences or engineering and who are interested in the advancement of minorities in science and engineering (K.I. Maton, F.A. Hrabowski III, and C.L. Schmitt. 2000. African American college students excelling in the sciences: College and postcollege outcomes in the Meyerhoff Scholars Program. Journal of Research in Science Teaching 37(7): 629-654). Maton's group has developed survey items that assess student experiences in the various program components that could affect outcomes. These components range from formal activities like summer bridge programs, to summer research experiences, to
social interaction with other Meyerhoff students. To reduce these items into a usable scale and relate them to outcomes, he and his colleagues performed what is called a factor analysis, which is a form of data reduction. There are several ways of conducting such an analysis, but the ultimate result is to show which subsets of items form groups whose members are more closely associated with one another and less closely associated with the other factors. For example, major aspects of the Meyerhoff program, including financial support, study groups, the summer bridge program, and the quality of interactions with other students in the program, form a cluster. More specific aspects of the program associate on another scale, including students' involvement with the community, cultural activities, and mentoring and advising by Meyerhoff staff. Interestingly, the summer research activities were not closely associated with either set of items, thus constituting a unique and separate aspect of the student experience. In general, "data reduction is one important thing that you want to consider if you are going to be using survey items," said Maton.

Another form of data analysis is to compare the experiences and outcomes of different groups. For example, the Meyerhoff program was originally designed for African Americans, but concern about possible legal challenges led to the program being offered to others as well. One analysis of the program compared the experiences of African Americans with those of other groups, including Asian American and white students, with the hypothesis being that African Americans would have a greater sense of support and belonging from the program since it was designed for them. But the comparison revealed that the groups scored equally on this measure. The technique used in this analysis, which is known as analysis of variance, "is a way to look at group differences," said Maton.
"[It] allows you to say whether the differences in the mean levels of the groups are great enough, given the amount of variation within each group, so that statistically you would say that, 'probably, the difference is not due to chance.'"

Analysis of variance can be used when the measures are continuous, but in many cases the variables being studied take discrete values, such as whether a student does or does not go on to graduate school, or whether a student graduates or not. For example, Maton's research group has compared the Meyerhoff students to students who were accepted into the program but decided to attend a different institution instead, using categorical outcome variables, such as whether the students graduated in a STEM discipline. Maton's group found that a much higher percentage of the Meyerhoff students graduated after four or five years in a STEM discipline (83 percent) compared with 46 percent of those who declined the offer. In this case, the researchers used a technique called chi-square analysis to determine whether this difference is statistically significant. They also were able to look at other possible factors between the two groups of students that might have contributed to this difference besides their experiences in the program, such as differences in grade point averages, parental socioeconomic class, and so on. They then used an analysis of covariance to study whether some of these factors might have been confounding variables. (For example, Maton pointed out, analysis of covariance could identify temperature as the confounding variable in a study relating level of ice cream sales to homicide rates, as described by Hedges in an earlier presentation.)

Another important research focus is predictors of outcome: whether a given variable contributes to an outcome. Multiple linear regression analysis is used to examine which predictor variables contribute to a continuous outcome, whereas logistic regression analysis is used to examine the relationship between predictor variables and categorical outcomes. For example, a logistic regression analysis showed that African American Meyerhoff students, who had lower average SAT scores, were just as likely to gain entrance into a doctoral program as white and Asian American students. "Even though they come in with lower standardized test scores, they are just as likely to go into a doctoral program as these other students," said Maton. "This is a really good finding for the Meyerhoff program."

Maton also emphasized the importance of bringing outside experts onto the research team. "The major take-home message is . . . if you don't have the expertise yourself, you want to bring in consultants to work with you.
[And] you want to make sure they understand your project enough and your goals enough so that they can provide useful and helpful consultation," he said.

In addition, Maton stressed the value of combining quantitative data with qualitative data: "I am a firm believer in combining the two." For one thing, the qualitative data help support the quantitative data. With the Meyerhoff program, the combination "helps me believe that this program is affecting these youth," said Maton. "When we talk to them, when we do focus groups with them, when we do ethnographic observation with them, you can see that there is something going on, that these students are developing an identity as African American [science] students, that they want to go out and do something in the world in terms of STEM. . . . When they talk about the Meyerhoff program, they talk about the fact that they
feel supported, that they feel inspired, and that they feel incredibly challenged, but also incredibly supported by the program."

Qualitative data analysis does not consist simply of reading through the transcripts of a set of interviews or focus groups. Through a very labor-intensive process, codes are developed to analyze the transcript contents. For example, the codes used in a study of educational interventions might relate to the mention of self-efficacy beliefs, a sense of belonging, mentoring, or the presence of role models. The interviews are analyzed, coded, and rechecked. Themes are developed that connect the codes, including negative cases, where researchers scour the data set for counterexamples. Software packages bring power to these analyses, because these packages can systematically pull up material that is coded in particular ways.

"It is an iterative process where you are recording, reworking your codes, reworking your themes," said Maton. "In the ideal world, you share your themes with the participants who took part in the interviews and took part in the focus groups. You get some checks from [them and] others, and you always have multiple people working on the project and providing different perspectives. So you can do it more systematically rather than less systematically, but it should be done in a team effort with multiple people involved and multiple ways to check the data."
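The bookkeeping behind the coding process Maton described, applying a codebook to transcript segments and pulling coded material back up, can be sketched in a few lines. Real qualitative coding is interpretive work done and cross-checked by multiple human coders, so the keyword matching below is only an illustration; the codebook, indicator phrases, and transcript excerpts are all invented.

```python
# Hypothetical codebook: each code maps to phrases that suggest it.
codebook = {
    "self-efficacy": ["i can", "confident", "able to"],
    "belonging": ["fit in", "belong", "community"],
    "mentoring": ["mentor", "advisor"],
}

# Invented interview segments standing in for real transcript data.
segments = [
    "My mentor made me feel confident about running my own experiments.",
    "I finally felt like I belong in the lab community.",
    "I was not sure I was able to handle the coursework at first.",
]

def code_segments(segments, codebook):
    """Return {code: [segments containing any of its indicator phrases]},
    so coded material can be retrieved and tallied by code."""
    tagged = {code: [] for code in codebook}
    for seg in segments:
        low = seg.lower()
        for code, phrases in codebook.items():
            if any(p in low for p in phrases):
                tagged[code].append(seg)
    return tagged

tagged = code_segments(segments, codebook)
for code, hits in tagged.items():
    print(code, len(hits))  # how often each code appears
```

Dedicated qualitative-analysis packages add what this sketch omits: the iterative recoding, theme building, inter-coder agreement checks, and retrieval across large transcript sets that Maton emphasized.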