Understanding Interventions that Encourage Minorities to Pursue Research Careers: Summary of a Workshop

3 The Elements of Effective Research

One of the most applied sessions of the workshop featured a series of presentations on planning and conducting research on the effectiveness of interventions. Describing these methods in a single session is “taking on the impossible,” acknowledged committee co-chair and session moderator Larry V. Hedges of Northwestern University. Students spend a significant portion of their time in graduate school studying these issues. Nevertheless, the organizers of the workshop hoped to at least introduce the major topics that researchers might consider before undertaking this work.

To begin the session, Shiva P. Singh, program director in the Division of Minority Opportunities in Research (MORE) in the National Institute of General Medical Sciences at the National Institutes of Health, gave an overview of the historical context in which the 2003 Request for Applications (RFA) was developed, namely the continued underrepresentation of minorities in the biomedical and behavioral sciences. He then outlined some of the major questions that the RFA was meant to address, including the following:

- Can specific forms of teaching, styles of pedagogy, and mentoring be identified that prompt patterns of student engagement that lead to a biomedical or behavioral research career?
- Are some characteristics of a student more determinative in career choice? Are some characteristics more subject to intervention?
- Can an optimum window for intervention be identified, either by student age or by level of maturity?
- Can behavior patterns critical for a successful biomedical or behavioral research career be taught effectively?
- Can the influence of mentors or other role models be measured, linked to outcomes, and modified?
- Do research experiences (including collaborations at majority institutions) positively affect career choice, and what are the principal components of these experiences and effects?
- With respect to the decision to enter (or remain in) a research career, can the influence of peers, family, community, and economics be distinguished, measured, linked to outcomes, and modified?

Singh provided data on the research community’s response since the RFA’s 2003 inception, including the number of applications received and funded. He discussed changes that the division has identified since 2003 that are designed to improve the program. One is to be clearer about what the RFA is designed to produce. “We are interested in empirical—rather than evaluative—research that produces generalizable lessons that may be useful in promoting greater participation of underrepresented minority students in biomedical and behavioral research,” Singh said. He also underscored the importance of future applications’ incorporating a sound theoretical basis for the hypothesis to be tested; a sample with sufficient statistical power; appropriate comparison or control groups; and rigorous statistical methods. The division also has come to emphasize the importance of a team approach. As Singh explained, “you need people who know how to run a program, people who know how to ask questions, and people who know how to design an experiment and analyze the data….
So a team approach [is necessary]: a collaborative effort among researchers, program administrators, educators, psychologists, sociologists, statisticians, and economists.”

The intention of the RFA was to test the assumptions on which the division’s grants were based, said Barry R. Komisaruk, associate dean of the graduate school, professor of psychology, and Board of Governors Distinguished Service Professor at Rutgers University, who was also a program director in the MORE Division when the RFA was being developed. Do laboratory experiences, mentoring, academic enrichment, and other interventions really stimulate students to enter careers in biomedical and behavioral research? If so, how do these interventions exert their effects? “What we hoped and we continue to hope is that this research will provide insights
and modifications in program practices that will increase the entry of students into biomedical and behavioral research careers,” Komisaruk said.

Komisaruk offered examples of a number of questions that he described as “fundable,” in that they attracted the attention of reviewers and program officers in previous rounds of competition:

- What were the critical motivating factors—both positive and negative—among those who pursued biomedical research careers as well as those who did not, despite participating in intervention programs?
- Among recent undergraduates, which factors and experiences affected their decision to enter or avoid a biomedical career, such as the nature of their interactions with mentors or their research experiences?
- Among graduate students in the biomedical and behavioral sciences, what were the optimal times of their entry into a research laboratory experience, and what were the characteristics of these students and their experiences that may have contributed to their pursuit of graduate study?
- How are career decisions influenced by providing students with information on the skills necessary for success, such as formulating research questions, laboratory management, bioethics, publishing, grant writing, and scientific presentations?
- Do hands-on laboratory experiences and laboratory skills acquired as undergraduates affect entry into graduate school?
- How do students’ perceptions of the social culture of a research-intensive university versus a university that is more balanced between research and teaching affect their career choice?

Komisaruk also described some of the major questions reviewers asked of these applications:

- Is the proposed program research, or is it an assessment or description of a program?
- Is there a clear rationale for the study? For example, is there a testable hypothesis, or is it just observation?
- What is the likelihood that the proposed intervention will have a measurable effect? For example, is the proposed intervention’s duration so short (minutes, a day, or a brief summer session) that it is unlikely to have a measurable effect on the outcome?
- Are the outcome measures a valid indicator of whether the student will eventually go into a biomedical research career? For example, if the student is given a summer experience or a week-long experience to increase interest in the field, does this produce a long-term effect on the career interest expressed by the student six months later? If it does produce increased interest, does that result in an increase in entry into graduate school or a career?
- Are the comparison groups appropriate and ethical? If you apply an intervention to some and you don’t apply it to others, are the latter being deprived of a beneficial treatment? Those who want to go into a program may differ in motivation from those who do not actively seek out and choose a program. Which are the appropriate comparison groups—those who are accepted into a program but decline, those who are accepted but cannot participate because of space limitations, or those who are not accepted?
- Is the research sensitive to the unique social, cultural, economic, and other issues of the groups being studied? Are women and minorities being lumped into the same categories, even though the issues affecting them may be significantly different?
- Is the design of the questionnaires and interviews appropriate? Are the questionnaires validated?
- Are the statistical analyses and other analytic techniques appropriate?
- Are the conceptual basis and the relevant literature for the proposed research made explicit?
- If it is a multicomponent intervention, how is a critical element identified? For example, how do you differentiate the effects of mentoring versus social support versus research?
- How does students’ involvement in other programs and activities affect their responses to the program being studied?
- In focus groups, how do investigators address the possible social pressure against revealing what one doesn’t like?
Students may not want to say what they don’t like about the program if they are in the focus group.
- Are the research findings generalizable to other programs?
- Will the data obtained from the research program be manageable? For example, a study with 500 hour-and-a-half interviews would generate an enormous amount of qualitative and quantitative research data.
- Are the interview questions unrealistic? For example, the veracity of recall for adults asked about their elementary school experiences could be questioned.
- Does the principal investigator have a track record with this type of research? If not, does the research team have the necessary expertise?
- Is the measured outcome relevant? For example, does the number of publications really relate to successful career entry into the field?
- Has the principal investigator responded adequately to an initial critique of the grant request?
- Does the proposed study compromise confidentiality?
- Is the principal investigator sufficiently involved in the research?
- Is the application a strategy to fund a program rather than a proposal to do research?

FORMULATING A RESEARCH QUESTION

Formulating a good research question is a topic “that you could say with truth is never taught, and you could say with truth that it is constantly taught,” said Martin M. Chemers, professor of psychology at the University of California, Santa Cruz. Given the “presumptuousness” of trying to speak for all researchers in addressing this topic, Chemers generalized from his own experiences in developing a research project to study minorities in the fields of biomedical and behavioral research. In particular, he emphasized three things that research needs: focus, theories, and competencies.

Educational interventions are exceedingly complex. They involve activities associated with the intervention, things that might be measured to see if the intervention is working during or shortly after the intervention, intermediate outcomes, long-term effects, and so on. “You can’t study all of this,” said Chemers. “You have to focus, you have to pick some piece of it to study.” In choosing how to focus a study, researchers almost inevitably peer through the lens of their own expertise. In Chemers’ case, his past work had focused on leadership—specifically, on people’s beliefs about their ability to be a leader—so he brought this focus to his research.
In a study of first-year college students at UC Santa Cruz, he and his colleagues focused on the role that academic self-efficacy played in the students’ performance, health, and adjustment.1

The second point Chemers emphasized is that “without a theory, you don’t know what to study among all the things that you could study.” Kurt Lewin, the father of modern social psychology, once said that “there is nothing so practical as a good theory.” For Chemers and his colleagues, this meant developing a framework, or rubric, describing the central features of programs designed to affect the decisions of minorities to enter or not to enter biomedical and behavioral research. In their research, the central theory was that the psychological drivers of these outcomes are related to a student’s belief in his or her ability to do research, which the researchers called “inquiry self-efficacy.” Later, they also sought to measure the extent to which a student felt a sense of belonging and had an identity that was compatible with being a scientist.

They also hypothesized that the role of these factors varies by ethnicity and gender, and they initiated six to eight studies to examine this question. “Each one of them,” said Chemers, “had a weakness that couldn’t be escaped.” In one study it may have been difficult to find controls, in another the short-term outcomes were difficult to measure, and so on. Their approach was to have the studies overlap, with the methodology of one study at least partly correcting for a problem in another. “If the results held true over and over again in all these different methodologies, it increased our confidence that what we were finding was real and valid,” Chemers said. They used interviews, case studies, surveys, longitudinal studies, and other research techniques. For example, they longitudinally followed two cohorts that spent four weeks on campus each year as part of a high school science program. They also looked across 14 different programs on campus that support underrepresented students in science careers.

1 M.M. Chemers, L. Hu, and B.F. Garcia. 2001. Academic self-efficacy and first-year college student performance and adjustment. Journal of Educational Psychology 93(1): 55-64.
They also sought to probe the abilities students acquired as part of their education. In one set of measures, they had students engage in simulations that measured their ability to take a set of data, analyze those data, draw conclusions, and recognize the assumptions and limitations underlying those conclusions.

The breadth of Chemers’ research highlights the third point he made: “Unless you already know everything, which is rare among many of us, bring the relevant expertise to your team.” When he received the RFA from the MORE Division, said Chemers,

[I]t rang a bell. It had a common overlap with things I had already done. But I recognized that [my previous research] was only one piece of it, and there were a lot of other areas that fit in, that were either related to efficacy or were outcomes of efficacy, where I wasn’t an expert. So I brought in people. I elicited help from people in the natural sciences to help me identify the nature of scientific
inquiry in the natural sciences. I brought in people who were expert in scientific learning assessment…. I brought in a specialist in mentoring. Finally, I enlisted the support of a statistical consultant, a faculty member in my department who was very good at this. So even though I have been a social psychologist for 39 years and done many, many studies, it was clear to me that I didn’t have the range of skills for myself that would do this.

The research team was divided into subgroups with overlapping memberships that met more frequently than the complete team. “It is like leading an army,” Chemers said. “We have faculty from psychology, education, chemistry, and economics. We have graduate students from psychology, education, chemistry, and earth sciences. You want to talk about cultural differences—you have some vast cultural differences between the social sciences and the natural sciences.” Nevertheless, Chemers stated that he believes there is no difference in the basic scientific method between the social sciences and the natural sciences. In both areas, “rigor means that there can’t be competing explanations for what you find. You have to design a study so that at the end you can say, ‘this is what we found,’ and when people say ‘it might have been this, it might have been that,’ you say, ‘no, we controlled for this, we measured that, it can’t be those things.’” Even though the social sciences and the natural sciences may use different methods, “the point still holds that controls help you know whether what you found is accurate.”

“One of the most valuable pieces to this entire study was that we developed an atmosphere of mutual respect” among scientists from different disciplines, Chemers said. “We could ask questions about each other’s work.
We could say ‘I don’t see how that works,’ and we were open to hearing.” This research “has been one of the most complex and challenging projects that I have ever been involved with—and also one of the most exciting and most rewarding,” Chemers concluded.

DESIGNING RESEARCH PROCEDURES

Research design has two integrated components, said Hedges, who gave the workshop presentation on designing research procedures: (1) a strategy for data collection and (2) a coordinated strategy for data analysis and interpretation that is designed to answer the research questions. In that regard, research design needs to be tightly coupled with the formulation of the research question. “In fact, one of the great weaknesses of research proposals that I have
seen—not only in this program, but in others, as well—is the failure to tightly couple problem formulation and research design and analysis,” Hedges said. “When you bring together a team of people and have one person write each of those three sections, you very often get a proposal in which those don’t articulate very well.”

Research design requires creating a “logic of inquiry,” according to Hedges, that explicates how the empirical evidence being collected implies an answer to a research question. This logic of inquiry needs to be situated within a knowledge base, since, as he noted, “you have to start from somewhere” and make explicit why the collection of a particular set of data is relevant to the question. “The logic of inquiry provides a kind of argument about how empirical evidence is going to be used to shed light on the research question,” said Hedges. The logic of inquiry can rely on qualitative or quantitative measures and often involves a mixture of the two. It can rely either on intensive designs that try to capture a lot of empirical evidence about a relatively small number of people, or on extensive designs that collect a smaller amount of data about a larger number of people or a larger number of programs. As also described by Chemers (above), effective research designs often combine elements of different approaches to make up for the weaknesses of each approach.

Research design needs to adhere to several fundamental principles, said Hedges, ideas that are “so simple in some ways that I wouldn’t mention them except that I have seen proposals blunder in each of the areas that I am going to mention”:

First, variation is essential in order to obtain empirical evidence that relations exist. If researchers study only effective programs, they cannot be sure which features of effective programs do not also exist in ineffective programs.
Some variation occurs naturally, while other research designs create variation, as when experiments or quasi-experiments are conducted. Many designs, Hedges pointed out, are hybrids that involve some naturally occurring and some artificially created variation.

Second, not all relations are equally sized. “To understand whether or not an effect which might lead to designing an intervention is worth paying attention to,” Hedges observed, “you need to know how big it is.” The size of an effect needs to be compared to other effects or measures to gauge its importance. “Without knowing that, it is hard to say whether that so-called treatment effect is big enough to take seriously or so small as to be unimportant,” Hedges said.
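Hedges’ point about effect magnitude is commonly operationalized as a standardized effect size, which puts a raw difference on a scale that can be compared across studies and measures. A minimal sketch (all the outcome values below are invented for illustration):

```python
import statistics

def cohens_d(treated, control):
    """Standardized mean difference: the raw effect divided by the
    pooled standard deviation, so effects measured on different
    scales can be compared against common benchmarks."""
    n1, n2 = len(treated), len(control)
    v1, v2 = statistics.variance(treated), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

# Hypothetical GPA-like outcomes for intervention vs. comparison students
intervention = [3.4, 3.1, 3.6, 3.2, 3.5, 3.3]
comparison = [3.0, 2.9, 3.2, 3.1, 2.8, 3.0]
print(round(cohens_d(intervention, comparison), 2))  # → 2.11
```

Knowing the standardized size is what lets a researcher answer Hedges’ question: a difference of two pooled standard deviations would be very large for an educational intervention, while a difference of a few hundredths might not be worth pursuing.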
Third, according to Hedges, when studying developmental processes or the effects of those processes, longitudinal studies are almost always more revealing than cross-sectional studies. “Studying the same people over time and not different groups of people who happen to be different ages has been incredibly important in various areas of social research, and revealed things that weren’t known before,” Hedges said. A classic example is the study of poverty in the 1960s, where cross-sectional comparisons largely overlooked the fact that many people cycle in and out of poverty, which leads to quite different understandings of what poverty is and how to address it.

In looking for the causes of particular effects, Hedges pointed out that Thomas Cook and Donald Campbell of Northwestern University and William Shadish of the University of California, Merced, have developed a framework for thinking about research design.2 Their framework involves four classes of validity: statistical conclusion validity, internal validity, external validity, and construct validity of cause.

Statistical conclusion validity focuses on whether the relation between variables observed in a study is accurate. For example, are the measures being used reliable enough to permit the relation to be observed in the first place, are the analytic methods appropriate for the kind of data that were collected, and were the assumptions made by the analytic procedures met in the data collection process?

Internal validity asks whether a relation between variables is causal or just an association. The classic example is the relation between ice cream sales and the monthly homicide rate in major cities. The two are not causally related, but they both increase as temperature increases.
“In the warm months, people eat a lot of ice cream and they also commit a lot of crimes, but that doesn’t mean that the relationship between ice cream sales and crime is causal,” said Hedges. Another way in which the internal validity of a design can be compromised is when different treatment groups have different kinds of students. “If the best students wind up selecting themselves into an intervention, the intervention is going to look better than it deserves to look in a certain sense, unless you find a way to take that into account,” Hedges said.

2 W.R. Shadish, T.D. Cook, and D.T. Campbell. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.
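The self-selection threat Hedges describes can be made concrete with a small simulation (all values invented). Here a program with zero true effect looks effective when motivated students select themselves into it, while random assignment balances the hidden confounder and the spurious gap disappears:

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical students: a latent "motivation" score fully determines
# the outcome; the intervention itself has zero true effect.
motivation = [random.gauss(0, 1) for _ in range(10_000)]

# Self-selection: motivated students are more likely to sign up.
treated_sel, control_sel = [], []
for m in motivation:
    p_enroll = 0.7 if m > 0 else 0.3
    (treated_sel if random.random() < p_enroll else control_sel).append(m)

# The naive comparison shows a large, entirely spurious "effect".
print("self-selected gap:", round(mean(treated_sel) - mean(control_sel), 2))

# Random assignment: enrollment no longer depends on motivation,
# so the groups are balanced even on confounders nobody measured.
treated_rnd, control_rnd = [], []
for m in motivation:
    (treated_rnd if random.random() < 0.5 else control_rnd).append(m)

print("randomized gap:", round(mean(treated_rnd) - mean(control_rnd), 2))
```

The self-selected gap comes out around 0.6 standard deviations despite the program doing nothing, while the randomized gap is near zero, which is the sense in which randomization protects internal validity.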
External validity involves generalizability. If an intervention is identified as causally related to an outcome, would that relation generalize to other settings and other individuals? For example, if researchers work in a setting that is very unusual or with participants who are highly unusual, the results may not generalize.

Construct validity of cause asks whether the “active ingredient” in an intervention has been correctly identified. “Since most treatments that we have been talking about today are not one thing but a bundle of things, the problem of trying to sort out which of the things in the bundle, including things that you might not even have intended to put into the bundle but are just incidental features of the bundle, are the actual ingredients that produce the effect is the problem of sorting out construct validity of cause,” said Hedges. Randomized experiments can help sort out these factors, but they don’t necessarily protect against misattributing cause. For example, people who know they are in a control group may try harder just because they are in a control group. Or a control group may be demoralized by having been denied something that they thought was valuable.

Different study designs have different strengths, Hedges pointed out. For example, observational studies that take advantage of naturally occurring variation can be subject to confounding variables that threaten their internal validity. Researchers can try to control for this, but, as Hedges pointed out, “how can you know that you controlled for all of the possible confounding variables?” In contrast, a randomized experiment can control for confounders that even the researchers haven’t identified. “So the big strength of randomized experiments is that they have high internal validity,” said Hedges.
“Their big weakness is they are usually only performable… with oddly selected samples that make it somewhat more difficult to claim that there is ready generalizability.” Similarly, ethnographic designs can offer insight about known mechanisms, uncover new mechanisms, and test many hypotheses in a single investigation. But sometimes their internal validity is not high, and they can be difficult to generalize since they often involve small and unusual samples.

“No research design is perfect,” Hedges said. “You need to know that yourself. [And] if you are planning to get funding for your research, it is probably wise to admit it to others as well. Reviewers and other sophisticated critics know that no design is perfect, and their question to you in evaluating your design is usually whether you know it is imperfect and [whether] you have a strategy
for dealing with the imperfections, like a series of studies that each have somewhat different flaws.”

Hedges also discussed the pros and cons of using research design consultants. No one person is likely to have all of the skills needed to develop an optimal research design, so a team could involve a research design consultant. “I have been in that role quite a few times in my life, and I would argue that people who do that kind of thing can be helpful,” said Hedges. “But based on my own experience with this, and the experience of others who have played this role, you have to be involved early in the planning and research project to be most helpful. The worst thing in the world you can do is hire a very good person or engage a very good person to join you so late in the project that he or she can’t really help you very much in planning the design and thinking through various aspects of the project.” Similarly, research design consultants have to be able to learn a lot about a research project to be helpful, even though they will never know as much about the research as the original investigators.

ANALYZING THE DATA

The most important thing about the statistical analysis of data, said Kenneth I. Maton, professor of psychology at the University of Maryland, Baltimore County (UMBC), is that statistical methods need to be built into a research project from the beginning. They need “to flow directly from the research questions that you are asking. That is the number-one rule,” Maton said.
“The techniques that you apply to analyze your data should be those that are appropriate to answer the questions you are asking.”

Maton used as an example the analyses he and his students have conducted using data gathered from research focused on the Meyerhoff Scholars Program at UMBC, a comprehensive program for high-achieving high school students who are interested in pursuing doctoral study in the sciences or engineering and in the advancement of minorities in science and engineering.3

3 K.I. Maton, F.A. Hrabowski III, and C.L. Schmitt. 2000. African American college students excelling in the sciences: College and postcollege outcomes in the Meyerhoff Scholars Program. Journal of Research in Science Teaching 37(7): 629-654.

Maton’s group has developed survey items that assess student experiences in the various program components that could affect outcomes. These components range from formal activities like summer bridge programs, to summer research experiences, to
social interaction with other Meyerhoff students. To reduce these items to a usable scale and relate them to outcomes, he and his colleagues performed what is called a factor analysis, which is a form of data reduction. There are several ways of conducting such an analysis, but the ultimate result is to show which subsets of items form groups whose members are more closely associated with one another and less closely associated with the other factors. For example, major aspects of the Meyerhoff program that include financial support, study groups, the summer bridge program, and the quality of interactions with other students in the program form one cluster. More specific aspects of the program associate on another scale, including students’ involvement with the community, cultural activities, and mentoring and advising by Meyerhoff staff. Interestingly, the summer research activities were not closely associated with either set of items, thus constituting a unique and separate aspect of the student experience. In general, “data reduction is one important thing that you want to consider if you are going to be using survey items,” said Maton.

Another form of data analysis is to compare the experiences and outcomes of different groups. For example, the Meyerhoff program was originally designed for African Americans, but concern about possible legal challenges led to the program being offered to others as well. One analysis of the program compared the experiences of African Americans with those of other groups, including Asian American and white students, with the hypothesis being that African Americans would have a greater sense of support and belonging from the program since it was designed for them. But the comparison revealed that the groups scored equally on this measure.
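The clustering idea behind the data reduction Maton describes can be illustrated crudely by grouping survey items whose responses correlate highly. This is only a stand-in for real factor analysis (which would use a dedicated routine such as sklearn.decomposition.FactorAnalysis), and the items and responses below are invented:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length response vectors."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 survey responses from eight students on four items.
items = {
    "study_groups":  [5, 4, 5, 2, 1, 2, 4, 5],
    "summer_bridge": [5, 5, 4, 2, 2, 1, 5, 4],
    "community":     [1, 2, 1, 5, 4, 5, 2, 1],
    "cultural_acts": [2, 1, 2, 4, 5, 4, 1, 2],
}

# Greedy grouping: an item joins a cluster if it correlates > 0.6
# with the cluster's first member; otherwise it starts a new cluster.
clusters = []
for name, resp in items.items():
    for cluster in clusters:
        if pearson(resp, items[cluster[0]]) > 0.6:
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # → [['study_groups', 'summer_bridge'], ['community', 'cultural_acts']]
```

Items that move together across respondents end up in the same cluster, mirroring how the Meyerhoff items separated into distinct scales. Group comparisons like the one just described call for a different tool, analysis of variance, which Maton turned to next.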
The technique used in this comparison, known as analysis of variance, “is a way to look at group differences,” said Maton. “[It] allows you to say whether the differences in the mean levels of the groups are great enough, given the amount of variation within each group, so that statistically you would say that, ‘probably, the difference is not due to chance.’”

Analysis of variance can be used when the measures are continuous, but in many cases the variables being studied take discrete values, such as whether a student does or does not go on to graduate school, or whether a student graduates or not. For example, Maton’s research group has compared the Meyerhoff students to students who were accepted into the program but decided to attend a different institution instead, using categorical outcome variables, such as whether the students graduated in a STEM discipline. Maton’s group found that a much higher percentage of the Meyerhoff students graduated after four or five years in a STEM discipline—83 percent—compared with 46 percent of those who declined the offer. In this case, the researchers used a technique called chi-square analysis to determine whether this difference is statistically significant. They also were able to look at other factors that might have contributed to this difference besides the students’ experiences in the program, such as differences in grade point averages, parental socioeconomic class, and so on. They then used an analysis of covariance to study whether some of these factors might have been confounding variables. (For example, Maton pointed out, analysis of covariance could identify temperature as the confounding variable in a study relating ice cream sales to homicide rates, as described by Hedges in an earlier presentation.)

Another important research focus is predictors of outcome, which measures whether a given variable contributes to an outcome. Multiple linear regression analysis is used to examine which predictor variables contribute to a continuous outcome, whereas logistic regression analysis is used to examine the relationship between predictor variables and categorical outcomes. For example, a logistic regression analysis showed that African American Meyerhoff students, who had lower average SAT scores, were just as likely to gain entrance into a doctoral program as white and Asian American students. “Even though they come in with lower standardized test scores, they are just as likely to go into a doctoral program as these other students,” said Maton. “This is a really good finding for the Meyerhoff program.”

Maton also emphasized the importance of bringing outside experts into the research team.
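The chi-square test Maton’s group applied to the graduation comparison can be computed by hand. The 2x2 counts below are hypothetical, scaled only to mirror the reported 83 percent and 46 percent rates (the chapter does not give the actual group sizes):

```python
# Hand-computed chi-square test of independence on a 2x2 table,
# in the spirit of the Meyerhoff comparison. Counts are hypothetical:
# 100 students per group, matching the 83% vs. 46% STEM graduation rates.
table = {
    "meyerhoff": {"stem_degree": 83, "no_stem_degree": 17},
    "declined":  {"stem_degree": 46, "no_stem_degree": 54},
}

groups = list(table)
outcomes = list(table["meyerhoff"])
n = sum(sum(row.values()) for row in table.values())

# Compare each observed cell to the count expected if graduation
# were independent of group membership.
chi2 = 0.0
for g in groups:
    for o in outcomes:
        row_total = sum(table[g].values())
        col_total = sum(table[x][o] for x in groups)
        expected = row_total * col_total / n
        chi2 += (table[g][o] - expected) ** 2 / expected

print(round(chi2, 1))  # → 29.9
```

A statistic of about 30 far exceeds 3.84, the 5 percent critical value for one degree of freedom, so a difference this lopsided would be judged statistically significant. With covariates, or with several predictors of a categorical outcome, one would move on to analysis of covariance or logistic regression, as Maton described.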
“The major take-home message is … if you don’t have the expertise yourself, you want to bring in consultants to work with you. [And] you want to make sure they understand your project enough and your goals enough so that they can provide useful and helpful consultation,” he said.

In addition, Maton stressed the value of combining quantitative data with qualitative data: “I am a firm believer in combining the two.” For one thing, the qualitative data help support the quantitative data. With the Meyerhoff program, the combination “helps me believe that this program is affecting these youth,” said Maton. “When we talk to them, when we do focus groups with them, when we do ethnographic observation with them, you can see that there is something going on, that these students are developing an identity as African American [science] students, that they want to go out and do something in the world in terms of STEM…. when they talk about the Meyerhoff program, they talk about the fact that they
feel supported, that they feel inspired, and that they feel incredibly challenged, but also incredibly supported by the program.”

Qualitative data analysis does not consist simply of reading through the transcripts of a set of interviews or focus groups. Through a very labor-intensive process, codes are developed to analyze the transcript contents. For example, the codes used in a study of educational interventions might relate to the mention of self-efficacy beliefs, a sense of belonging, mentoring, or the presence of role models. The interviews are analyzed, coded, and rechecked. Themes are developed that connect the codes, including negative cases, where researchers scour the data set for counterexamples. Software packages bring power to these analyses, because these packages can systematically pull up material that is coded in particular ways.

“It is an iterative process where you are recoding, reworking your codes, reworking your themes,” said Maton. “In the ideal world, you share your themes with the participants who took part in the interviews and took part in the focus groups. You get some checks from [them and] others, and you always have multiple people working on the project and providing different perspectives. So you can do it more systematically rather than less systematically, but it should be done in a team effort with multiple people involved and multiple ways to check the data.”
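The mechanical part of the coding workflow Maton describes, tagging excerpts with codes and systematically pulling up everything under a given code, can be sketched in a few lines. The excerpts and code list below are invented, and real projects would rely on human coders and dedicated qualitative-analysis software:

```python
from collections import defaultdict

# Hypothetical coded excerpts: (transcript excerpt, codes assigned
# by a human coder during the iterative coding process).
coded_excerpts = [
    ("My mentor pushed me to present at a conference.",
     ["mentoring", "self-efficacy"]),
    ("I finally felt like I belonged in the lab.",
     ["belonging"]),
    ("Seeing a professor who looked like me mattered.",
     ["role_models", "belonging"]),
    ("I wasn't sure I could handle the coursework.",
     ["self-efficacy"]),
]

# Index excerpts by code so material coded a particular way can be
# retrieved systematically, as qualitative-analysis software does.
by_code = defaultdict(list)
for excerpt, codes in coded_excerpts:
    for code in codes:
        by_code[code].append(excerpt)

print(sorted((code, len(texts)) for code, texts in by_code.items()))
```

From an index like this, a team can rework codes, search for negative cases under a theme, and share the retrieved excerpts with participants for checking, which is the iterative loop Maton outlines.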