• What are the relevant lessons learned and best practices for improving computational thinking in K-12 education?
• What are examples of computational thinking and how, if at all, does computational thinking vary by discipline at the K-12 level?
• What exposures and experiences contribute to developing computational thinking in the disciplines?
• How do computers and programming fit into computational thinking?
• What are plausible paths and activities for teaching the most important computational thinking concepts?
Robert Tinker, Concord Consortium
Mitch Resnick, Massachusetts Institute of Technology
John Jungck, Beloit College, BioQUEST
Idit Caperton, World Wide Workshop
Committee respondent: Uri Wilensky
The Concord Consortium is a non-profit research and technology development group that focuses on applying technology to improve learning at different grades. Robert Tinker, the founder of the Concord Consortium, argued that computational efforts in K-12 should be integrated around a science focus rather than a focus on either mathematics or engineering.
Elaborating on this argument, he suggested that computational thinking offers a new way of finding out about the world, one that is important for citizenship, for future work, and for professionals of all types. Nevertheless, he believes that neither the computer science community nor the education community has yet clearly articulated the essence of computational thinking. As usually presented, computational thinking involves abstractions upon abstractions, which are difficult to make concrete.
At the core of computational thinking, Tinker argued, is the ability to break big problems into smaller problems until one can automate the solutions of those smaller problems for rapid response. (It is for this reason that Tinker believes that engineering is not an appropriate integrating focus for attempts to teach computational thinking—engineering taught at the K-12 level is not particularly amenable to decomposition.) This core, he argued, indicates a possible route for introducing computational thinking into K-12 education.
Tinker’s view is that science is the right focus because modern science often uses computational models that are based on scientific principles and whose use depends on visualizations. Understanding these models requires computational thinking—scientific models and visualizations allow students to visualize the computations that are going on in near real time. Tinker noted that students learn better by seeing models and interacting with them, and that by exploring the model in a spirit of inquiry, they learn about the science in the model in much the same way that scientists learn about nature by using the scientific method. He argued that students can learn complicated, deep concepts this way rather than through the more “off-putting” and often confusing approach of formalistic equations.
Tinker proposed an approach across the K-12 curriculum that uses simple models of scientific concepts such as temperature, light, and force to teach computational thinking. A progression of concepts could start in early elementary grades with basic ideas such as “there are numbers associated with things you observe.” (See Figure 2.1.) In later grades, students might manipulate and refine models to reflect more sophisticated understanding of the concepts represented in the models. Finally, in high
school, a student might be able to select, modify, and apply both hardware and software models as a key part of an extended investigation.
Tinker suggested that the following learning progression could fit into the K-12 curriculum, improve science teaching and learning, and introduce important aspects of computational thinking:
1. There are numeric values associated with objects and their interactions.
2. These values change over time.
3. These changes can be modeled.
4. Models involve lots of simple steps defined by simple rules (e.g., the molecular dance).
5. Models can be tested to find their range of applicability.
6. You can make models.
7. Many other applications of computers share the same features.
When asked whether students perform better when learning through computational modeling and visualization as opposed to a more traditional approach, he replied that such a distinction is not particularly important. Rather than worry whether one method is better than the other, Tinker pointed out that it is a good outcome if a teacher has an additional tool in his or her arsenal to teach a complicated concept.
Tinker noted that because students begin as concrete thinkers, it remains a challenge to identify the age or grade level at which children can handle abstraction. As an example, he said that although he has worked with second graders by hooking up a probe to measure temperature, it is only in fourth grade that students reliably demonstrate learning and comprehension with such methods.
According to Tinker, students involved in a very tightly packed K-12 curriculum do not have the time to master programming in order to manipulate models. Rather, he recommends a programming environment such as NetLogo or AgentSheets, partially populated with general tools that still need interconnection and “tuning”; such environments were designed to focus users on the concepts represented rather than on the details of programming. Another option is to use an existing piece of software in which the student can manipulate important parameters.
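A flavor of the pre-built, tunable model Tinker recommends can be sketched in ordinary Python (this fragment is an illustration of the idea, not NetLogo or AgentSheets themselves); a single temperature parameter is exposed so that a student can explore the “molecular dance” without touching the simulation logic:

```python
import random

def molecular_dance(n_particles=50, temperature=1.0, steps=100, seed=42):
    """Minimal 2-D random walk: each particle repeatedly takes a small
    random step, and a single 'temperature' knob scales the step size."""
    rng = random.Random(seed)
    positions = [(0.0, 0.0)] * n_particles
    for _ in range(steps):
        positions = [(x + temperature * rng.uniform(-1, 1),
                      y + temperature * rng.uniform(-1, 1))
                     for x, y in positions]
    return positions

def mean_square_displacement(positions):
    return sum(x * x + y * y for x, y in positions) / len(positions)

# Raising the temperature knob makes the particles wander farther --
# the kind of conjecture a student can form and test by adjusting one
# parameter rather than writing the model from scratch.
cold = mean_square_displacement(molecular_dance(temperature=0.5))
hot = mean_square_displacement(molecular_dance(temperature=2.0))
```

Because the same random seed is reused, the two runs differ only in the temperature setting, so the comparison isolates the one parameter being explored.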
Mitch Resnick of the MIT Media Lab said that computational thinkers must be able to use computational media to create, build, and invent solutions to problems. He framed this approach in terms of students being able to express themselves and their ideas in computational terms, and
emphasized that indeed this should be part of the motivation to learn computational thinking. “When young people learn about language, we don’t just teach them linguistics or grammar; we let them express themselves. We want a similar thing with computational thinking.”
Moreover, he argued, most people work better on things they care about and that are meaningful to them, and so embedding the study of abstraction in concrete activity helps to make it meaningful and understandable.
Resnick pointed out that for students to express themselves meaningfully with computational media, they need to learn new concepts as well as develop new capacities. He argued that computer science classes often overemphasize computational thinking concepts (such as recursion) at the expense of helping students develop computational thinking capacities for design and social cooperation. Computational thinking concepts include conditionals, processes, synchronization, and recursion. Design capacities deal with skills such as prototyping, abstracting, modularizing, and debugging. Social-cooperative capacities include sharing, collaborating, remixing, and crowd-sourcing. These social-cooperative capacities are becoming increasingly important as new computing and networking technologies open up new possibilities for widespread cooperation.
Resnick’s computational environment of choice for supporting computational expression is Scratch. The MIT Media Lab developed Scratch and a companion online community to help engage people in creative learning experiences and to support the development of computational thinking. Scratch is a graphical programming language, giving the user the ability to build programs by snapping together graphical blocks that control the actions of different dynamic actors on a screen. (Such an approach to program construction enables users to avoid issues of syntax and other details that often distract users from the critical processes of designing, creating, and inventing. Resnick believes this construction process serves an important grounding function for learning abstract computational concepts, making concepts more concrete and understandable.) Scratch also facilitates social cooperation by making it very easy for a user to share his or her design with others for comment and feedback. (See Figure 2.2.)
Resnick provided several examples emphasizing the role of expression through construction and social cooperation from one particular member of the Scratch online community who goes by the username MyRedNeptune. MyRedNeptune, a young student from Moscow, joined the Scratch online community shortly after it went live in 2007. “One of the first projects that she created,” Resnick said, “was a type of interactive greeting card for the holidays.” Each time a person clicked
on one of the reindeer, it would begin playing “We Wish You a Merry Christmas” on a musical instrument in concert with the other animated reindeer. Creating the card required modularization and synchronization, as well as a number of core computational concepts.
Next, MyRedNeptune began offering her consulting services to develop animated characters upon request. Another community member requested that she develop a cheetah for a project. Resnick continued, “She went to the National Geographic website and she found a video of a cheetah. She used that to help guide the graphic of an animation that she developed, and then someone else used her graphic and integrated it into her project.”
When yet another community member requested that she show how she developed her animations, she began to develop tutorials in Scratch on how to program animated characters. One of her first tutorials was on how to animate a bird to make its wings flap back and forth. Later she was asked to participate in an international collaboration with five or six other kids in three different countries. Working together, they developed a type of adventure video game, with each child working on different parts of the activity. Resnick noted: “I think you can see from these examples how MyRedNeptune developed as a computational thinker, learning to think creatively, reason systematically, and work collaboratively.”
Scratch is used both inside and outside formal school curricula. Initially used in homes, after-school centers, community centers, and museums, it is now moving into the schools and is also being used today to teach basic concepts in introductory computer science classes at a number of universities.
Resnick shared that “one thing we’ve seen is that different kids have different trajectories. Some will spend a lot of time continuing to work on the same types of projects, over and over. You might think that they are stuck, but there’s a lot of things happening in their minds, and suddenly they’ll start working on new types of projects and ideas.”
Resnick and his colleagues are working on many new initiatives to support the development of computational thinking through Scratch, including an online community (called ScratchEd) specifically for teachers who are helping students learn with Scratch.
John Jungck and his collaborators founded the BioQUEST Curriculum Consortium 24 years ago to bring computation and mathematics into the undergraduate biology curriculum. Jungck noted that although there are many reasons for using computation in biology education, the rationale he presented at the workshop focused on the power of visualization in a biological context. He noted the evolution of paradigms for scientific investigation from empirical (experiment, observation) to theoretical (models, theoretical generalizations) to computational (simulation) to data exploration and e-science (collection of data on a massive scale: exploration facilitated by theory, mathematics, statistics, and computer science). In this context, biology education needs to provide students with ways of understanding biological data—environmental data and genomic data, for example—that are multivariate, multidimensional, and multicausal and that exist at multiple scales in enormous volume (terabytes of data per day).
The philosophy of BioQUEST rests on three pillars:
• Students take an active role in posing problems to examine, much as a scientist has to learn to pose good problems. Good problems must be appealing, have significance, and be feasible to address.
• Students solve problems iteratively. They must learn to appreciate the nature of scientific hypotheses as answers as well as to develop heuristics for achieving closure to scientific problems.
• Students must persuade their peers that a solution is useful and/or valid, a process that mirrors the role of publication and extensive peer review in biological research.
The primary challenge for learning in accordance with this philosophy is that in focusing on the student as problem-poser, teachers lose much of the control they traditionally have over the learning process. Students engaging in self-directed collaborative processes may make some teachers uncomfortable. Furthermore, students in this environment may have more technical skill than their teachers, and so peer review from other classmates may be more important than teacher feedback as far as advancing learning.
Jungck briefly described a number of BioQUEST projects. For example, one project sought to develop student facility with the idea of biological modeling with equations. For this purpose, the Biological ESTEEM project (ESTEEM stands for Excel Simulations and Tools for Exploratory Experiential Mathematics) seeks to provide students with a mathematical vocabulary for describing common modeling concepts (e.g., linear, exponential, and logistic growth1).
Another BioQUEST project (BEDROCK) focuses on bioinformatics. The BEDROCK project requires students to use a supercomputer tool
called Biology Workbench,2 which allows biologists to search many popular protein and nucleic acid sequence databases. Database searching is integrated with access to a wide variety of analysis and modeling tools. Students can align multiple sequences of a particular gene from different organisms onto one three-dimensional structure and see the evolutionary conservation involved; they can thus relate the comparative biology of sequences to structure, function, and phylogeny.
Yet another project is BIRDD (Beagle Investigations Return with Darwinian Data), whose goal is to provide a variety of resources related to evolutionary research. Labs are rare in courses dealing with evolution, largely because evolutionary phenomena involve temporal and geographic scales that make it difficult for instructors to develop labs comparable to those in biochemistry, physiology, or behavior. BIRDD addresses this problem by providing raw data (e.g., bird songs, sequence data, rainfall, breeding sites, and so on) and pedagogical ideas to help instructors structure appropriate pedagogical experiences for their students. BIRDD helps students generate questions and look at, for instance, whether character displacement happens when the species co-occur or when they inhabit different islands.
To illustrate the special relationship between biology on one hand and mathematics and computation on the other, Jungck noted 10 equations that have driven substantial amounts of biological research and for which numerous educational materials have been developed:3
1. Fisher’s fundamental theorem of natural selection,
2. Cormack-Hounsfield computer assisted tomography,
3. Genetic mapping (units = morgans; the Haldane function),
4. Fitch-Margoliash little maximum parsimony algorithm (Penny and Hendy—Molecular Phylogenetic Trees—Bioinformatics),
5. Lotka-Volterra interspecific competition logistic equations,
6. Hodgkin-Huxley equations for neural axon membrane potential,
7. Michaelis-Menten equation for enzyme kinetics (Jacob and Monod),
8. Allometry (e.g., MacArthur-Wilson species area law and conservation),
9. Hypothesis testing (e.g., Luria-Delbrück fluctuation test), and
10. Crick-Griffith-Orgel comma-free coding theory.
2 BEDROCK (Bioinformatics Education Dissemination: Reaching Out, Connecting, and Knitting-together), website, BioQUEST Curriculum Consortium, http://bioquest.org/bedrock/about.php. Last accessed February 7, 2011.
3 John Jungck, 1997, “Ten Equations That Changed Biology: Mathematics in Problem-Solving Biology Curricula,” Bioscene 23(1):11-36.
He also noted that the typical biology textbook contains only a handful of equations, and even those are linear equations, and expressed his surprise that biological visualizations, important as they are to the way biologists think about the world, are not accompanied by the tools needed to interpret different kinds of multivariate, multidimensional biological data.
Finally, Jungck discussed the Visible Human Explorer (VHE). According to the VHE website,4 the VHE is an experimental user interface for browsing the National Library of Medicine’s (NLM’s) Visible Human data set, which is based on two digitized cadavers. The interface allows users to browse a miniature Visible Human volume, locate images of interest, and automatically retrieve desired full-resolution images from the NLM archive.
Jungck concluded by noting that computers and computation have transformed biology. He noted a quote from Michael Levitt (a structural biologist at Stanford) that “computers have changed biology forever, even if most biologists don’t yet realize it.” Educationally, he stressed the work of di Sessa, Parnafes, and others who emphasize the importance of engaging students in constructing, revising, inventing, inspecting, critiquing, and using rich visualizations for promoting conceptual understanding.
Idit Caperton described Globaloria, a transformative social media learning network, as a platform with a comprehensive hybrid (online/in-class) course for playing and making games. It includes a customizable curriculum, community-developed resources, tools, tutorials, and expert support. Students and educators learn how to create their own web games, produce wikis, publish rich-media blogs, and openly share and exchange ideas, game code, questions, and progress using the latest learning methods and digital communication technology. Globaloria is a project-based learning environment for stimulating computational creativity and inventiveness in youth and educators as necessary skills for the 21st century. Computational projects are built around a range of topics, such as health, climate, alternative energy, civics, mathematics, biology, social studies, and literature.
The World Wide Workshop’s innovative R&D and pedagogical approaches to platforms and tools for cultivating computational thinking and computational inventiveness have roots in Caperton’s MIT and Harvard research, and in educational theories about the value of project-based, multidisciplinary, innovative and creative learning (of any subject) through software design and programming.5
4 Visible Human Explorer, website, Human Computer Interaction Lab, University of Maryland, http://www.cs.umd.edu/hcil/visible-human/vhe.shtml. Last accessed February 7, 2011.
Caperton also described Globaloria as a customizable textbook comprising three main units. An introductory or “getting started” unit provides students with the opportunity to establish their own project spaces on the wiki network and to review existing games’ operation and their codes. A following unit is “game design,” in which students design an original game about a complex topic (in science, math, health, civics) and a social issue that matters to them. Students come up with an idea, assemble teams, do research, build and videotape their paper prototypes, and construct a concept and a demonstration that they present, both physically and online via web conferencing. Using Flash text and drawing and animation techniques, they program an interactive demonstration of their game concepts. A third unit is “game development,” in which students develop their game concepts and demonstrations into a complete, interactive game. Each unit contains a structured set of learning topics, as well as projects and assignments structured to help students create critical parts for their own original game.
Globaloria seeks to impart to students six contemporary learning abilities:
• The ability to imagine, design, prototype, and program an educational game, wiki, or simulation;
• The ability to use project management skills in developing programmable wiki systems in a Web 2.0 environment;
• The ability to produce animated media and to program, publish, and distribute purposeful interactive digital media in social networks;
• The ability to learn in a social constructionist manner and to participate actively in public exchanges of ideas and artifacts;
• The ability to undertake information-based learning, search, and exploration as they relate to the abilities above; and
• The ability to surf websites and use web applications thoughtfully as they relate to the earlier abilities.
Caperton argued that these abilities go beyond typical media literacy skills, since they emphasize a bundle of complex and sophisticated constructionist digital literacies and involve longer-term engagement (students are required to use Globaloria daily, over two semesters, for a minimum of 100-150 hours6).
The Globaloria approach emphasizes constructionist collaboration
5 The canonical examples of such research are Idit Caperton, 1991, Children Designers, and Idit Caperton and Seymour Papert, 1991, Constructionism, both published by Ablex, Norwood, New Jersey.
6 Caperton recommended repeating the use of Globaloria year after year for greater effects on computational thinking in learners.
within a transparent community. Participants in the community—teachers, students, staff, and game teams—maintain public blogs as design journals, share resources, and publish completed games on the community wiki. They can also submit created games for competitions or for publishing on the school’s Globaloria network.
Caperton suggested that it is possible to learn any subject and to master complex topics or social issues by creating functional, representational, educational multimodal computer games involving that subject’s content. She provided “10 design principles for implementing ‘The Globaloria Way.’” For example, developing educational games requires students to spend significant time engaging daily with personally chosen projects involving open-ended and creative design tasks. A transparent and collaborative studio environment facilitates the sharing of work and provides many opportunities for social expression and discussion about game projects. Students thus learn through four modes simultaneously: (1) through design and teaching, (2) through peer-to-peer interactions, (3) through co-learning with teachers (and also from watching the teachers themselves learn), and (4) from online research and consultation with other experts (just-in-time learning) via pre-scheduled web conferencing and a help desk. (See Figure 2.3.)
The basic technology underlying the Globaloria platform is open-source MediaWiki with customized MediaWiki extensions, PHP, MySQL, Tumblr, Blogger, and multiple Google tools. Students learn to program their games much as professionals in the real world do, using Adobe Flash ActionScript. The World Wide Workshop Foundation’s team (creators of Globaloria) chose Flash for students’ programming for a number of reasons, including:
• They themselves are expert developers in Flash;
• Flash provides a wide variety of tools, such as interfaces and video tutorials, to support users and thus can support a range of skill levels from novice to professional;
• Flash is present on many websites and in simulations and media devices;
• Flash is an industry professional standard in game development and multimedia programming, and so proficiency in Flash is likely to help provide students with internships and job opportunities in the future.
Finally, Caperton described research she and colleagues conducted on the impact of implementing models of Globaloria for fostering computational thinking and inventiveness among low-income rural students and low-income minority urban schoolchildren: (1) Model 1 in 45 schools throughout the public school system in 20 counties in the state of West
Virginia, where 1,300 students in rural middle schools, high schools, community colleges, and alternative education institutions participated with 55 educators in 2010 for credit and a grade; and (2) Model 2 within a charter middle school system in East Austin, Texas, where every student in that school took Globaloria once a day for 90 minutes for the entire school year. She provided an overview of selected research results7 and shared video case studies.8 Caperton argued that these were powerful demonstrations of plausible paths and activities for teaching computational thinking concepts to low-income rural and urban students of underserved communities.
• What are the relevant lessons learned and best practices for improving computational thinking in K-12 education?
• What are examples of computational thinking and how, if at all, does computational thinking vary by discipline at the K-12 level?
• What exposures and experiences contribute to developing computational thinking in the disciplines?
• How do computers and programming fit into computational thinking?
• What are plausible paths and activities for teaching the most important computational thinking concepts?
Robert Panoff, Shodor Education Foundation
Stephen Uzzo, New York Hall of Science
Jill Denner, Education, Training, Research Associates
Committee respondent: Yasmin Kafai
Robert M. Panoff, founder and executive director of the Shodor Education Foundation, is a proponent of teaching computational thinking through computational science. At the same time, he stresses the importance of certain metacognitive skills—in particular, being able to know that something learned (e.g., through computation) is right. Panoff described quantitative reasoning and multiscale modeling as components of computational thinking.
8 For more information see www.worldwideworkshop.org/programs/globaloria/vftf. Last accessed February 7, 2011.
Quantitative reasoning is not necessarily computer-related, but it is essential for making sense of what a computer produces. An impediment to quantitative reasoning noted by Panoff is that many individuals have inconsistent and faulty intuition about quantity. For example, he pointed out that many people believe that two-fifths (2/5) is a small number, whereas 40 percent feels like a large number to them. He said that one metropolitan police department assigned more officers to patrols on Friday and Saturday night because a careful analysis of the data had shown that just under 30 percent of the car break-ins occurred on either a Friday or a Saturday night. But since 2/7 is about 29 percent, the frequency of car break-ins was actually consistent across weekdays and weekends. (Panoff further noted that engaging in computational thinking is a partial remedy to misconceptions about quantity.)
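The underlying arithmetic is a one-line check:

```python
# If break-ins were spread uniformly over the week, two nights out of
# seven (Friday and Saturday) would account for 2/7 of them -- just
# under 30 percent, the very figure the department's analysis found.
weekend_share = 2 / 7
print(f"{weekend_share:.1%}")  # prints 28.6%
```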
Panoff described an exploration based on quantitative reasoning that addressed computational thinking and algorithmic thinking. Consider the number given by 355/113, and then evaluate the expression 355/113 – 101/113 – 101/113 – 101/113 – 52/113. Since 3 × 101 + 52 = 355, this quantity should equal zero, but it does not when evaluated on a calculator. Panoff noted that most students realize that “something’s not right” when they are confronted with this “identity,” and he maintained that such a realization is the beginning of a serious exploration of how numbers are represented in a computer.
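The effect is easy to reproduce by simulating a fixed-precision calculator; the 10-digit precision below is an assumption (Panoff did not specify a model of calculator), chosen because pocket calculators typically carry about that many digits:

```python
from decimal import Decimal, getcontext

# Round every intermediate result to 10 significant digits, as a
# 10-digit pocket calculator would.
getcontext().prec = 10

result = (Decimal(355) / 113
          - Decimal(101) / 113
          - Decimal(101) / 113
          - Decimal(101) / 113
          - Decimal(52) / 113)

# Algebraically (355 - 3 * 101 - 52) / 113 = 0, yet the rounded
# quotients leave a tiny nonzero residue behind.
print(result)  # a small nonzero value (here, -9E-10)
```

Tracking down where that residue comes from is exactly the entry point Panoff described into how numbers are represented in a machine.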
A second example involved calculators. A person who types the expression “3 + 2 × 6” into Google will obtain the answer 15, whereas the same expression typed into some calculators (such as the Accessories calculator of Windows) will yield the answer 30. Understanding why such a difference exists is challenging to some students. Another illustration is calculating the sum of A/B + A/C + A/D + A/E on a calculator. A student can perform each of these operations individually, or she can factor out A to obtain A × (1/B + 1/C + 1/D + 1/E). Again, these two sums are identical algebraically, but the algorithm (i.e., the specific steps to be taken in a particular sequence) is different and simpler in the second case than in the first.
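Both calculator behaviors can be imitated in a few lines; `immediate_calc` below is a hypothetical stand-in for a calculator that applies each operation as it is entered, left to right:

```python
# Standard precedence: multiplication binds tighter, so 3 + 2 * 6 = 15,
# which is what Google's calculator returns.
assert 3 + 2 * 6 == 15

def immediate_calc(tokens):
    """Evaluate [value, op, value, op, ...] strictly left to right,
    ignoring precedence, like a simple immediate-execution calculator."""
    result = tokens[0]
    for op, value in zip(tokens[1::2], tokens[2::2]):
        result = result + value if op == "+" else result * value
    return result

# The same keystrokes on such a calculator: (3 + 2) * 6 = 30.
assert immediate_calc([3, "+", 2, "*", 6]) == 30

# Panoff's second point: algebraically identical expressions can imply
# different algorithms.  The factored form needs one multiplication by A
# instead of four separate divisions of A.
A, B, C, D, E = 7.0, 2.0, 3.0, 4.0, 5.0
direct = A / B + A / C + A / D + A / E
factored = A * (1 / B + 1 / C + 1 / D + 1 / E)
assert abs(direct - factored) < 1e-9
```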
Panoff’s third example requires understanding of orders of magnitude. He illustrated the point by asking what a student needs to know in order to answer the question “How much bigger is Earth than Pluto?” An obvious way to approach this problem is to perform Internet searches for the mass of Earth and the mass of Pluto. But an Internet search for the mass of Earth generates 20 or 30 different values, which have a spread of several percent. How does one know which value to use?
Here context matters—why is one asking the question about relative sizes? If the question relates to how big an object has to be in order to be a planet, then in the absence of a formal definition of planet, one only needs to know that the ratio MEarth/MPluto is on the order of a few hundred—and a difference of “several percent” is simply irrelevant to knowing which value of MEarth to use.
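The point about irrelevant precision can be made concrete with a short sketch; the Pluto mass is a standard reference value, and the three Earth values are a hypothetical stand-in for the spread an Internet search returns:

```python
# Mass of Pluto, kilograms (standard reference value).
M_PLUTO = 1.303e22

# Published values of Earth's mass differ by a percent or so, yet each
# supports the same order-of-magnitude answer.
for m_earth in (5.95e24, 5.97e24, 6.00e24):
    ratio = m_earth / M_PLUTO
    assert 300 < ratio < 600   # "a few hundred," whichever value is used
```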
In Panoff’s fourth example, he pointed out that many people (in this case, medical residents) do not distinguish between “most of the time” and “more often than anything else.” For example, a physician may say to the patient that “most of the time, if kidney cancer comes back, it goes to the lungs first.” In fact, kidney cancer goes to the lungs 28 percent of the time, which is more often than anyplace else, but 72 percent of the time (i.e., most of the time), it goes somewhere else.
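The confusion is between the mode of a distribution and a majority. In the hypothetical distribution below, only the 28 percent lung figure comes from Panoff's example; the remaining categories are invented to make the percentages sum to one:

```python
# Where kidney cancer recurs first (illustrative figures; only the
# lung percentage is from Panoff's example).
recurrence_sites = {"lungs": 0.28, "bone": 0.20, "liver": 0.18,
                    "brain": 0.14, "other": 0.20}

mode_site = max(recurrence_sites, key=recurrence_sites.get)
assert mode_site == "lungs"                # the most common single site...
assert recurrence_sites[mode_site] < 0.5   # ...yet not "most of the time"
```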
As for multiscale modeling, Panoff argued that technology enables one to re-present data and relationships (noting that one meaning of representation is to re-present). He illustrated this by considering the Lennard-Jones potential function:
V(r) = k × ((S/r)^12 – (S/r)^6).
When r = S, the potential V is equal to zero, and so r = S defines the point at which the function crosses the horizontal axis. Changing the value of the parameter S thus has two effects on the shape of the curve: it moves the crossing point and also changes the width of the potential well. The second effect is most easily seen by graphing the function interactively while varying the value of S.
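Both effects can be verified numerically. For this form of the potential, setting dV/dr = 0 gives a minimum at r = 2^(1/6) × S with depth -k/4, so rescaling S shifts the crossing point and stretches the well together:

```python
def V(r, S=1.0, k=1.0):
    """Lennard-Jones-style potential in the form Panoff used."""
    return k * ((S / r) ** 12 - (S / r) ** 6)

S = 1.0
# Effect 1: the curve crosses zero exactly at r = S.
assert V(S, S=S) == 0.0

# Effect 2: the well minimum sits at r = 2**(1/6) * S with depth -k/4,
# so changing S also rescales the width of the potential well.
r_min = 2 ** (1 / 6) * S
assert abs(V(r_min, S=S) - (-0.25)) < 1e-12
assert V(1.5 * S, S=S) > V(r_min, S=S)
```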
As for pedagogy, Panoff’s programs entail a learning progression in which students first run models, then modify models, and, in a culminating step, write their own models. For example, a student might first run a model, manipulating its parameters with a slider bar (or two or three) to explore what happens and to make conjectures about the effect of changing each parameter. She might then modify the model, for instance by changing the number of slider bars it exposes. Finally, she would write a model of her own that uses slider bars to change parameters.
Pedagogically useful computational models are accurately implemented and provide appropriate data visualization tools. They are controlled by the student user and are honestly described (i.e., the description includes information about the flaws and limitations of the model); other students, faculty, and the scientific community at large collaborate with unit authors to develop the models. Last, they are coded so that another party (students, in particular) can extend them. The content provided in the models is based on common texts and national standards.
Stephen Uzzo from the New York Hall of Science (NYSCI)9 talked at the end of the workshop about the transformational effect that data-rich science has on computational thinking and about some ways to better prepare future scientists. He noted that data overload is a central theme of 21st century science. Data are accumulated in enormous quantities for biomedical, environmental, and social science applications, enabled by the rapid growth in computing power and sensing technologies.10 Highlighting this data overload, Uzzo pointed to statistics such as these: some science disciplines produce more than 40,000 papers a month, and computer users worldwide generate enough digital data every 15 minutes to fill the Library of Congress.11
In the face of such overload, Uzzo suggested, the traditional method of science—modeling natural phenomena and then validating those models against data gathered from nature—is inadequate. This traditional method assumes an environment in which data are relatively scarce, whereas much of science today is characterized by data in volume. A new approach, “e-science,” is needed in this environment.12 E-science focuses on managing, modeling, and making discoveries in massive amounts of captured data; seeking patterns; and identifying dynamics, influences, and complex and emergent behavior in whole systems.
Uzzo further argued that the computational thinking needed to engage in e-science includes a number of often-neglected concepts:
• Complexity. Practitioners need to know when the e-science paradigm for doing scientific research is (and is not) more appropriate than other paradigms of research (theory, experiment).
• Data visualization. Because of the large volumes of data involved in e-science research, visualization (and human interpretation of the resulting images) may be a more effective method for detecting and identifying patterns than are traditional methods of reductive data analysis. Practitioners may need to develop new visual metaphors that are better for revealing patterns in complex data and techniques for displaying and comparing large amounts of data.

9 Hall of Science activities entail developing exhibitions and educational programs for STEM learning, and evaluating them for pedagogical efficacy in conveying the relevant concepts to the public and to K-12 students.
10 Input technologies such as efficient, small, and cheap sensors; automated logging systems; high-resolution remote sensing from satellites; robotics systems for DNA sequencing; protein mass spectrometry; and functional magnetic resonance imaging (fMRI) are just a few examples.
11 Manish Parashar, 2009, “Transformation of Science Through Cyberinfrastructure: Keynote Address,” presentation at Open Grid Forum, Banff, Alberta, Canada, October 14, 2009. Available at http://www.ogf.org/OGF27/materials/1816/parashar-summit-09.pdf. Last accessed February 7, 2011.
12 Tony Hey, Stewart Tansley, and Kristin Tolle, eds., 2009, The Fourth Paradigm: Data Intensive Scientific Discovery. Redmond, Wash.: Microsoft Research.
• Network science. In e-science, theoretical generalizations may be based on network science, the study of properties and behaviors of complex, dynamic systems of interaction. One often sees similar network functions and structures emerge across a variety of different problem domains.
• Data interoperability, data sharing, and other collaboration skills. Practitioners of e-science must understand many kinds of shared data types and the technical issues in data sharing and data interoperability that inevitably come up in collaborating with other practitioners and across divergent fields of study.
• Using semantics for creating more effective data structures. E-science places a premium on the ability to find general patterns in phenomena and then to identify similar instantiations in examining other phenomena. For such purposes, the use of Boolean logic for combining and parsing large amounts of data is insufficient. Searches based on Boolean logic are also ineffective with large amounts of data because both false positives and false negatives are problematic. Although search engines may work well for fact-finding, they do not serve well to identify patterns, trends, or outliers. Perhaps more importantly, the context in which a piece of knowledge was created or can be used may be missing, making intelligent data selection, prioritization, and quality judgments extremely difficult.
Semantic approaches are needed to deal with data at large scale. (Biomedicine is a canonical example of a domain in which this is true.) A semantic web uses triples instead of search terms. A triple consists of two ideas (the first two elements of the triple) that are linked through a term describing how the ideas are related (the third element).
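The triple idea can be made concrete in a few lines. The data and the query helper below are illustrative, not drawn from any real triple store or biomedical ontology:

```python
# A minimal sketch of the semantic triple: (subject, predicate, object),
# i.e., two ideas linked by a term describing how they are related.
triples = [
    ("aspirin", "inhibits", "COX-2"),
    ("ibuprofen", "inhibits", "COX-2"),
    ("COX-2", "involved_in", "inflammation"),
]

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What inhibits COX-2?" -- a relationship query rather than a
# Boolean keyword search, which is the distinction Uzzo draws.
print(match(p="inhibits", o="COX-2"))
```

Chaining such queries (what inhibits COX-2, and what is COX-2 involved in?) is what lets semantic approaches surface patterns and context that keyword search misses.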
E-science requires a cyberinfrastructure capable of processing data in prodigious quantity and of making large data sets available to researchers reliably and promptly. It must facilitate interoperability between applications used by researchers, and it must provide easy-to-use tools for processing, manipulating, and combining multiple data types. In discussion, Al Aho noted that “the software world of today is largely a Tower of Babel with lots of incompatible infrastructures and a lot of expense regarding who pays, who collects the data, who maintains the data, who maintains and evolves the software.”
To illustrate the tools necessary, Uzzo discussed the idea of a “macroscope” and an existing tool called the FreeSpace Manager. The macroscope is an expandable and integrated set of applications that scientists
can use to share scientific data sets and algorithms and to assemble them into workflows.13 Uzzo and NYSCI, in cooperation with the School of Library and Information Science at Indiana University, are developing systems that allow museum visitors to create, format, present, and mine data the way scientists do. The macroscope would help to identify patterns, trends, and outliers in multiple large-scale data sets, whether static or streaming.14 Such tools can continuously evolve as scientists add and upgrade existing plug-ins and remove obsolete ones—all with little or no help from computer scientists.
To support collaborative data sharing involving multiple data types and streaming, the University of Illinois at Chicago is developing the Scalable Adaptive Graphics Environment (SAGE), a central element of which is the FreeSpace Manager.15 SAGE is a physical room whose walls are made from seamless ultra-high-resolution displays fed by data streamed over ultra-high-speed networks from distantly located visualization and storage servers. SAGE allows local and distributed groups of researchers to work on large distributed heterogeneous data sets. (To illustrate, users could be simultaneously viewing high-resolution aerial or satellite imagery, as well as volumetric information on earthquakes and groundwater.) The FreeSpace Manager provides an easily understood and intuitive interface for moving and resizing graphics on the display, giving users the illusion that they are working on one continuous computer screen, even though each of their systems is physically separate. The FreeSpace Manager is similar to a traditional desktop manager in a windowing system, except that it can scale from a single tablet PC screen to a desktop spanning more than 100 million pixel displays.
Uzzo noted that he sees increasing demand for using these kinds of sophisticated tools in the Hall of Science, not only for accessing data sets virtually within the museum walls, but also for bringing such tools into remote K-12 science classrooms through NYSCI’s Virtual Visit teleconferencing program. In the past, NYSCI outreach efforts were based solely on synchronous interactions between a museum facilitator and classroom students. However, because of the complexity of the science NYSCI is teaching today, many data inputs are needed, and the remotely located students need to be able to share and interact with those data as well.

13 Joël De Rosnay, 1975, The Macroscope, New York: Harper & Row Publishers.
14 Katy Börner, 2011, “Plug-and-Play Macroscopes,” Communications of the ACM 54(3):60-69. Available at http://ivl.slis.indiana.edu/km/pub/2010-borner-macroscopes-cacm.pdf. Last accessed February 7, 2011.
15 Andrew Johnson, Jason Leigh, Luc Renambot, Arun Rao, Rajvikram Singh, Byungil Jeong, Naveen Krishnaprasad, Venkatram Vishwanath, Vaidya Chandrasekhar, Nicholas Schwarz, Allan Spale, Charles Zhang, and Gideon Goldman, 2004, “LambdaVision and SAGE—Harnessing 100 Megapixels,” presentation at the CSCW Workshop on Human Factors in Advanced Collaborative Environments, Chicago, November 6, 2004. Available at http://www.evl.uic.edu/aej/papers/CSCW-SAGE.pdf. Last accessed February 7, 2011.
In general, Uzzo argued for teaching a new generation of science students these kinds of e-science data processing and interaction skills, thereby creating the demand side for the infrastructure that e-science will need to succeed. He further suggested that informal learning institutions may be in the best position to advance the cause of e-science because these institutions have an opportunity to move computational thinking beyond the traditional bounds of today’s computer science by helping to close the gap between science as a research activity and learning about science. These institutions are also in a good position to conduct learning research around this topic and then to integrate such research into professional development and curriculum development for K-12 formal education.
For Jill Denner, a developmental psychologist with Education, Training, Research (ETR) Associates, the programming of computer games provides an appropriate context for the development of computational thinking in middle school students.
Denner and her colleague Linda Werner, a computer scientist from the University of California, Santa Cruz, argue that the programming of computer games connects to computational thinking in several ways. One important connection is in the modeling of abstractions—in Denner’s words, “Youth are engaging in modeling abstraction while programming a game when they create a model of their make-believe world, which includes creating variables, new methods, and thinking at multiple levels of abstraction, such as how the player will interact with the game and what the goal of the game is.” A second important connection between the programming of computer games and computational thinking is to algorithmic thinking. To make their games playable in the way they envision them, students must understand when and how to program using sequential, parallel, or conditional execution, and how to create a logical process through which a player can interact with the game.
In one pilot study involving 30 students using Storytelling Alice to develop 23 different games, Denner reported that students used specific programming constructs showing evidence of computational thinking (i.e., algorithmic thinking, abstraction and modeling) such as event handling, parallelism, additional methods, parameters, alternation, iteration, and conditional execution:
Many of the students created their own methods and used parameters which we see as examples of modeling and abstraction. In their final games there was limited use of alternation and limited use of iteration.
In part this is due to how we taught them when we bundled the if/else construct with another complex programming construct that made it more complicated for them to learn it. We feel they didn’t incorporate loops due to motivation; many of the students didn’t see the point of creating loops when they could just repeat code segments. They didn’t see the point of creating more efficiency.
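The loop avoidance Denner describes is easy to illustrate. The students worked in Storytelling Alice, a drag-and-drop environment; the snippet below is a hypothetical Python analogue of the same choice:

```python
# The students' approach: repeat the code segment verbatim.
def hop_three_times_unrolled(log):
    log.append("hop")
    log.append("hop")
    log.append("hop")

# The equivalent loop the students tended to avoid. Behavior is
# identical for n=3; the payoff comes when n grows or must change.
def hop_n_times(log, n=3):
    for _ in range(n):
        log.append("hop")

a, b = [], []
hop_three_times_unrolled(a)
hop_n_times(b)
print(a == b)
```

Seen this way, the students’ reluctance is rational at small scale: the efficiency argument for loops only becomes compelling when the repetition count is large or variable, which is a motivation their short games rarely supplied.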
Denner and Werner’s approach is based on students engaging with computer games along a use-modify-create continuum. First, they play other students’ games and work through three tutorials that teach programming with Alice. The goal of the “use” phase is for students to learn about the Alice interface and the kinds of games that they might make. Second, students learn to modify an existing game through a series of graduated self-paced challenges. The goal of the “modify” phase is to experiment with different strategies and their results, and to build an understanding of the mechanisms that they will use to program a game. Last, students create an original game de novo.
Denner reported on several lessons learned from the project:
• Individual differences matter a great deal. Denner pointed out that students have different starting levels, willingness to fail, and motivations. Some students prefer to learn by playing around, whereas others prefer to follow step-by-step instructions to carry out a task. Some students are afraid to fail and thus are unwilling to tackle problems that entail the risk of failure (e.g., using a concept incorrectly). Other students are intrepid explorers who are curious, creative, and undaunted if and when they fail at doing something. Those unwilling to explore a range of strategies are unlikely to get beyond modification of an existing program and will thus never create a truly original game. Denner found it necessary to balance student engagement on a problem with motivating them to learn more complex or difficult concepts needed for their programs. Specifically, she suggests that to promote computational thinking during computer game design, teachers must:
—Be strategic with examples: students use what they see.
—Provide graduated instructional materials that can accommodate a range of programming experience and styles.
—Balance structure with exploration. It is important to encourage authentic interest, but also to provide enough structure to encourage games that include computational thinking concepts.
• Students program differently in pairs than by themselves. Compared to students working individually, students in pairs spent more time doing programming and housekeeping tasks (e.g., saving and testing their code), whereas the students working by themselves spent more time doing things like screen layout, changing the appearance of the game, and
adding objects. For most students, pair programming is highly motivating and improves their ability to communicate concepts. When students have to work directly with a partner next to them on one computer, they have to explain their complex ideas simply so their partner understands. The quality of pair interaction determines the extent to which the students engage in computational thinking and persist in the face of challenges.
• The measurement of computational thinking requires multiple sources of information. Denner and Werner are analyzing several sources, including computer logging data that show what students are doing when programming a game. They also give students performance assessments to measure algorithmic thinking and abstraction, and code student games for frequency of aspects of computational thinking, as well as computational thinking patterns.
Most of this research has focused on groups that are underrepresented in computing—girls and Latinos. Denner reported that they faced a number of challenges in their middle school efforts to promote computational thinking among students in both high- and low-resourced schools. Challenges included mundane issues such as difficulties with hardware and software and with Internet access. Another challenge was creating effective instructional materials that help teachers with little or no training in computational thinking to support it among their students. Finally, they faced the challenge of motivating students to engage in sustained complex thinking in an after-school setting.
Lou Gross directs the National Institute for Mathematical and Biological Synthesis (NIMBioS), an organization supported by the National Science Foundation and by the Departments of Homeland Security and Agriculture.16 The primary goals of NIMBioS are to foster the maturation of cross-disciplinary approaches in mathematical biology and the development of interdisciplinary researchers who address fundamental and applied biological questions. NIMBioS has an education and outreach program that offers a variety of activities for K-12 students and teachers, university and college students and faculty, professional industry audiences, and the general public. These activities focus on education at the interface of mathematics and biology.
16 More information about and further description of the National Institute for Mathematical and Biological Synthesis (NIMBioS) can be found at “NIMBioS,” website, http://www.nimbios.org. Last accessed February 7, 2011.
Gross argued that key to computational thinking is tying a computational worldview to a student’s everyday experience. In his words, “Student comprehension of computation and appreciation for its importance in everyday experience would be enhanced at every level of the educational experience if we encourage connections between computation and the models (internal to their experience, as well as those used to understand scientific processes) students use, and the data they collect from their own observations of the world around them.”
To support this perspective, he offered several examples.
His first example involved an everyday problem—how to pick a checkout line at a grocery store. What variables might affect one’s decision? Workshop participants suggested line length, the presence or absence of a bagger or of someone writing a check, the number of items in a person’s cart, and whether the line is an express line. Gross pointed out that high school students often ask about the presence or absence of someone cute in the checkout line, thus illustrating the point that the criteria for decision making depend on the nature of the model involved and its purpose.
His second example involved a simple game about which students are asked to make a prediction. The game involves groups of three to five students with a cup containing one blue and one yellow bead, and a separate supply of blue and yellow beads. Students are asked to pull a bead from the cup at random and then to place it back into the cup along with another bead of the same color, and then to repeat this procedure until there are 20 beads in the cup. Students are asked to predict what will happen in their cup. (An important advantage of this example is that it is easy for students to collect data in a single class.)
A typical prediction is that the cup will become mostly blue or mostly yellow depending on the color of the bead first chosen. What happens in fact is that for any given cup, the percentage of blue beads converges to a fixed fraction, but the fraction is different for different cups. With an enormous number of cups, the result is that nearly every fraction from zero to one is represented, and in the limit of an infinite number of cups, the distribution from zero to one is uniform. As for the typical prediction, drawing a blue bead first does make a high ultimate fraction of blue beads more likely, but it does not guarantee that outcome.
The cup scheme described is known as a Polya’s urn, and the resulting sequence of bead configurations is described by a martingale process—the system has built-in dependencies and a feedback structure that is a very common property of biological systems at many levels. Gross uses this example to extract three primary conclusions:
• Randomness can lead to order. Although the underlying system is random (because the color of the beads drawn cannot be predicted exactly), each cup approaches having a fixed fraction of each bead, but the fraction is different in different cups.
• Randomness can lead to complexity. Combining outcomes from different types of beads makes highly complex outcomes possible. That is, with a large number of beads and many beads drawn, one can develop any number of stable outcomes.
• Because random processes can produce outcomes of such complexity, once natural or artificial selection is added, the result is a powerful mechanism to explain the complexity of the world around us. This is the basis of much of the explanatory power of modern biology.
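These conclusions can be checked with a short simulation of the urn scheme. The sketch below is a straightforward implementation of the rules as stated; the cup size and number of cups are illustrative choices:

```python
import random

def polya_urn(n_final=20, seed=None):
    """Simulate one cup: start with one blue and one yellow bead; draw a
    bead at random and return it with another bead of the same color,
    until n_final beads are in the cup. Returns the final blue fraction."""
    rng = random.Random(seed)
    blue, total = 1, 2
    while total < n_final:
        if rng.random() < blue / total:   # a blue bead was drawn
            blue += 1                     # so a blue bead is added
        total += 1
    return blue / n_final

# Many independent cups: each settles toward its own fraction, and
# across cups the fractions spread out (uniform on (0,1) in the limit).
fractions = [polya_urn(n_final=200, seed=i) for i in range(1000)]
print(f"mean fraction across cups: {sum(fractions) / len(fractions):.2f}")
print(f"spread: min {min(fractions):.2f}, max {max(fractions):.2f}")
```

Running the same cup with a larger `n_final` shows the within-cup convergence, while the spread across cups shows why the "mostly blue or mostly yellow" prediction is only partly right.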
A third example provided by Gross involved descriptive statistics using personal data. The question posed to students is what happens to one’s height overnight. Students make predictions about their change in height and consider factors that might affect height, such as their height when they go to bed, their gender, the amount of sleep they had last night, the amount of alcohol they consumed last night, and so on. They collect these data over four nights and then explore the data using basic descriptive statistics, bar charts, scatter plots, and regression analysis.
The resulting data set is easily understood, is multivariate, has potential multiple causal factors, and illustrates problems with sampling and outlier effects. For example, a height measurement might be recorded as 800 mm. Since everyone realizes that no one in the group is only 800 mm in height, the example illustrates the point that bad data are sometimes collected and must be discarded. Graphing the data in different ways to show possible relationships between different combinations of two variables provides insight into multivariate relationships, thus illustrating that simple descriptors (e.g., measures of central tendency and dispersion) may not be adequate to describe what’s going on.
The example can also motivate a discussion of the importance of institutional review boards in approving experiments dealing with human subjects.
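The data-cleaning step in this example can be sketched in a few lines. The measurements below are hypothetical, not real student data; the 800 mm entry plays the role of the obviously bad record in the text:

```python
import statistics

# Hypothetical overnight height measurements in mm (illustrative only).
heights_mm = [1652, 1648, 1710, 1655, 800, 1698, 1703, 1660]

# A simple plausibility filter: discard readings outside a human range.
clean = [h for h in heights_mm if 1000 <= h <= 2200]

print(f"kept {len(clean)} of {len(heights_mm)} readings")
print(f"mean with outlier:    {statistics.mean(heights_mm):.0f} mm")
print(f"mean without outlier: {statistics.mean(clean):.0f} mm")
print(f"median (robust):      {statistics.median(heights_mm):.1f} mm")
```

Comparing the two means makes the pedagogical point concrete: a single impossible reading drags the mean far from the data, while the median barely moves, motivating both data cleaning and the choice of robust descriptors.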
The final example was intended to introduce students to mathematical notions of vectors, matrices, Markov chains, equilibrium, and stability. Gross’s example began with an aerial image of Washington, D.C., pulled from Google Earth. He asks students, “How would you describe this image?”
The image shows buildings, roads, trees, and other typical topographic features. But students eventually describe the image by saying how much of the image is this color or that color, how much is in buildings, how much is in roads, and so on. They basically then figure out that
that’s what a vector is, that they’re describing the image as a vector in which the components consist of the fraction of the image that is of each type. One interpretation of this vector is that it represents a probability distribution of the landscape for a discrete number of components. (Gross pointed out that the idea of using a vector to represent a multidimensional entity is independent of any discussion of probability.) Students also realize (with some coaching) that spatial aspects of the image are not included in the vector description. For example, the vector does not change if some of the buildings are moved around in the image. However, a vector description of an image taken in 1988 would differ from one of the “same” scene in 1949. That is, a time-varying vector describing a scene is one way to characterize change in land use over time. Students are then able to derive the basics of matrix multiplication by finding the fraction of each landscape type that transitions to each other type in each time period (the transition matrix of the Markov chain) and using this matrix to determine the landscape vector at future times. After a long time, corresponding to many matrix multiplications, the landscape vector is near equilibrium, given by the eigenvector for the dominant eigenvalue (one) of the Markov chain.
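The computation the students carry out can be sketched numerically. The land-use classes and transition probabilities below are illustrative, not taken from Gross's imagery:

```python
import numpy as np

# Hypothetical land-use classes and Markov transition matrix.
# T[i, j] is the fraction of class j that becomes class i each
# time period, so each column sums to 1.
classes = ["buildings", "roads", "trees"]
T = np.array([
    [0.90, 0.05, 0.10],   # -> buildings
    [0.05, 0.90, 0.05],   # -> roads
    [0.05, 0.05, 0.85],   # -> trees
])

v = np.array([0.2, 0.3, 0.5])   # landscape vector: fraction in each class

# Many matrix multiplications: v_{t+1} = T v_t.
for _ in range(200):
    v = T @ v

# The long-run landscape matches the eigenvector for the
# dominant eigenvalue (one) of the chain, normalized to sum to 1.
w, vecs = np.linalg.eig(T)
equil = np.real(vecs[:, np.argmax(np.real(w))])
equil = equil / equil.sum()

print("long-run landscape: ", dict(zip(classes, np.round(v, 3))))
print("dominant eigenvector:", np.round(equil, 3))
```

Printing `v` after a handful of iterations, rather than 200, also shows students the approach to equilibrium rather than just its endpoint.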
With this background in hand, Gross has his students use prepackaged software to demonstrate computational methods of looking at change across a landscape, e.g., coupling an image, a dynamically changing vector (displayed as a bar graph), and an overall descriptor. An example of prepackaged software for this application is EcoBeaker, a set of computational laboratories useful for analyzing ecological data; students also use code they develop in Matlab to analyze different types of landscapes.
• What are the relevant lessons learned and best practices for improving computational thinking in K-12 education?
• What are examples of computational thinking and how, if at all, does computational thinking vary by discipline at the K-12 level?
• What exposures and experiences contribute to developing computational thinking in the disciplines?
• How do computers and programming fit into computational thinking?
• What are plausible paths and activities for teaching the most important computational thinking concepts?
Christine Cunningham, Museum of Science, Engineering is Elementary Project
Taylor Martin, University of Texas at Austin
Ursula Wolz, College of New Jersey
Peter Henderson, Butler University
Committee respondent: Marcia Linn
Christine Cunningham from the Museum of Science in Boston spoke about its Engineering is Elementary (EiE) curriculum. The EiE project is developing an elementary school curriculum to help students learn about engineering. It integrates engineering with topics in elementary school science. EiE also conducts professional development of educators. The project has four goals:
• To increase children’s technological literacy. This is the primary driving idea underlying the project.
• To increase elementary educators’ ability to teach engineering and technology. The Museum of Science realized early that the first goal required teachers who understood something about engineering and technology, and that very few elementary school teachers have ever had any exposure to formal engineering.
• To increase the number of schools in the United States that include engineering at the elementary level. To introduce technology education into schools, it is necessary to convince schools and districts that there is actually room for it in the curriculum as they currently teach it.
• To conduct research and assessment to advance the first three goals and contribute knowledge about engineering teaching and learning at the elementary level. Both having and presenting research and assessment data is necessary to persuade schools—and to the extent that such data relate to other topics being taught, so much the better.
Cunningham went on to discuss the project’s lessons learned and the resulting best practices. The first principle was the importance of listening to teachers and involving them in every aspect of the development process. For example, because teachers are responsible for covering content that is largely prescribed by external influences, any new content (e.g., new disciplines or concepts) must integrate with or reinforce content or topics already being taught. Cunningham and her colleagues identified 20
topics that are commonly covered in elementary science programs, paired each with an engineering field, and illustrated the pairing with a particular technological device or process (Table 4.1). Cunningham stressed that understanding engineering habits of mind and mental processes is an important aspect of their work. They address this goal by developing specific curriculum units that focus on processes.
A second principle is to build on what teachers know or feel comfortable doing. It is well known that many elementary school teachers are
TABLE 4.1 Correspondences Between Elementary Science and Engineering
| Topic from Elementary Science Program | Engineering Specialty | Corresponding Technological Device or Process |
|---|---|---|
| Insects and plants | Agricultural | Pollinators |
| Wind and weather | Mechanical | Windmills |
| Simple machines | Industrial | Chip factory design |
| Balance and forces | Civil | Bridges |
| Solids and liquids | Chemical | Playdough process |
| Rocks and minerals | Materials | Replicate an artifact |
| Floating and sinking | Oceans | Submersible |
| Ecosystems | Environmental | Oil spill remediation |
| Human body | Biomedical | Knee brace |
uncomfortable with science, and Cunningham found that engineering (and presumably computational thinking) is even more terrifying. To address this issue, Cunningham and colleagues begin their presentations with exercises in literacy—an illustrated storybook for children. The story has significant engineering content but is presented as a reading exercise, so that at a minimum students will receive a very general introduction to the topic. This storybook also provides context for the engineering activities that the kids will be doing in class. Elementary teachers are generally quite comfortable teaching literacy, and so this is a gentle introduction to the new discipline of engineering.
Cunningham also noted the importance of articulating how new content and skills are responsive to existing educational standards, such as those from the International Technology and Engineering Educators Association (ITEEA) Standards for Technological Literacy, the National Science Education Standards from the National Academy of Sciences, and the math standards from the National Council of Teachers of Mathematics. Such standards could include, for example, core concepts of technology such as systems, processes, feedback, controls, and optimization; the design process as a purposeful method of planning practical solutions to problems; inclusion in the design process of such factors as the desired elements and features of a product or system or the limits that are placed on the design; and the need for troubleshooting.
From time to time, learning about engineering can be motivated in terms of meeting teacher goals that are not necessarily based in educational standards. For example, many elementary school teachers want to find ways to help their students work together in teams. Persuading students to work together, to play nicely, and to communicate what they’re doing is something that many teachers want to accomplish at the beginning of each year, and engineering education can often be an important part of such persuasion.
A related point is the importance of student evaluation. Both teachers and students pay much more attention to material when student understanding of such material will be evaluated. The evaluation process need not be a one-to-one correlation (i.e., we teach X and then students are evaluated on their knowledge of X), but if teaching X helps students to understand Y better, and Y is assessed, teachers will be more likely to continue teaching X.
Another lesson learned is the desirability of starting small. Teachers tend to be more willing to invest a couple of class periods to experiment with a new concept than an entire school semester or year. The success of one individual teacher with a particular concept or topic can catalyze others, as his or her students tell their friends about an interesting new experience in class. Other teachers hear of students’ positive reaction and
often want to try the concept or topic themselves. These efforts build grassroots support within the school for change.
Professional development and solid curricular materials are also important. Because elementary teachers are inexperienced with the subject matter of engineering, teaching materials have to be explicit and clear. For example, because learning objectives drive the specific experiences or exposures embedded in different curricular units, objectives need to be very explicit and specific—children will know X and be able to do Y—rather than high level and abstract. Learning objectives should also be few in number and relatively simple in scope so that a high degree of student mastery is possible. And the materials must provide ways of specifically assessing the scope and extent of student mastery and comprehension.
As for the pedagogical approach taken to the subject material, Cunningham and colleagues have found that hands-on experiences are particularly important for young learners. They have fielded many requests to replace physically manipulative experiences in handling objects with a click-and-drag interface on the computer that students can use to connect objects on the screen. But knowledge about the physical world that teachers take for granted cannot be assumed in students. For example, students don’t necessarily know that a fuzzy pompom will pick up pollen better than a smooth marble. That is in fact engineering knowledge, and it’s “common sense” only if one has real-world experience with pompoms and marbles.
Also, context matters a great deal to students, especially to girls and underrepresented minorities, who often lack a cultural frame for why they should care about learning about engineering and what it might be used for. The project takes great care to ensure that its challenges are inviting to students who are often underrepresented in STEM (science, technology, engineering, and mathematics); one core way it does this is by illustrating how engineers (and engineering) might help people, societies, or animals. The storybooks are an essential element of context-setting, but it is important to contextualize the entire learning experience and not just the beginning.
Cunningham closed by pointing out that some of the lessons above for introducing engineering into elementary school also applied to middle schools and high schools. Specifically, she underscored the importance of integrating the new material—in this case, engineering—with other things that these schools are already teaching. After that integration is achieved, the new material may become more primary—but emphasizing its importance as a primary focus from the start is a strategy that is not likely to succeed in getting it introduced in the first place.
Taylor Martin of the University of Texas at Austin discussed several themes that she regarded as important for the teaching of computational thinking:
• Personal empowerment. In Martin’s view, the teaching of computational thinking should impart to students a personal sense that they can in fact undertake intellectual tasks that they initially feel they cannot perform successfully. Underlying this sense is the ability to break a complex task into constituent parts, each of which makes some progress toward the goal. She illustrated this point with Web searches that yield information on how to do or fix something, suggesting that a Web search may well be the most sensible “first step” in solving a complex problem. Empowerment also implies that the computational thinker has confidence in being able to “talk” with the computer and to get it to do what he or she wants it to do.
• Motivation and authenticity. Martin noted the importance of personally relevant tasks for motivating people to undertake the hard work of learning new ways of thinking and acquiring habits of mind. However, she also pointed to the idea that using computers can be fun and motivating in and of itself for many individuals. Many individuals will explore the ins and outs of a computational device just for the fun of discovering what it can do, and pedagogy should take advantage of this phenomenon. She further observed that exploration of such devices is greatly enhanced when they are ubiquitously present—when the devices are not present, she argued, students are not thinking computationally.
• Habits of mind. Martin argued that when someone becomes facile with computational thinking, the notion of computer-as-device disappears, and what remains are the worldview and habits of mind associated with computational thinking. She noted that an experienced computational thinker cannot resist thinking of ways to save effort when repeated actions are required to accomplish a task—they are driven to develop computationally informed approaches to solve these problems. For example, these individuals understand that in large stores, even if there are four cashiers, having one line makes more sense than having four lines.
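Martin’s checkout-line observation is itself a small piece of queueing theory, and it can be checked with a short simulation. The sketch below is illustrative only: the arrival and service rates are invented, and shoppers in the four-line case are assumed to pick a line at random.

```python
import random

def average_wait(n_customers, n_servers, shared_queue, seed=42):
    """Simulate a checkout with n_servers cashiers and return the mean wait.

    shared_queue=True : one line feeds all cashiers (first come, first served).
    shared_queue=False: each customer picks a cashier's line at random.
    The arrival and service rates below are illustrative assumptions.
    """
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n_customers):
        t += rng.expovariate(3.5)              # ~3.5 arrivals per time unit
        arrivals.append(t)
    services = [rng.expovariate(1.0) for _ in range(n_customers)]  # ~1 customer/unit/cashier

    free_at = [0.0] * n_servers                # when each cashier next becomes free
    total_wait = 0.0
    for arrive, service in zip(arrivals, services):
        if shared_queue:
            lane = min(range(n_servers), key=free_at.__getitem__)
        else:
            lane = rng.randrange(n_servers)
        start = max(arrive, free_at[lane])
        total_wait += start - arrive
        free_at[lane] = start + service
    return total_wait / n_customers
```

With a shared line, no cashier sits idle while any customer is waiting, so the simulated mean wait comes out lower than with four independent lines, matching the intuition Martin describes.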
Regarding the infrastructure needed for teaching computational thinking, Martin said that present trends point to the disappearance of the computer “as computer” in the future—the computer will become increasingly invisible. If so, teachers of computational thinking will have to find pedagogical approaches that do not necessarily depend on the computer per se.
Nevertheless, today computers are a valuable instructional tool when teachers are comfortable with them, when activities are student-centered, and when enough equipment is available. From her perspective, schools do have many computers—but these computers are not well matched to the pedagogical tasks at hand. Martin is an advocate of an infrastructure based on open-source software, and she believes that it makes sense to install such computers in classrooms and see what students and teachers do with them. She also offered an important qualifier: if the resulting computational problem-solving environment lacks the tools that students would choose to use (e.g., Facebook, Gmail, and so on), the unavailability of familiar tools is likely to inhibit student learning.
With such an infrastructure, Martin’s goal is to make the underlying technology as transparent as possible to students, and thus “computational thinking” can be sneaked into student activities without intimidating them so that the computer is “a tool like a pencil, no big deal at all, an extension of your hands.”
Ursula Wolz described the use of journalism education and the language arts as vehicles for exploring computational thinking in a program at the Fisher Middle School in Ewing, New Jersey.17 Paraphrasing Gerald Sussman’s statements at the first NRC Workshop on Computational Thinking,18 she began her presentation by arguing that computational thinking requires first and foremost a language through which to express that thinking. Languages can be natural or formal. Language arts instruction focuses on the former, whereas mathematics instruction focuses on the latter. The emphasis on tying programming to mathematics instruction—adapting pedagogical strategies, curricular organization, and assessment methods from math—may lead math-averse students to believe that they can’t think computationally.
Wolz argued that demonstrating the relationship between computational thinking and language arts can facilitate integration of computational thinking into the mainstream curriculum. Essential to this enterprise is acknowledgment that computing must become as ubiquitous and integrated as the life sciences, starting with a computational analog to the butterfly chrysalis in preschool. But computing must be infused into all
18 NRC, 2010, Report of a Workshop on the Scope and Nature of Computational Thinking, Washington, D.C.: The National Academies Press. Available at http://www.nap.edu/catalog.php?record_id=12840. Last accessed February 7, 2011.
curricular areas so that it is not compartmentalized into a “special” activity such as art or gym. Its ubiquity should be through creative expression rather than curricular “chunking” of disassociated content. She noted the problem that adding computing to an overburdened curriculum requires taking something out (often the arts).
Teacher-initiated curriculum development confirms Wolz’s contention that learning programming and computational thinking must be contextualized. Scholars of computer science may study and examine formal languages in the abstract, leading to the traditional focus in programming courses on the constructs of a language (e.g., variables, expressions, loops, functions). The analog in modern language instruction would be to require elementary students to diagram sentences and master English grammar before being allowed to read literature or write stories of their own. Integrating computational thinking into the language arts curriculum affords students a natural arena in which to practice reading and writing in a formal language (e.g., Scratch) in a meaningful and motivating context.
Wolz argued that language arts programs are inherently flexible, thus inviting innovation. Journalism provides an ideal venue for civic engagement and what Seymour Papert called “serious fun.”19 Language arts is secure in K-12 curricula, and so hitching the computational thinking wagon to language arts helps to ensure that there will be a place for it in an increasingly packed curriculum. Further, 21st century literacy will require facility with as yet unimagined modes of expression that involve computational thinking.
The extracurricular program her project developed reinforces language arts skills and computational thinking by providing a collaborative model for a 21st century newsroom. Teacher-editors and student reporters are assigned a “beat” (e.g., politics, sports, business), research and develop a story, and then create text, graphics, video, and procedural animations in Scratch to post newsworthy stories on the Internet.
Journalism provides an operational context—that of principled storytelling and information dissemination in which students, as constructors of aggregated content (rather than just consumers), must inquire, create, build, invent, polish, and publish. All of these same notions arise in computational thinking as well. Wolz’s students iterate on defining the problem, researching it, drafting a solution, and testing it (in the language of journalism, they research, interview, draft, copy edit, and fact check). In the end, they publish.
For Wolz, the correspondence between journalism and computational thinking is deeper than mere process. Both are concerned with information access, aggregation, and synthesis (e.g., fact gathering, analysis). In both domains, there are huge concerns about reliability, privacy, and accuracy. Algorithm design is the computational thinking analog for the need for logical consistency in journalism. (Increasingly news articles have a quantitative aspect to them and are expected to contain logically consistent arguments based on reliable data.) Knowledge representation and the appropriate granularity are important as well. Last, she noted the importance of abstraction from cases in both domains.
The middle school teachers with whom Wolz collaborates took the initiative to bring the Interactive Journalism Institute into their classrooms. They view programming in Scratch as a mode of expression through which students practice language arts skills. The teachers integrate computational thinking concepts via Scratch projects and kinesthetic exposure to algorithms (à la CS Unplugged20) into curricula ranging from formal report writing to poetry. In technology and math classes, Scratch storytelling projects are used as rewards for completing assignments. Because of the gentle process through which Scratch programming and computational thinking are infused, the math-averse are not dissuaded, and those who want to improve their writing skills are encouraged. Crucial to this approach is minimizing informal coaching and emphasizing student-initiated project selection over didactic instruction in computational thinking. Enthusiasm for this approach is measured in the cultural diffusion throughout the school. Although only 6 teachers and approximately 50 students participated in the extracurricular program, 12 teachers included computational thinking in classes that reach approximately half of the school population of 900.21
Wolz believes her program demonstrates that reading teachers also have the capacity to teach elements of computer science effectively with the right support and tools. Three open questions remain: (1) How can computational thinking skills be assessed within this context? (2) How can the impact of computational thinking on language arts skills be assessed? and (3) How can this pedagogy be applied to other primary disciplines such as social studies, science, and math?
21 Ursula Wolz, Kim Pearson, Monisha Pulimood, Meredith Stone, and Mary Switzer, 2011, “Computational Thinking and Expository Writing in the Middle School: A Novel Approach to Broadening Participation in Computing,” ACM Transactions on Computing Education, forthcoming.
Workshop participants extended the discussion started at the first workshop concerning the nature of computational thinking and computational thinkers. In one perspective, Peter Henderson, formerly chair of the Department of Computer Science and Software Engineering at Butler University, framed his comments about computational thinking against the background of negative perceptions about computing and computer science. He pointed out the poor quality of software, noting that on a day-to-day level, we have all experienced the frustrations and difficulties of dealing with software artifacts that sometimes work and sometimes don’t, unexplained and unexpected system crashes, and so on.
He also noted that much of the news regarding careers in the information technology field is negative. Many parents remember the dot-com boom and bust, and even today, the news is filled with stories about greater outsourcing and offshoring of information technology employment. Rapid changes in technology make it difficult for people working with information technology to keep up their skills. Majoring in information technology is confusing, with all of the options in information systems, management information systems, computer science, informatics, computer engineering, software engineering, and so on.
Finally, Henderson argued that computer science is misunderstood by nearly everyone. He pointed to the NRC report on the first workshop on computational thinking and the lack of consensus on what computational thinking is, and suggested that asking computer scientists themselves fares no better: ask 10 different computer scientists what computer science is and you will get many different answers. So, he asked, “How can we advance the cause of a discipline that we don’t understand?”
As a unifying theme, Henderson described computational thinking as generalized problem solving with constraints. He argued that almost every problem-solving activity involves computation of some kind. Citing a Fred Brooks article, “The Computer Scientist as Toolsmith II,”22 Henderson said that for him, the toolsmith metaphor is a convenient umbrella under which the elements of computer science can be combined and then presented to the public in a manageable way. Computational thinking serves much the same purpose.
Henderson noted that humans learn through the use of concrete examples and through pattern recognition. Specifically, he argued that human understanding begins with concrete examples. Humans then identify patterns common to those examples (i.e., they generalize), they
22 Frederick Brooks, 1996, “The Computer Scientist as Toolsmith II,” Communications of the ACM 39(3):61-68.
specify those patterns clearly, they verify that those patterns are indeed valid, and then they proceed to name the patterns.
Henderson presented two examples. He described a problem presented on a TV series for preschool students entitled Thomas the Tank Engine. In one situation, Thomas is pulling two cars, one red and one green. They are on a track with a siding (connected on both sides), and the problem is to reverse the order of the two cars. Solving this problem requires the students to develop a computational algorithm.
His second example began with a set of number series: 0, 1; 0, 1, 1; 0, 1, 1, 2; 0, 1, 1, 2, 3; and 0, 1, 1, 2, 3, 5. The pattern is that the next number in each series is the sum of the previous two numbers; that is, Ni = Ni-1 + Ni-2 with N0 = 0 and N1 = 1. And then we name the sequence—Fibonacci numbers.
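Henderson’s progression—generalize, specify, verify, name—ends naturally in an executable specification of the named pattern. A minimal sketch (the function name is ours, not Henderson’s):

```python
def fibonacci(n):
    """Return the first n terms of the sequence Ni = Ni-1 + Ni-2, N0 = 0, N1 = 1."""
    terms = []
    a, b = 0, 1
    for _ in range(n):
        terms.append(a)   # emit the current term
        a, b = b, a + b   # advance the recurrence
    return terms
```

Calling `fibonacci(6)` returns `[0, 1, 1, 2, 3, 5]`, reproducing the last series above.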
Returning to his description of computational thinking as generalized problem solving with constraints, Henderson noted that discrete mathematics and logic are rich sources of examples and material for computational thinking, and thus that they are the foundational mathematics for computational thinking, useful for reasoning about computational processes.
Thus, Henderson would start with computational thinking activities in pre-K (e.g., reversing the two cars), although for the first several years, the term “computational thinking” would not be introduced explicitly. Only later would the notion of computational thinking be explored as such; this philosophy is consistent with Bertrand Meyer’s “successive opening of black boxes” view of learning object-oriented programming.23 For this preparation, traditional mathematics, discrete mathematics, and logical reasoning must be taught at all levels, and so, for example, an advanced placement course in discrete mathematics would replace the current AP course in computer science. A freshman discrete mathematics sequence would be introduced, similar to that currently present for calculus. This view is consistent with the traditional engineering educational model, which emphasizes the science and math foundations of the discipline early (e.g., physics, chemistry, calculus).
23 Bertrand Meyer, 1993, “Towards an Object-Oriented Curriculum,” Journal of Object-Oriented Programming 6(2):76-81.
• What is the role of computational thinking in formal and informal educational contexts of K-12 education?
• What are some innovative environments for teaching computational thinking?
• Is there a progression of computational thinking concepts in K-12 education? What are criteria by which to order such a progression? What is the appropriate progression?
• What are plausible paths to teaching the most important computational thinking concepts?
• How do cognitive learning theory and education theory guide the design of instruction intended to foster computational thinking?
Deanna Kuhn, Columbia University
Matthew Stone, Rutgers University
Jim Slotta, University of Toronto
Joyce Malyn-Smith, Education Development Center, Inc.
Committee respondent: Al Aho
Deanna Kuhn, a developmental psychologist at Columbia University, sees young learners as evolving through a number of different intellectual stages regarding scientific thinking, and she discussed how these individuals use data and evidence and how such use is relevant to their facility with scientific thinking.
The first level of development involves accepting the possibility of false belief, a level at which children come to understand that knowledge is constructed by human minds and therefore could be false. Often, children are able to realize the potential for false belief by distinguishing data and evidence from theories and claims. This requires a child to conceive of data as possibly not representing the complete reality—if the child sees data as representing a complete reality, there is nothing to distinguish the data from the claim and thus no task of intellectual coordination.
To illustrate this point, Kuhn described several data-gathering and reasoning exercises for kindergartners. According to Kuhn, kindergartners can do simple data representation and analysis. For example, when a class of kindergartners is asked the question “What is your favorite TV show?”, students are able to understand the process of collecting the data
and developing a bar chart based on the answers. They can also make simple inferences such as “more students like show X than show Y.” They are also very good at recognizing simple covariation in causal models, that is, “Did A cause O?” in simple cases.
On the other hand, they have difficulty with covariation in a multivariate context (e.g., Which A caused O?) or a negative antecedent and outcome (e.g., Not A and O). For example, when children were given information about a simple pattern of eating green food or red food and then were asked if there was a relationship between the food eaten and healthy teeth, students could see and recognize the pattern when there was one (find covariation) but could not recognize when there was no pattern (non-covariation). Instead they claimed that there was a pattern. Kuhn added that young children also have trouble making the connection between concepts like non-covariation and non-causality. In her view, the students were interpreting isolated instances of data rather than looking for larger patterns.
Kuhn also pointed out that although young children can do fundamental experimental design,24 they often close their inquiry prematurely. For example, if young learners are asked, “If we want to find out if the mouse that’s eating my cheese is big or small, which trap should we use?” and then are offered the options of a small-door trap or a big-door trap, the students can often understand that the small-door trap is going to be informative whereas the large-door trap is not. But, if they are asked, “If we use a big door on the mouse trap, can we say whether the mouse is big or small?,” they tend to say, “Yes, that means the mouse must be big.”
Premature closure also sometimes occurs when children are presented with confirming evidence. Children often stop the inquiry at this point, not realizing that the inquiry remains unfinished and that confirming evidence is not sufficient to rule out competing hypotheses.25 Mitch Resnick has made a similar argument that not just kids but also adults are one-cause thinkers—that even adults identify one cause (of potentially many) and assume a partial inquiry is completely explicative.
Young children can eliminate variables based on simple data patterns. For example, Kuhn noted the work of Alison Gopnik, which suggests that children as young as 3 or 4, when presented with an experimental setting in which monkeys sneeze depending on the presence or absence of different types of flowers, can see the covariation between the blue flowers and
24 Beate Sodian, Deborah Zaitchik, and Susan Carey, 1991, “Young Children’s Differentiation of Hypothetical Beliefs from Evidence,” Child Development 62(4):753-766.
25 Anne L. Fay and David Klahr, 1996, “Knowing About Guessing and Guessing About Knowing: Preschoolers’ Understanding of Indeterminacy,” Child Development 67:689-716. See works of Klahr and Fay, among others, on problems in premature closure or dealing with indeterminacy in young children.
the sneezing.26 They can also recognize that the red flower or the yellow flower made no difference. In other words, they can eliminate variables.
Despite the fact that even young children can successfully employ some of the intellectual skills of scientific thinking, they can have a hard time articulating how they know something. In particular, they do not understand the epistemological difference between claim and evidence.27 Although a claim and data may tell a consistent story, they are not the same thing. For example, while looking at a photo of a boy standing on an award podium with a sign labeled with the number “1” and holding a trophy, a child is asked, “How do you know that this boy won the race?” A child will often answer not with evidence of how he knows (e.g., “He is holding a trophy” or “The podium has a number 1 on it”) but with a theory of why the outcome makes sense (e.g., “His sneakers were fast”).
Even children up to 12 years old tend to focus on evidence and data fragments that support their story, while ignoring or minimizing those that do not. For example, in explaining what causes an avalanche, a student may report that in case A, it was the slope angle that caused an avalanche. Yet the same student will claim that in case B, the slope angle did not make a difference because the slope angle was small and something else caused the avalanche—that is, these older students have trouble distinguishing between a variable and a variable’s magnitude. The educational challenge at this level is to help the child see the data as evidence rather than as an example of a favored claim. Kuhn argued that when a child exercises control, a sort of meta-awareness over this sorting and attribution process, true scientific thinking can begin.
Finally, Kuhn argued that students’ facility with experimental design or controlled comparison develops with engagement and practice, even in the absence of direct instruction (although the environment itself must afford opportunities for practice). That is, such skills are naturally useful in an appropriately rich environment, and students begin to recognize and apply these distinctions naturally even without a lot of explicit instruction.
Matthew Stone is a computational linguist at the Rutgers University’s Department of Computer Science and Center for Cognitive Science. Stone works at the undergraduate level developing light programming courses
26 Alison Gopnik and Laura Schulz, 2004, “Mechanisms of Theory Formation in Young Children,” Trends in Cognitive Sciences 8(8):1364-1366.
27 Deanna Kuhn and Susan Pearsall, 2000, “Developmental Origins of Scientific Thinking,” Journal of Cognition and Development 1:113-129.
geared to non-computer science majors, particularly in the humanities and social sciences, and to those who tried to avoid math in high school. His efforts explore alternatives to programming-focused curricula to teach these students computational thinking concepts.
Stone emphasized the importance of three key ideas in teaching computational thinking:
• The universality of computing devices. Universality explains why it can ultimately be easier to design a machine that does many things (or everything) rather than one that does just one particular thing. Universality also underlies important concepts in computer science such as Turing’s theorem, as well as why and how programming languages are useful. In a class oriented toward non-computer science majors, it is impractical and overwhelming to discuss building digital logic, but the use of Jacquard looms to control the weaving of patterns is well within their reach.
• Algorithmic approaches to problem solving. The notion of an algorithm as a deterministic way of solving a problem is of course important, but an algorithmic approach to problem solving calls for fitting different algorithms together into an overall solution in a way that is worth the effort of doing so. This approach to problem solving explains why people will pay for programs and pay for programmers to write them. It provides historical context for the origins and evolution in society of computing, sorting, and tabulating. It inspires the right kind of thinking about algorithms for everyday problems, such as choosing which checkout line to go to in the grocery store. Stone reported that non-computer science students found it relatively easy to understand radix sorts and binary searches. Such examples can be used as a basis for motivating a solution to a problem such as searching through a billion randomized slips to find 60 specific items.
• The importance of representations as correspondences between symbols and the physical world. With symbols, one can build mechanistic operations to track truth in the physical world, so that mechanical operations have broader meaning. This point explains why computers can be used for entertainment, music, and video; why biological systems are thought of as carrying information; and why computers appeal to cognitive scientists as a model of the mind. It is likely too difficult to delve in depth into representation at the level of understanding the meaningfulness of symbols in John Searle’s Chinese room,28 but there are many examples of digital representation in everyday life that connect with issues of generality and algorithms in an information economy.
28 John Searle, 1980, “Minds, Brains and Programs,” Behavioral and Brain Sciences 3(3):417-457.
Virtually everyone knows Facebook and Google, and they all know about online banking.
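Stone’s billion-slip example hinges on the idea that sorting once makes every subsequent lookup logarithmic. A minimal sketch of that strategy (the helper name and sample data are ours, not Stone’s):

```python
from bisect import bisect_left

def find_items(slips, targets):
    """Sort the slips once, then locate each target by binary search.

    Each lookup costs about log2(n) comparisons, so 60 lookups among a
    billion slips need roughly 60 * 30 = 1,800 comparisons, not a
    60-billion-comparison linear scan.
    """
    ordered = sorted(slips)
    found = {}
    for t in targets:
        i = bisect_left(ordered, t)               # leftmost position where t could sit
        found[t] = i < len(ordered) and ordered[i] == t
    return found
```

For example, `find_items([7, 3, 9, 1], [3, 8])` returns `{3: True, 8: False}`.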
Stone argued that these three ideas are at the core of computational thinking, and that they are useful for people who are not programmers but who engage in work that does not require a mathematical or design background. By exploring these ideas in an elementary, non-technical context, Stone felt that he was laying the foundation for allowing an initial understanding to grow into a fuller one.
Jim Slotta’s presentation addressed technology environments for K-12 classrooms and how these environments could support different pedagogical models.
Slotta first discussed the Web-based Inquiry Science Environment (WISE), which is intended to provide support (i.e., scaffolding) for inquiry activities in science classrooms. Students in these classrooms work collaboratively on projects that range in duration from 2 days to 4 weeks. A typical WISE project might engage students in designing solutions to problems (e.g., design a desert house that stays warm at night and cool during the day), debating contemporary science controversies (e.g., the causes of declining amphibian populations), or critiquing scientific claims found on websites (e.g., arguments for life on Mars).29
Tools and interactive materials provided in the WISE environment support collaborative activities. These tools include “inquiry maps” that provide the student with options for what to do next (e.g., to display a Web page that can be used in support of student designs or debates; to view a WISE notes window or a whiteboard, an online discussion, or journals; or to run an applet for data visualization, a simulation, or a causal map). The WISE environment also includes cognitive guidance to promote reflection and critique. WISE provides embedded assessments of student conceptual understanding of the inquiry processes they use, and it supports teachers in adopting pedagogical practices that facilitate inquiry approaches to science education.
Slotta described WISE as a largely successful educational innovation for inquiry that was adopted by tens of thousands of students and teachers. In addition, it enabled research on pedagogical models and patterns,
29 Jim Slotta, 2004, The Web-based Inquiry Science Environment (WISE): Scaffolding Knowledge Integration in the Science Classroom, University of California, Berkeley, available at http://tccl.rit.albany.edu/knilt/images/3/36/Slotta_WISE.pdf. Last accessed February 7, 2011.
pioneered authorware and portal technologies, created an “open” curriculum library, and fostered disciplinary partnerships with NASA, the American Physical Society, and the Concord Consortium. Nevertheless, it had a number of significant limitations:
• Content was not portable across platforms. Although the curriculum library was open in the sense of being accessible to anyone, anyone who wanted to use elements of it was required to access the content in WISE. Thus, users found it very difficult to make changes to any curricular element.
• Individual students were unable to interact with their peers in real time in WISE, which ran within the browser but was unable to interact with other applications that were running on the machine.
• Implementing contingent behavior in a curricular element was made unnecessarily difficult by the technology. It is often desirable for an educational application to support execution paths that differ depending on what a student does, but WISE made it hard to design applications to do so.
To address some of these limitations, Slotta and his team engaged with the computer science department to develop a new open-source architecture called SAIL (Scalable Architecture for Interactive Learning) for content display and manipulation that separated the various layers of the learning environment (and in particular separated the content and the user interface) wherever possible.
Using this architecture, WISE was reimplemented. Renamed WISE 3, the reimplementation provided all of the functionality of the original environment but also supported easy interactions with other software, such as the Concord Consortium’s Molecular Workbench. On the other hand, it was implemented in Java, which the developers found too limiting from a performance standpoint. The next version, WISE 4, runs on the Web. It retains most SAIL elements, such as portals for managing user groups and the XML structures, as well as some of the metadata, and adds a new presentation layer.
SAIL has been used in a number of other science education efforts as well. For example, SAIL is an integral element of the Science Created by You (SCY) project of the European Union.30 SCY is a large project that provides a flexible, open-ended learning environment for adolescents. Within this environment—called SCY Lab—students engage in personally meaningful learning activities that can be completed through constructive
and productive learning. Examples of learning activities include browsing for information, generating a hypothesis, and distributing tasks.
Central to the SCY environment is an emerging learning object (ELO), which is essentially any student-created content that contributes to the learning process. Such content would include notes, reports, simulations, graphs, concept maps, research questions, data, and so on. ELOs are intended to represent the knowledge of a student as he or she learns.
Learning activities are themselves clustered in learning activity spaces (LASs). For example, a LAS titled “Experiment” clusters activities such as “design an experimental procedure,” “run experiment,” and “interpret data.” A LAS also indicates the relationships between learning activities and ELOs.
SCY provides a variety of modeling and simulation tools that support collaborative learning activities. These tools help students produce ELOs and thus determine the type and format of each ELO, although the student supplies the content on his or her own.
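The relationship between ELOs and learning activity spaces can be sketched as a simple data model. The Python below is purely illustrative; the class and field names are invented for this sketch and do not reflect SCY Lab's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ELO:
    """An emerging learning object: any student-created artifact."""
    kind: str      # e.g., "hypothesis", "graph", "concept map"
    author: str
    content: str
    revisions: List[str] = field(default_factory=list)

    def revise(self, new_content: str) -> None:
        # An ELO evolves as the student's knowledge evolves.
        self.revisions.append(self.content)
        self.content = new_content

@dataclass
class LearningActivitySpace:
    """A LAS clusters related learning activities and their ELOs."""
    title: str
    activities: List[str]
    elos: List[ELO] = field(default_factory=list)

# A LAS titled "Experiment" clusters activities, as in the example above.
experiment = LearningActivitySpace(
    title="Experiment",
    activities=["design an experimental procedure", "run experiment",
                "interpret data"],
)
hypo = ELO(kind="hypothesis", author="student-1",
           content="More light increases plant growth.")
experiment.elos.append(hypo)
hypo.revise("More light increases growth up to a saturation point.")
```

The revision history is what lets an ELO "represent the knowledge of a student as he or she learns" rather than a static document.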
Slotta also noted that although the notion of establishing a knowledge community is not new, it has been difficult to implement. The basic idea of a knowledge community is that of students working collectively to aggregate and edit materials in ways that drive their own learning. It is more open-ended than a traditional classroom, and the community emphasizes patterns of discourse and distributed rather than centralized expertise. But in a curricular environment such as secondary school science (chemistry or physics, for example) that requires coverage of a particular body of subject matter, orchestrating the proceedings in a knowledge community and connecting them to specific learning objectives presents extraordinary challenges.
This complex orchestration of people, materials, resources, groups, conditions, and so on requires a sophisticated technology framework to support it. Slotta developed such a framework, called SAIL SmartSpace (S3).31 This framework can be regarded as a “smart classroom” infrastructure that facilitates cooperative learning in a milieu of physical and semantic spaces.
From a technical standpoint, S3 supports aggregating, filtering, and representing information on various devices and displays (e.g., handheld devices, laptop computers); locational dependencies (i.e., allowing different things to happen depending on the physical location of a student); interactive learning objects; and an intelligent agent framework. The S3
31 More discussion of S3 can be found in Jim Slotta, undated, A Technology Framework for Smart Classrooms: Enabling Complex Pedagogical Scripts, Ontario Institute for Studies in Education, University of Toronto, available at www.stellarnet.eu/index.php/download_file/-/view/558. Last accessed February 7, 2011.
environment is highly customizable and supports the coordination of people, activities, and materials with real-time sensitivity to inputs from students.
As an example of the learning space that S3 can support, Slotta described an S3 environment tailored to a mathematics unit intended to help students understand the relationship between different aspects of mathematics—content categories such as functions, relations, graphing, algebra, and trigonometry. In this implementation, Slotta and his colleagues developed a touch wall at the front of the room where students could interact with these materials. Students worked in groups at their local machines and then the aggregate of their local work appeared both on their local machines and on the touch wall where students could walk up and take turns touching and exploring the space. This environment gave students the ability to manipulate content from different categories and to see relationships between them.
During discussion after his presentation, Slotta noted that one of the advantages of these technology-rich learning environments is that they reduce the intellectual need to consider the technology directly. That is, these environments focus student attention on inquiry, reflection, and collaboration around subject-matter content, rather than on how to interact with the technologies per se—the technology thus becomes more transparent and more invisible to the student.
Joyce Malyn-Smith from the Education Development Center, Inc., began by noting the importance of designing and managing both school-based and informal learning environments. For learning to occur, she maintained, it is necessary to invite youth into our learning environments and to create a learning exchange.
She suggested that educating K-12 students is different from educating college students. The former have minimal career direction and few internally determined learning goals and objectives. Furthermore, college/university education may be about teaching, but in K-12 environments, successful education is about learning—and in particular, about understanding the learning needs of each and every child in a classroom.
Facilitating learning among today’s digital natives is challenging. Malyn-Smith argued that these individuals use their ubiquitous information technologies to learn anything they want to know at any time and anywhere and to any depth and sophistication that their interests and needs take them. Furthermore, their technologies are individualized and mobile, and so they live continuously inside their own portable learning environments. Teachers can exploit the pedagogical opportunities thereby offered by creating a learning exchange during which students and teachers share what they know with each other, especially in non-school contexts.
Citing a white paper titled “Computational Thinking for Youth,”32 Malyn-Smith said that today’s youth often have a substantial familiarity with technology tools and deep understanding of technology concepts that can be foundational to developing their ability to think and solve problems. In addition, one of the main conclusions of that paper is that learners of computational thinking need opportunities for thoughtful, reflective engagement with the phenomena represented.
For example, although nearly every middle school student learns from the textbook that trees help mitigate pollution, students in an after-school program can have a chance to go further, using modeling tools to map the trees in their school yard and record relevant data on species, health, growing conditions, and the like. With this abstraction of their school yard created in the form of maps and data tables, they can use automated models to calculate the benefits of the trees in terms of pollution removal and runoff mitigation. They can also model alternative growth scenarios as they either “plant” new trees, let the existing trees continue to grow, or remove the trees for expanded parking. Rerunning the model leverages the power of automation: the underlying parameters can be adjusted quickly, and the impacts seen immediately. But this iterative process simply does not fit in a school curriculum packed with hundreds of discrete topics that are connected loosely at best. Time allocations that allow for depth and complexity are part of the culture change needed for computational thinking to take root.
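The automated model the students rerun can be as simple as a parameterized function. The sketch below is hypothetical; the function name and every coefficient are invented placeholders, not values from any real tree-benefit model the program may have used.

```python
def runoff_model(num_trees: int,
                 canopy_m2_per_tree: float = 25.0,
                 pollution_removed_g_per_m2: float = 0.8,
                 runoff_intercepted_l_per_m2: float = 50.0) -> dict:
    """Toy schoolyard model: benefits scale with total canopy area.
    All coefficients are illustrative placeholders, not measured values."""
    canopy = num_trees * canopy_m2_per_tree
    return {
        "pollution_removed_g": canopy * pollution_removed_g_per_m2,
        "runoff_intercepted_l": canopy * runoff_intercepted_l_per_m2,
    }

# Rerunning the "experiment" under alternative scenarios is nearly free:
for scenario, trees in [("current yard", 12), ("plant more", 20),
                        ("expand parking", 4)]:
    print(scenario, runoff_model(trees))
```

The point is not the numbers but the workflow: students adjust a parameter, rerun, and compare, which is exactly the iteration that a packed curriculum rarely allows time for.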
Other advantages of non-school environments include curricular flexibility, staff capacity, and access to infrastructure and to programs, especially in rural areas.
Interrelated challenges have constrained many previous educational innovations, and computational thinking is no different, in Malyn-Smith’s view. Addressing any one of these challenges by itself will mitigate some limitations, but a successful implementation will require addressing them all.
According to Malyn-Smith, one important consequence of this rich milieu is that today’s youth are evolving their own definition of computational thinking through experience. These individuals may not know what to call it or be familiar with all of the technical terms, but they do know that they are engaged in a way of thinking that is different from that of
32 ITEST Small Group on Computational Thinking, 2010, “Computational Thinking for Youth,” Newton, Mass.: Education Development Center, available at http://itestlrc.edc.org/resources/computational-thinking-youth-white-paper. Last accessed February 7, 2011.
people who are not intensive users of technology, and they are applying this way of thinking to the world around them.
Two key research questions arise from this sort of student engagement with computational thinking. First, to what degree and in what ways does the technology expertise of youth contribute to their computational thinking? Second, how and to what degree can the use of technological tools, systems, and processes facilitate the transfer of learning to the STEM sciences and careers?
To understand better what skills and knowledge youth bring to the classroom experience, Malyn-Smith encourages teachers to think broadly about the knowledge base that students are developing in all of their activities, not just those provided in program settings. Although teachers need to know a student’s standing relative to the curriculum being taught each year in schools, teachers also should engage in conversations with students about their interests and what they are learning in other settings, such as in museums, through television and radio, by playing games, and through what they’re doing with their friends.
Teachers thus play a crucial role in helping students validate what they learn both in and out of school and connect their learning to the standards and benchmarks that define achievement in today’s society. Teachers also play a critical role in providing context so that students see the importance of what they learn and how to connect what they know and can do to the skills and knowledge that are valued in society.
As for content, Malyn-Smith argued the need for clarity regarding what computational thinking is about. In the absence of such clarity, “it will be impossible to get any consistency in schools because people won’t understand what the topic is about, or people will interpret its definition as seen through only their individual lens.” She added that effective nationwide teaching of computational thinking requires a strategic approach based on clear definitions and illustrations rather than a scattershot set of examples.
Malyn-Smith recognized the difficulties in achieving clarity when multiple parties have different views of the essential content. To address these difficulties, she thought that the computational thinking community would benefit from a consensus process to explicate what she called a learning occupation. A learning occupation does not correspond to a specific occupational title or description, but it represents instead the combination of the shared work tasks, knowledge, skills, and attributes required to perform a range of job functions across a group of related real-life occupations. In practice, it symbolizes a goal for education and training designed for workers who would be able to perform a broad variety of work tasks suitable to a large cluster of occupations.
To develop this learning occupation around computational thinking,
she called for a process that would hold job-analysis workshops, validate the information, develop performance criteria and assessment guidelines, develop notional skills standards, and then validate them in an integrated skill standards model. Such a standards model would include content, assessment criteria, and measures of what people need to know and do to qualify for beginning-level employment. The model would also contain an illustrative scenario of a routine work situation and a likely anticipated problem or breakdown.
Malyn-Smith contended that for computational thinking to get traction in the K-12 education community, it needs to be connected to frameworks and standards that are already implemented nationwide. An analysis of the Information Technology Career Cluster Initiative’s model, for example, provides a way to organize a hierarchy of skills and knowledge that can be repurposed to support the integration of computational thinking in the K-12 arena. At the most basic level, this information technology skills framework calls for literacy and the ability to use common technology applications. Further up the hierarchy is fluency with information technology, which involves core knowledge and skill sets of technology-enabled workers employed in any industry sector. At the highest level of this model are the skill sets necessary for IT producer or developer careers—those that involve the design, development, support, and management of hardware, software, multimedia, systems integration, and services.
Malyn-Smith proposed that this hierarchy could be adapted for an appropriate learning progression in computational thinking. She suggested that grades K-4 might be devoted to computational thinking literacy, career awareness, and computational skills for learning. Grades 5-8 would also focus on computational thinking literacy but would fold in career exploration and learning about computational thinking skills for various STEM careers. Grades 9-10 would address computational thinking for all careers—students could explore and experiment with computational thinking in a variety of different contexts. Grades 11-12 would focus on providing pathways to college and careers, especially those for which competence in computational thinking (and computer science and engineering) will confer significant advantages. Postsecondary education and training would separate into two tracks—specialized computational thinking skills and competencies that are useful for STEM professions, and the use of basic computational thinking applications and tools for professions in all career tracks and in all industries.
Finally, Malyn-Smith noted that the Department of Education has developed a number of career clusters organized around a similar framework. Each cluster model includes a core skill set called “IT applications” to which computational thinking concepts and ideas can be attached.
Understanding this organizing framework helps teachers visualize the skills progression that leads learners from technology literacy to technology careers. Once computational thinking is better defined and examples are developed to illustrate what it looks like in practice, a similar model might be developed to help educators and other stakeholders visualize the K-adult skills progression of computational thinking, from computational thinking literacy to STEM careers. Because the national community of educators already recognizes this framework, its members will be more inclined to accept and integrate computational thinking into their programs and curricula. The tools and resources developed to facilitate other programs using similar models can then be adapted to support the integration of computational thinking nationwide.
Jan Cuny of the NSF’s Broadening Participation in Computing Initiative discussed the CS 10K project, whose goals are to develop a new high school curriculum in computing and then to insert this revised curriculum so that it is taught in 10,000 schools by 10,000 well-prepared teachers by 2015.
The project focuses primarily on high schools because high schools have very little computer science education today. Cuny noted that without the high school piece, anything done at middle school will be lost and anything done at the college level will be insufficient. She further pointed out that of the few high schools that do offer “computer science” education, most do so through a vocational track focused on skills such as keyboarding rather than on fundamental computer science concepts such as abstraction. Last, she argued that the number of students who initially pursue computer science majors in college usually reflects the number of students who later graduate with a degree in the discipline. To increase the pool of computer scientists, it is necessary to provide high school students with opportunities for computer science education so that they enter college already interested in the discipline.
For the CS 10K project, the AP course for computer science is central. AP courses are in high demand in the nation’s high schools, even if these schools often resist adding new courses. Further, the AP program is the primary point of national leverage—rather than going school district by school district to win approval for a computer science curriculum, one can simply invoke the AP computer science standard. Cuny also hopes that the new AP computer science courses will be an impetus for college curriculum reform, much as revisions to the calculus AP test helped drive changes in university teaching of calculus.
The CS 10K project seeks to develop courses that are engaging, accessible, inspiring, rigorous, and focused on the fundamentals of computing and computational thinking. As for content, a set of computational thinking practices is integrated with material built around the computing themes of big ideas, critical concepts, and enduring understanding.
Cuny also pointed to the need for feeder courses to AP programs in high school. She proposed that introductory courses in computing could be built on what schools already teach about computer science. Cuny described one example of such a course in the L.A. Unified School District (LAUSD)—the Exploring Computer Science (ECS) course, taught by Jane Margolis since 2008, and currently taught to about 900 students in 20 schools across Los Angeles.33 In California, this course receives a general elective (“G”) credit, which makes it eligible for college-prep credit.
Cuny reported that this course has generated significant interest from educators around the country, and there have been a number of requests for teacher training for it. LAUSD has also created a mentoring and coaching program for its computer science teachers, who are almost always professionally isolated and benefit from outside reinforcement.
Cuny highlighted a number of university, non-profit, and industry partnerships, including a LAUSD-UCLA-Google partnership, a Georgia Tech-Wayne partnership exploring certification, and the NSF-UTEACH effort combining education majors with STEM majors. Georgia Tech is also looking at certifying experienced information technology workers and pairing them with teachers in its business school. The University of Delaware-Chester School District project paired Chester District schools with a service learning group at the University of Delaware that sends graduate students into classrooms to help teachers and kids use laptops in the classroom. Prior to this program, the district’s laptop-for-every-child effort had resulted in hundreds of laptops stacked in school closets because the teachers did not have the training to use them. Finally, Cuny mentioned the National Lab Day project, which works to connect scientists, including computer scientists, with classroom teachers.
In addition to the curriculum development component, Cuny noted other challenges as well, such as getting the new curriculum into the schools, teacher preparation and ongoing professional development, and so on. She particularly called attention to the current patchwork of state standards, credit issues, and certification requirements—in her words, “they are a mess.” Cuny and her colleagues are working with the Association for Computing Machinery (ACM) Education Policy Committee,
the ACM Education Policy Council, and the Computer Science Teachers Association to address some of these standards and certification issues.
Cuny summarized the challenges of introducing rigorous computer science education in high school as follows:
• We need computing classes at the local school level.
• We need standard certification and credit decisions at the state level.
• We need universities to step up and say that they will give credit for these courses.
• We need universities to step up and add computer science to their preferred list of courses for high school applicants.
Last, Cuny said she did not believe that computing and computer science fit well into current STEM education initiatives. She noted that as difficult as it is to train teachers who are already teaching computer science to teach even more computing to an increasingly rigorous standard, training science teachers who have little or no incentive to do so is even harder. In the long run, there is value in integrating computing into STEM education, but for now the CS 10K project serves as a kind of discipline-specific “race to the top.”
• What are the goals for teachers and educators to bring computational thinking into classrooms effectively? What milestones do we hope to reach in computational thinking education?
• How should training efforts, support, and engagement be adapted to the varying experience levels of teachers such as pre-service, inducted, and in-service levels?
• What approaches for computational thinking education are most effective for educators teaching at the primary versus middle school versus secondary level? What methods might best serve the generalist teaching approach (multisubject/multidiscipline)? What method might best serve subject specialists?
• How does computational thinking education connect with other subjects? Should computational thinking be integrated into other subjects taught in the classroom?
• What tools are available to support teachers as they teach computational thinking? What needs to be developed?
Michelle Williams, Michigan State University
Walter Allan, Foundation for Blood Research, EcoScienceWorks Project
Jeri Erickson, Foundation for Blood Research, EcoScienceWorks Project
Danny Edelson, National Geographic Society
Committee respondent: Larry Snyder
Michelle Williams of Michigan State University discussed her experiences in facilitating teacher professional development in support of computational thinking-based science education.
Project and Curriculum
Williams and her colleagues originally set out to explore understanding by students in grades 5-7 of genetic inheritance of traits through an NSF CAREER grant for a project titled “Tracing Children’s Developing Understanding of Heredity over Time.” The project curriculum was developed under the Web-based Inquiry Science Environment (WISE) instructional framework and aligned with the state and national science standards set forth in a number of works,34 and it has been adopted by the school district in which the project has operated.35
Williams argued that learning about genetic inheritance in middle school is a particularly interesting prospect because, although there is ample research at the secondary level indicating that students have many non-normative ideas about the topic, research is needed on middle and upper elementary school students’ understanding of genetics concepts.36
34 “STEM Education Statements and Letters,” website, American Association for the Advancement of Science, available at http://www.aaas.org/spp/cstc/docs/09_06_02education.pdf, last accessed May 23, 2011; Minnesota Department of Education, “2007 Minnesota Mathematics Standards and Benchmarks for Grades K-12,” website, Minnesota Department of Education, available at http://education.state.mn.us/MDE/Academic_Excellence/Academic_Standards/Mathematics/index.html, last accessed May 23, 2011; NRC, 1996, National Science Education Standards, Washington, D.C.: National Academy Press. Available at http://www.nap.edu/catalog.php?record_id=4962. Last accessed February 7, 2011.
35 Because of a recent change in the state curriculum standards that requires students to learn ecology in the sixth grade, Williams and her colleagues had to adjust their planned curriculum to teach ecology in addition to genetic inheritance.
36 Elizabeth Engel Clough and Colin Wood-Robinson, 1985, “Children’s Understanding of Inheritance,” Journal of Biological Education 19(4):304-310; Colin Wood-Robinson, Jenny Lewis, and John Leach, 2000, “Young People’s Understanding of the Nature of Genetic Information in the Cells of an Organism,” Journal of Biological Education 35(1):29-36; Dennis Borboh Kargbo, Edward D. Hobbs, and Gaalen L. Erickson, 1980, “Children’s Beliefs About Inherited Characteristics,” Journal of Biological Education 14:137-146; and Grady Venville, Susan J. Gribble, and Jennifer Donovan, 2005, “An Exploration of Young Children’s Understandings of Genetics Concepts from Ontological and Epistemological Perspectives,” Science Education 89:614-633.
For example, many students have difficulty understanding the contributions of both parents to the genetic makeup of their offspring.37 Students also have trouble understanding the concept of cells, for example, the structure and functions of cell organelles related to heredity.38 Finally, students often conceptualize gene and trait as equivalent and are unable to distinguish between genotype and phenotype.39
In the fifth- through seventh-grade sequence, students are expected to learn to understand concepts related to cell structure, cell function, mitosis, and biological reproduction (both sexual and asexual), and the notion that heredity is the transmission of genetic information from one generation to the next. For example, fifth-grade students are tasked with investigating why organisms have similar and different features. Seventh-grade students carry out more sophisticated investigations, such as studying Mendel’s law of segregation and using scientific evidence to make claims about the genotype and phenotype of an unidentified parent. At each level, the project provides scaffolding to help students learn how to use evidence to write better scientific explanations.
In Williams’ project, students use animations and visualizations to understand abstract concepts. For example, they use simulations of mitosis to understand phases of cell division, Punnett squares to determine the genotypes and phenotypes of different generations of plants, and the Audrey’s Garden animation to make distinctions between inherited and acquired traits.
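The Punnett-square reasoning that students practice can itself be expressed computationally. The following sketch is a generic monohybrid cross, not code from the WISE curriculum; the function names and the simple-dominance assumption are illustrative.

```python
from collections import Counter
from itertools import product

def punnett(parent1: str, parent2: str) -> Counter:
    """Cross two single-gene genotypes (e.g., 'Aa' x 'Aa') by pairing
    every allele of one parent with every allele of the other."""
    # sorted() normalizes allele order, so 'aA' and 'Aa' count together
    return Counter("".join(sorted(pair))
                   for pair in product(parent1, parent2))

def phenotypes(genotype_counts: Counter, dominant: str = "A") -> Counter:
    """Collapse genotypes to phenotypes, assuming simple dominance."""
    result = Counter()
    for genotype, n in genotype_counts.items():
        result["dominant" if dominant in genotype else "recessive"] += n
    return result

square = punnett("Aa", "Aa")
print(square)              # genotypes in Mendel's 1:2:1 ratio
print(phenotypes(square))  # phenotypes in the familiar 3:1 ratio
```

Enumerating all allele pairings is a direct computational restatement of Mendel's law of segregation, the same reasoning seventh graders apply when inferring the genotype of an unidentified parent.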
The project curriculum calls for substantial collaboration between students and teachers. Some of this collaboration comes in the form of training videos of other teachers who have been involved in WISE in general in other places, showing how they work in their role or how they use the computer as a partner, and so on. Collaboration also occurs during small working group sessions during teacher training.
37 Colin Wood-Robinson, 1994, “Young People’s Ideas About Inheritance and Evolution,” Studies in Science Education 24:29-47.
38 Enrique Banet and Enrique Ayuso, 2000, “Teaching Genetics at Secondary School: A Strategy for Teaching About the Location of Inheritance Information,” Science Education 84(3):313-351; Jenny Lewis and Colin Wood-Robinson, 2000, “Genes, Chromosomes, Cell Division and Inheritance—Do Students See Any Relationship?” International Journal of Science Education 22(2):177-195.
39 Jenny Lewis and U. Kattmann, 2004, “Traits, Genes, Particles and Information: Revisiting Students’ Understandings of Genetics,” International Journal of Science Education 26:195-206.
In the classroom, students work in pairs and the instructors consult with the students on their work, encouraging them to reflect on their learning by asking questions.
Teacher Professional Development
Williams and her team have invested considerable effort in teacher development. Research supports the proposition that teacher experience and content knowledge are important factors influencing student learning outcomes.40 Furthermore, state and federal educational standards have increased teacher accountability through increased standardized testing and assessment.
To provide sustained professional development for in-service teachers, the project includes half-day sessions for professional development, for which participating teachers receive release time; summer workshop sessions; and after-school professional development meetings. Through these development sessions, teachers can collaborate across grade levels to think about curriculum coherence. They are also able to access science materials and learning technology such as the Wisconsin Fast Plants41 and the Audrey’s Garden programs. Using the WISE Genetic Inheritance Curriculum, teachers in professional development can also think about how to integrate various computational models (e.g., simulations) into their teaching of genetics. They learn how to analyze students’ online work through embedded assessments and across-grade assessment items. Finally, teachers involved in this project interface with an instructional model in the curriculum that scaffolds students in using evidence to support claims.
Williams noted that some of her teachers were to some extent intimidated by the technology used to teach various concepts. She argued that
40 See, for example, Hilda Borko, 2004, “Professional Development and Teacher Learning: Mapping the Terrain,” Educational Researcher 33(8):3-15; Jodie Galosy, Jamie Mikeska, Jeffrey Rozelle, and Suzanne Wilson, 2008, “Characterizing New Science Teacher Support: A Prerequisite for Linking Professional Development to Teacher Knowledge and Practice,” paper presented at the American Educational Research Association Annual Meeting, New York, March 2008; Suzanne Wilson and Jennifer Berne, 1999, “Teacher Learning and the Acquisition of Professional Knowledge: An Examination of Research on Contemporary Professional Development,” Review of Research in Education 24(1):173-209; NRC, 1996, National Science Education Standards, Washington, D.C.: National Academy Press, available at http://www.nap.edu/catalog.php?record_id=4962, last accessed February 7, 2011; NRC, 2006, Taking Science to School: Learning and Teaching Science in Grades K-8, Washington, D.C.: National Academy Press, available at http://books.nap.edu/catalog.php?record_id=11625, last accessed February 7, 2011.
some level of discomfort was to be expected but that to mitigate this concern, she and her colleagues undertook several activities:
• They engaged in discussions with teachers from various grade levels across the district about the technology used.
• They spent significant amounts of time allowing teachers to use and interface with the technology as if they were students. Teachers reflected on and engaged with particular technologies, such as animations or some other types of visualizations, so that they would become accustomed to what their students would be doing. In this context, such reflection helps teachers to think about ways to make the subject clearer to students.
• They enlisted the assistance of teachers who were comfortable with technology to coach the technology-apprehensive teachers.
Williams also suggested that anxiety in one area can be conflated with anxiety in another. In one example, Williams explained that she worked with a former English teacher who had recently moved to science education. The teacher’s discomfort with the technology associated with the project seemed to stem more from “the fact that she doesn’t feel as confident with the science in general, and … technology kind of increases her anxiety.”
Williams discussed the challenges of sustaining professional development activities, which often end when development grants and other outside funding end. This fate did not befall Williams’ project, because the school district itself saw enough value in her program to adopt the project curriculum. She also pointed out that she built teacher support through open communication of outcomes with teachers. And she acknowledged the value of community support, for example, from parents who say they feel that their children are really learning and are excited about studying.
Williams drew several key conclusions from this project:
• It is critical to build relationships with key stakeholders that include principals, administrators, and parents, as well as the community at large.
• Teachers need adequate time to reflect on their teaching.
• Teacher-student collaborations can reduce anxiety for teachers and students.
• Teachers can enhance their own content knowledge and pedagogical knowledge by engaging in conversations about the curriculum and focusing on what their students do as well as what their students learn.
The EcoScienceWorks project was supported by the National Science Foundation ITEST (Information Technology Experiences for Students and Teachers) program to develop a computer-based curriculum and inquiry-based tools to teach ecology and environmental science topics in curricular units that also introduce basic computer modeling. Under the terms of the request for proposals for ITEST projects, the goal was to integrate programming into an existing curriculum but not to add additional content.
The teachers who participated in the EcoScienceWorks development effort were familiar with the use of activity-based lessons, and they were quite enthusiastic about the opportunity to collaborate with educators from Maine Audubon on developing field exercises. But they were also interested in using their laptops to better support their student learning. As one of them said, they no longer felt they needed to be “the sage on the stage,” and they were ready to have a more student-centered classroom. Teachers were tasked with writing a unit and lesson plans that would be built around the computer simulations and also with designing a field exercise that would go along with each of the simulations that were part of the software.
The development effort approached the use of programming subliminally, rather than as the primary focus of the effort. Downplaying the use of programming responded to the developers’ concern that some teachers might rebel because the Maine learning standards did not include programming, and thus it would be difficult to justify spending scarce classroom time teaching programming.
Instead, the effort emphasized the development of ecology simulations. Five different simulations, all of them based on Maine habitat and featuring ecology content that the teachers were already teaching in their classroom, were developed. After field-testing these simulations (and the corresponding unit and lesson plans), teachers reported that this simulation-based approach enabled them to teach ecology in their middle school science classrooms more effectively. With this experience behind them, the teachers were more receptive to teaching programming.
Allan and Erickson illustrated their work by focusing on a simulation called Runaway Runoff. In this simulation, students conduct experiments on phosphorus pollution using a simulated lake ecosystem. By collecting and graphing data, they discover the connections between phosphorus
level, algae growth rate, decomposition rate, and oxygen depletion, ultimately illuminating the ecological concept of eutrophication.42
The Runaway Runoff simulation depicts a lake ecosystem, with fish, zooplankton, and algae that are visible to students and bacteria that are invisible to students. In the first experiment, the simulation challenges the students to develop a food web by examining the contents of the digestive tracts of the trout and zooplankton.
In the second experiment, students examine decomposing algae in the lake ecosystem. Specifically, they change the level of phosphorus coming into the ecosystem and see how changes in phosphorus level affect the algae population in the lake and the concentration of dissolved oxygen. The students also learn that there are unseen organisms in this lake—the bacteria that act as decomposers.
In the final experiment, students are asked to predict the impact of increasing levels of phosphorus on the different populations of fish and zooplankton. Initially, students might predict that increasing levels of phosphorus result in increasing levels of algae, which in turn can support increased levels of zooplankton and thus, it would seem, an increase in the trout population. And so they might be surprised when they see that increasing the phosphorus levels leads to declining levels of trout. However, by running the simulations and observing the levels of dissolved oxygen, they can see that as algae increase, the bacteria use up some of the oxygen as they decompose dead algae, thus reducing the viability of the environment for the trout.
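The causal chain students discover lends itself to a simple computational sketch. The following toy Python model (not the actual EcoScienceWorks simulation; all rates, thresholds, and function names are invented for illustration) reproduces the counterintuitive outcome: raising phosphorus increases algae but ultimately lowers dissolved oxygen and the trout population.

```python
# Toy sketch of the causal chain behind eutrophication: more phosphorus ->
# more algae -> more bacterial decomposition -> less dissolved oxygen ->
# fewer trout. All coefficients are invented for illustration.
def run_lake(phosphorus, steps=50):
    algae, oxygen, trout = 10.0, 100.0, 20.0
    for _ in range(steps):
        growth = 0.1 * phosphorus * algae   # algae growth scales with phosphorus
        dead = 0.05 * algae                 # a fraction of the algae dies
        algae += growth - dead
        oxygen += 2.0 - 0.5 * dead          # re-aeration minus bacterial decomposition
        oxygen = max(0.0, min(100.0, oxygen))
        if oxygen < 40.0:                   # low oxygen stresses the trout
            trout *= 0.9
    return algae, oxygen, trout
```

Running the sketch with low and high phosphorus levels reproduces the surprise students encounter: the high-phosphorus lake ends with more algae but fewer trout.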
The Runaway Runoff simulation enables a cognitive cycle to occur. Students make a prediction; use the simulation for testing, tinkering, and playing; observe what happens; refine their mental model of how the system works; and then make further predictions. On the basis of essays written or posters created by students that describe how runoff affects lake ecology, Allan and Erickson believe that students learn to make fairly sophisticated mental models of the lake ecosystem.
Allan and Erickson also described the “Program a Bunny” environment. In this environment, the bunny is an agent that the student programs to find and eat carrots in a field. The environment is also probabilistic, so that carrots are not always located in the same places in the field, and thus a successfully programmed bunny must account for a degree of randomness in its environment. Students can test different programming strategies in a number of increasingly complex scenarios.
Program a Bunny is supplied with some initial programming and will run “out of the box.” But the initial programming is, by design, inadequate for bunny success. Thus, students must learn to modify the program. Modification of the program initiates a cognitive cycle similar to that of the Runaway Runoff simulation—the student observes what happens to the bunny’s success in finding carrots, develops a mental model of how the program works, and then thinks of another modification that is intended to further improve the bunny’s performance.
42 A sample student worksheet from the project can be seen at “Runaway Runoff Exercise 1: Who’s Who,” Worksheet, available at http://simbio.com/files/EBME_WSExamples/RunawayRunoff_WkSh1_example.pdf. Last accessed February 7, 2011.
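A minimal sketch, assuming invented rules (the names play, hop_right, and sweep are hypothetical, not part of the actual environment), conveys the structure of the exercise: the bunny’s “program” is a movement strategy applied each turn, and students improve it by covering more of the randomized field.

```python
import random

# Hypothetical sketch of a "Program a Bunny"-style exercise. The bunny's
# "program" is a movement strategy; carrots are placed at random, so a
# good program must cover the field rather than memorize positions.
def play(strategy, size=8, carrots=5, turns=60, seed=0):
    rng = random.Random(seed)
    # Random carrot positions (duplicates collapse, so there may be fewer).
    field = {(rng.randrange(size), rng.randrange(size)) for _ in range(carrots)}
    x = y = 0
    eaten = 0
    for _ in range(turns):
        if (x, y) in field:           # eat a carrot on the current square
            field.remove((x, y))
            eaten += 1
        x, y = strategy(x, y, size)   # then apply the programmed move
    return eaten

def hop_right(x, y, size):
    # The inadequate starting program: hops along a single row forever.
    return (x + 1) % size, y

def sweep(x, y, size):
    # A student's revision: sweep row by row, covering the whole field.
    x += 1
    if x == size:
        x, y = 0, (y + 1) % size
    return x, y
```

Because sweep visits every square hop_right visits and more, it never finds fewer carrots, whatever the random placement.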
Allan and Erickson also reported that students found it helpful to use a concrete representation of the bunny field—a tarp laid on the floor and marked off with a grid. Students would go back and forth between the tarp and the Program a Bunny simulation, and thus develop a better understanding of the rules needed for programming the bunny.
In the views of Allan and Erickson, the common theme between these two examples—ecology and programming—is that students can see that there are computational rules and logic underlying both environments. They believe that learning ecology through the use of agent-based simulations combined with an agent-based programming challenge provided their middle school students with a rich learning environment for computational thinking.
Danny Edelson oversees the National Geographic Society’s broad-based efforts to improve geographic education in the United States and around the world.43 He characterized the efforts as building “geo-literacy,” the ability to reason effectively about far-reaching decisions—the decisions that affect other people and places that members of 21st century society routinely face. Geo-literacy requires an understanding of how Earth’s interconnected human, ecological, and geophysical systems function, and the ability to apply that understanding to decision making in personal, professional, and civic settings.
Edelson focused on geo-literacy in his presentation, arguing that the skills required for geo-literacy have substantial overlap with those needed in computational thinking. This overlap is rooted partly in the fact that both geography and computer science are disciplines that promote, indeed require, systems thinking.
Specifically, Edelson argued that geo-literacy is essentially a systems view of the world—an understanding of the world as a set of interconnected human and social systems and physical environmental systems (requiring an understanding of both of these elements as systems and
how they interact with each other) and then the ability to apply this understanding in context for the consequential tasks that citizens and workers need to be able to perform in the world.
Edelson stated that he found it is very hard “to disentangle this kind of systems view of the world and geographic reasoning from computational thinking.” He went on to add, “I generated several different questions that I thought were all fascinating for which I have no answers at all. But I would like to use geo-literacy as a case that would be similar to lots of other science, natural science, social science, or other STEM disciplines.”
Edelson posed a number of questions at the geo-literacy/computational thinking interface:
• What is the relationship between this concept of geo-literacy and computational thinking?
• To what extent is this systems view of the world a form of computational thinking in the way it actually plays out in the practice of geography or science or environmental science?
• How are these two things supportive of each other?
• How does computational thinking contribute to development of geo-literacy, and how does an understanding of Earth’s systems contribute to development of computational thinking?
• How, if at all, might being an underdeveloped computational thinker impede learning of geo-literacy, and vice versa?
• How, if at all, might sophisticated computational thinking actually somehow interfere with developing an understanding in a scientific discipline like geography?
Edelson spoke of some of the issues that arise in understanding geographic data. For example, geographic data look continuous on Earth’s surface viewed from afar, but in fact are pixelated when viewed close up. The actual physical situation on the surface is continuous, but the instruments of the geographer can represent the data only as pixels with rigid and discontinuous borders between them. Edelson reported a conceptual change in students when they go from viewing a map as a continuous representation to understanding a map as a representation of discrete pixels or cells, with all the positive and negative implications of that for what they actually want to do with the data. (Pixelation means that it is impossible to determine from the data any reading that requires a smaller bin size, such as the ambient temperature on one’s birthday.)
Another issue arises with maps that use different colors to represent different temperatures. Although it makes physical sense to subtract two
temperatures (e.g., January’s temperature from July’s temperature), it does not make much sense to subtract yellow from red. That is, representations on the map cannot be manipulated in the same way as the underlying physical parameters. Edelson found that students could understand this paradox by considering the idea that maps are actually regular arrays of numerical data being represented pictorially. This conceptual step enabled students to understand what it might mean to “subtract January’s temperature from July’s temperature” and when they might want to perform that operation.
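This conceptual step can be made concrete in a few lines: once each map is seen as a regular grid of numbers rather than colors, “July minus January” is just an element-wise subtraction. The 2×3 grids below are made-up sample data, not real temperature readings.

```python
# Two small temperature "maps" as regular arrays of numbers (invented data).
january = [[-5, -2, 0],
           [ 1,  3, 4]]
july    = [[20, 22, 25],
           [26, 28, 30]]

def subtract_maps(a, b):
    # Element-wise difference of two equally sized raster grids.
    return [[av - bv for av, bv in zip(arow, brow)]
            for arow, brow in zip(a, b)]

# "July minus January": a new map of seasonal temperature differences.
warming = subtract_maps(july, january)
```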
In performing analytic work, Edelson noted the computational overtones of working with sets, doing queries, and understanding Boolean logic and Boolean operations. For example, a student might be asked to find counties in the United States whose African American population exceeds the Caucasian population. Such operations are critical to being able to analyze much geographic data effectively.
Sometimes, such operations go beyond manipulating logical relationships but involve set or spatial combinations. For example, a student might want to say, “I’ve got two regions that are outlines on a map; find me the intersection of those two regions,” or, “I have a list of cities that meet one criterion and a list of cities that meet another criterion; show me the intersection of those two lists.” Managing these operations intellectually calls for thinking about them as combinations in one sense and as spatial entities in another sense. That is, this kind of geo-literacy requires students to understand spatial relationships as analogous to a set.
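A small sketch with invented data illustrates both kinds of operation Edelson described: a Boolean query that filters records, and a set intersection that combines two criteria.

```python
# Invented records for illustration; not real census data.
counties = [
    {"name": "County A", "african_american": 120_000, "caucasian": 90_000},
    {"name": "County B", "african_american": 40_000,  "caucasian": 200_000},
    {"name": "County C", "african_american": 75_000,  "caucasian": 60_000},
]

# Query: counties whose African American population exceeds the Caucasian one.
majority = {c["name"] for c in counties
            if c["african_american"] > c["caucasian"]}

# Intersection: cities meeting one criterion AND another (invented lists).
near_water = {"Madison", "Oshkosh", "Green Bay"}
near_rail  = {"Milwaukee", "Green Bay", "Madison"}
both = near_water & near_rail
```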
Edelson noted that many problems of interest to the GIS (geographic information system) community involve constraint satisfaction, sometimes with multiple constraints. For example, Edelson and colleagues developed a high school environmental science course in which one of the challenges was to find an appropriate location for a coal-burning power plant in a region of Wisconsin. Students understood that one requirement for a large power plant is the nearby availability of a sufficiently large body of water to provide cooling for it. So they can use the query capability to identify the large lakes in that region. A second consideration is adequate proximity to some mode of transportation that will allow coal to be transported to the power plant. So they need the ability to find the regions that are close to railroads (also known as buffers). And then they need to combine the two requirements—the power plant must be close to a large body of water and to a railroad. In general, solving problems that involve satisfying multiple constraints requires algorithmic thinking.
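The multi-constraint siting task can be sketched as a filter that requires every constraint to hold at once. The site names, distances, and thresholds below are invented for illustration, not taken from the Wisconsin course.

```python
# Candidate power-plant sites with distances to the nearest large lake
# and the nearest railroad (invented data).
sites = [
    {"name": "Site 1", "km_to_lake": 1.0, "km_to_rail": 2.0},
    {"name": "Site 2", "km_to_lake": 0.5, "km_to_rail": 12.0},
    {"name": "Site 3", "km_to_lake": 9.0, "km_to_rail": 1.0},
]

def feasible(site, max_lake_km=2.0, max_rail_km=5.0):
    # Both constraints must hold simultaneously: close to cooling water
    # AND within the railroad buffer.
    return site["km_to_lake"] <= max_lake_km and site["km_to_rail"] <= max_rail_km

candidates = [s["name"] for s in sites if feasible(s)]
```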
Edelson closed his presentation by arguing that Earth models are best understood in terms of dynamic and spatial models. He illustrated the point by discussing a NetLogo model for infiltration and runoff processes in a region in the presence of precipitation. Dynamic simulations
demonstrate where the water runs off, and a student can determine the amount of water running off a given point in space at various periods of time. Modifying the runoff processes is necessary to demonstrate the effects of different land use conditions (e.g., a developed community has a surface that is much less permeable, and thus more water runs off, and students see dramatically higher and more rapid runoff showing up in that scenario).
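The land-use effect reduces to a simple relationship: the less permeable the surface, the smaller the infiltrated fraction of rainfall and the larger the runoff. The sketch below uses invented permeability values, not the parameters of the NetLogo model Edelson showed.

```python
# Toy runoff calculation: a fraction of rainfall infiltrates the surface;
# the remainder runs off. Permeability values are invented for illustration.
def runoff(rain_mm, permeability):
    infiltrated = rain_mm * permeability
    return rain_mm - infiltrated

forest_runoff    = runoff(50.0, permeability=0.8)   # porous forest soil
developed_runoff = runoff(50.0, permeability=0.2)   # largely paved surfaces
```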
In discussion, Mike Clancy suggested that the causal relationships depicted in these models are similar to the causal relationships entailed in understanding what a computer program actually does in execution, so that, for example, a student needs to understand what causes a program bug or a program to perform in a certain way.
Robert Panoff noted the importance of understanding limitations in the underlying data. In response, Edelson said that in his view, the issue of discrete versus continuous data is a placeholder for the whole issue of data quality, where it comes from, what you can and should be doing with it, and how you question it. He went on to say that anomalous data often catches people’s attention and gives them an opportunity to see what’s going on. He illustrated the point with an example taken from 1992-1993 when he was working with data provided by the National Center for Supercomputing Applications. This data set indicated a very strange warm spot in Europe, and all of the students noticed and asked about it. It turned out to be an anomaly in the models that generated the data or in the device. Many people argued for cleaning up that data so that the anomaly would not show up, but Edelson said that the anomaly was pedagogically valuable because it provided an important teachable moment.
Uri Wilensky asked about the importance of students collecting data themselves and then using that data to try to fit models to that data, rather than using data provided by others. Edelson concurred about the importance of collecting data, but said that he was not sure that data collection itself was computational in nature. He also said he was prepared to rethink that assertion.
• How can learning of computational thinking be assessed?
• What tools are needed to assess learning of computational thinking knowledge and capabilities? Which are available? What needs to be developed?
• What roles should embedded assessments play? What other assessments are needed?
• How can capabilities and skills of individuals be assessed when students are working collaboratively?
• How should the education community measure the success of its efforts? How can we compare the strengths and weaknesses of different efforts?
• What can be learned from efforts currently underway, and from efforts in our country and in other countries?
Paulo Blikstein, Stanford University
Christina Schwarz, Michigan State University
Mike Clancy, University of California, Berkeley
Derek Briggs, University of Colorado, Boulder
Cathy Lachapelle, Museum of Science, Engineering is Elementary Project
Committee respondent: Janet Kolodner
Paulo Blikstein is an assistant professor of education and (by courtesy) computer science at Stanford University. His presentation discussed implementations of computational thinking and computer-based model-building activities within the context of a real undergraduate materials science/engineering classroom. He also shared some of his ideas for assessment of student learning under these circumstances.
Blikstein discussed “restructurations,” a term that refers to the multiple ways of representing and encoding specific knowledge, each of which has different cognitive properties.44 As a result, one representation might be more easily learned than another. The canonical example of a restructuration is that multiplying two numbers that are represented as Roman numerals is much more difficult than multiplying the same two numbers represented as Arabic numerals, even though each operation contains identical content.45 The general lesson, Blikstein noted, is that “how we encode knowledge has a deep impact on how difficult it is to do things.”
44 Paulo Blikstein and Uri Wilensky, 2010, “MaterialSim: A Constructionist Agent-based Modeling Approach to Engineering Education.” In M.J. Jacobson and P. Reimann (eds.), Designs for Learning Environments of the Future: International Perspectives from the Learning Sciences. New York: Springer.
45 This example was created by Uri Wilensky and Seymour Papert in recent work.
Blikstein demonstrated how to apply restructuration to understanding ideal gases in physics. In particular, he noted that the laws of ideal gases traditionally entail equations such as the Maxwell-Boltzmann distribution and the relationship between pressure and volume. Blikstein offered an alternative restructuration based on computational thinking that represents a gas as a collection of molecules moving in a gas chamber governed by a simple rule: a molecule will move forward until or unless it bumps into another molecule or wall, at which point it will bounce back. This simple rule applied in this agent-based model results in aggregate behavior of the collection of gas molecules that is identical to that described by the formal gas law equations. Blikstein asserted that the computational representation of the gas laws is simpler and easier to learn than are the equations.
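A minimal agent-based sketch conveys the flavor of this rule. This toy one-dimensional version, with invented names and values, handles only wall collisions and omits molecule-molecule collisions for brevity; it is not Blikstein’s actual model.

```python
# Each molecule obeys one rule: move forward; on hitting a wall, bounce back.
def step(positions, velocities, box=10.0):
    for i in range(len(positions)):
        positions[i] += velocities[i]
        if positions[i] < 0:                      # hit the left wall
            positions[i] = -positions[i]          # reflect the position
            velocities[i] = -velocities[i]        # bounce back
        elif positions[i] > box:                  # hit the right wall
            positions[i] = 2 * box - positions[i]
            velocities[i] = -velocities[i]

# Three molecules with invented starting positions and velocities.
positions = [1.0, 5.0, 9.5]
velocities = [0.7, -0.3, 0.6]
for _ in range(200):
    step(positions, velocities)
```

Because wall bounces only reverse direction, the molecules stay inside the box and their speeds are conserved, mirroring the elastic collisions of the formal gas laws.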
He also described how to reformulate a number of complex concepts from undergraduate-level materials science. Blikstein noted that students in traditional introductory materials science courses encounter new equations at a very rapid rate (one new equation every 150 seconds, not counting intermediate steps in a derivation). Often, many different equations and models are needed to develop an understanding of a particular concept, and these equations must be combined and manipulated to arrive at the final result.
Blikstein argued that an agent-based approach helps students to explore these complex and intertwined concepts more easily, and further that the rules and mechanisms governing the behavior of individual atoms can be used to understand a number of different crystal phenomena in materials science, such as growth, solidification, diffusion, and so on. An example of a relevant mechanism might be for molecules to “look around and see if they are surrounded by different neighbors or equal neighbors” and then cluster or disperse “based on their neighborhood.” Similarly, solidification follows a comparable process except that an “atom in the liquid is kind of going around and looking for solid neighbors where it can attach itself.”
Blikstein also described some of the challenges in assessing and giving objective feedback on open-ended projects with varying levels of complexity and explanatory power. These challenges included the following:
• How do we go about looking at various artifacts and understanding what students are doing?
• How do we assess the relative levels of complexity of the artifacts?
• How do we use assessment to provide feedback to students to improve their models as well as their understanding of concepts?
Blikstein described several tools to facilitate assessment—rubrics and maps, coding patterns over time, and representational shifts.
• Rubrics and maps. Based on the actual code that students generate, maps can be created that track the programming steps and decisions students were making. These maps capture many dimensions of students’ decision making, such as how they define the system, how they define the rules of the system, how they define what the agents are doing, and so on. From these large maps, it is possible to categorize the rules embedded in the system and assess the sophistication of the rules. Evaluators can check each map to see if a student used various affordances of the programming language. For example, is this student using collisions? Is this student using neighborhood checking? Agents moving? Agents seeking agent clusters, walls, or energy? Blikstein argued that the greater the number of affordances appearing in a map, the more sophisticated the underlying model is likely to be, although this measure is not absolute and in many respects depends on the phenomenon being modeled.
• Coding patterns over time. Such patterns document how a student’s code changes over time (e.g., what is added or deleted, what is found each time compilation is attempted). For example, one can count the number of characters in a program submitted for compilation. Some students—typically novice students—exhibit a pattern in which the code is more or less constant for several compilations but then jumps significantly in size. For other students (typically more expert students), there are fewer large increases in code size—code size increases more or less linearly over time. Blikstein asserted that such knowledge can be exploited to help tailor the most effective way to give feedback to different kinds of students.
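The size-over-compilations measure is easy to sketch: given the character count of the program at each compilation attempt, flag the large jumps. The threshold and the two sample trajectories below are invented for illustration.

```python
# Flag compilations where code size grew by more than `threshold` characters.
def size_jumps(sizes, threshold=50):
    return [i for i in range(1, len(sizes))
            if sizes[i] - sizes[i - 1] > threshold]

# Invented trajectories of code size across compilation attempts.
novice = [200, 201, 203, 203, 390, 391, 393]   # plateau, then one big jump
expert = [200, 240, 275, 310, 350, 385, 420]   # steady, near-linear growth
```

In this sketch, the novice pattern shows a single large jump and the expert pattern shows none, matching the contrast Blikstein described.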
• Representational shifts. Changes in how a student represents or depicts physical phenomena can indicate differences in the level of sophistication of his or her understanding. For example, Blikstein compared two groups of students, one that had been exposed to computational modeling and one that had not. Each group was asked to sketch the process involved in a scientific phenomenon different from the one they were modeling, such as the impact of a change in temperature. The students with computational modeling experience drew and described a mechanism showing the behavior of the atoms as the temperature changed. Students who were not exposed to the activity instead drew a graphical curve showing the aggregate behavior of the atoms as the temperature changed.
Christina Schwarz, an associate professor in the College of Education at Michigan State University, described her work with elementary and middle school students using scientific modeling and practices. The MoDeLS (Modeling Designs for Learning Science) project works
to involve students in science through the use, revision, and creation of models. Although not explicitly focused on computational models, some of her work, Schwarz believes, may apply to the ongoing dialog about computational thinking.
In the context of her work, a model is an abstract, simplified representation of some phenomenon which could include but is not limited to computational representations. Models also include physical models and diagrammatic models. Modeling involves constructing a representation that embodies aspects of theory or evidence; evaluating that representation or testing it against empirical evidence and scientific theory; using it to illustrate, predict, and explain; and revising the representation.
Schwarz and her colleagues believe that the underlying concepts of modeling are powerful for sense-making and for communication in science. She further noted an overlap between modeling practice and computational thinking, particularly the ideas of abstracting and decomposing systems, testing the model against actual data, and so on.
Schwarz argued that models can make important aspects of science accessible by helping students to understand invisible processes, mechanisms, and components in phenomena. Models promote both subject-matter and epistemological understanding, and they develop systems thinking skills. Most importantly, models can generate predictions and explanations for scientific phenomena.
Schwarz walked through a generic MoDeLS curriculum sequence that would be given to students. The first step is for the researchers to provide some sort of anchoring phenomenon in a scientific context. For example, a fifth grade unit starts with a question like “Would you drink liquid that collected in a solar still?” and continues, “You can’t test it, you don’t want to drink it, because you might get sick, so you have to design an initial model that you can use to begin thinking through what is going on.”
The unit then provides some discussion about the nature and purpose of models. Such dialog is essential to abstracting knowledge for transfer to other kinds of systems and contexts, and to motivate and support the kinds of skills and habits of mind essential to computational thinking.
The third element of the unit is an investigation of the subject phenomenon through data gathering and students’ testing of their models. Students evaluate their models and discuss the criteria for evaluation. Evaluation is thus another strategy for teasing out modeling practice and scientific thinking.
Last, the unit introduces scientific ideas that students can use to revise their models again. Here, students often use visual, diagrammatic models and computer simulations in different ways. Students may look at simulations and then use some of those ideas from the simulations in their diagrammatic models. Model design, revision, and analysis occur in
the context of a small scientific learning community (i.e., their classmates). The students debate data and concepts, as well as evaluate and peer-review each other’s work. Finally, students develop a consensus model at the end of the unit and explore applying the model developed to other contexts that they care about.
Schwarz noted that different science disciplines use different aspects of modeling to explore scientific phenomena. For example, flow diagrams and process diagrams might be most appropriate for modeling relationships between components of a biological system. Most of the MoDeLS effort focuses on various aspects of physical science, but the group is looking at exploring modeling in other areas.
Schwarz uses a four-level learning progression to guide the interpretation of student activities. This progression is continually revised and improved based on assessment outcomes.
• Level I focuses on students’ reflections on their existing practices of modeling around the idea that children often begin modeling practice by drawing literal illustrations but have yet to really grasp the purpose of or use for models.
• Level II characterizes student use of models and shows that students are constructing and using models to illustrate and explain to an audience how phenomena occur. Although students at Level II are still somewhat literal, they are moving closer to the use of abstraction.
• Level III is more sophisticated still, as students move farther along the literal-to-abstract scale, closer to the abstract end.
• Level IV students are constructing and using models spontaneously in a range of domains to help their thinking and problem solving. For example, students might be prompted to consider, before they test their model, how the world would behave. Schwarz argued that this fourth level is most similar to the types of modeling a computer scientist would do.
Schwarz also commented on assessment. Specifically, she noted that her assessments seek evidence in student work of engagement in modeling:
• Around different content knowledge for which students did not receive explicit instruction—to determine what aspects of modeling practices might be used across contexts.
• By applying their models to familiar and less familiar contexts. Schwarz described an example in which a student noted that she was actually applying the condensation and evaporation model to simple experiences at home like boiling water.
• Mapping between representations and the real world, as illustrated by students’ application of their models in a specific context.
• Evaluating and revising their models for items like relevancy or saliency, evidentiary support, communicative power, and so on.
Schwarz and her colleagues use a variety of tools to obtain such evidence. Although there is some use of written pre-test and post-test items involving scientific modeling, they also use reflective interviews with students and in-person or videotaped observations of in-class student interactions. These qualitative instruments are designed to probe both content that was explicitly taught and content that was not, in order to examine transfer to other disciplines and the time evolution of student modeling practices and thinking.
Nevertheless, she was aware that their assessment efforts had a number of limitations. For example, many young students often see modeling and scientific thinking as a school-only activity that is unrelated to daily life rather than thinking of models as tools useful for their own purposes. Although they understand in principle the notion of evaluating each other’s models according to relevant objective criteria, in practice they sometimes fail to do so in the classroom environment, instead deferring to the classmate they like better or the classmate who is the loudest.
Students also sometimes focus on the external audience when communicating through a model; that is, they may formulate their comments and responses based on what they think their teachers want to hear and what they think are “correct” answers, rather than what they themselves think.
Last, Schwarz noted that pedagogical constraints often result from the curricular and learning approaches determined by the various schools. As an example, Schwarz explained that in one school, the state-wide curriculum mandated that before fifth grade, science teachers are not to discuss phenomena at the cellular or atomic level because they are invisible. In response, Schwarz and her colleagues developed a special unit on evaporation and condensation that was actually an attempt to bridge the project’s elementary learning goals to a particular state guideline prohibiting discussion of atoms.
Mike Clancy, from the Department of Computer Science at the University of California, Berkeley, addressed the topic of assessment for introductory programming classes. His top-level goals for students could be characterized as knowing when given aspects of computational thinking
are applicable, when they are not applicable, and how these aspects are applied when they are applicable.
Clancy described two complementary approaches that are useful in assessment and evaluation. The first approach is based on case studies. A case study is an expert solution to a problem that is accompanied by a narrative of how that solution came to be. The expert, who may be a faculty member or a teaching assistant, provides a solution that addresses questions like why one approach to solving the problem was chosen over another and how problems originating in the first implementation of a solution were fixed (debugged).
Case studies are intended to make the expert’s thinking visible to expose his or her design and development decisions. They demonstrate how abstract concepts are manifest in specific situations. They encourage reflection and self-monitoring, and they support collaborative learning and emphasize links among various problem solutions.
A typical problem might be to find the number of days spanned by two dates in the same year. (This problem arises in the third week of Berkeley’s introductory programming course for non-majors, at which point they have been exposed to conditional programming structures such as “ifs” and how to deal with data but have not yet encountered recursion.)
One approach splits the solution into three situations—those in which the dates occur in the same month, those in which the dates occur in consecutive months, and those in which the dates are further apart. The first two situations are relatively easy to address, but the third is harder. Specifically, the solution for the third case depends on whether the months involved (including the intervening months, if any) have 28, 29, 30, or 31 days. Sometimes it is possible to kludge a solution when the dates are about two months apart, but if they are any further apart, a more systematic approach is needed.
At this point, the expert is faced with the question of crafting a solution to the third case that builds on the work already invested in crafting a solution to the first two cases. If one realizes that the day-span computation is essentially a subtraction of one date from another, a sensible approach is to change the representation of the dates involved into things that are easier to subtract—specifically, the date in month-day format is transformed into the number of days past January 1 for the year.
Using this idea, that is, finding a uniform representation for dates, students are then asked to address a number of related problems, such as computing the difference between two heights, finding the number of Saturdays spanned by two dates, and finding the number of days between dates in different centuries. In practice, their task is to understand the original solution (for the problem of computing the number of
days between two dates in the same year) well enough so that they can modify the approach accordingly.
This case study also includes a debugging exercise; debugging is of course another key aspect of computational thinking. Imagine that the day-span program has been accidentally modified (e.g., one word is changed). Given the change in the output of the program as a starting point, students are asked to figure out what was changed and how to fix the problem.
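As a concrete sketch of the case study, the uniform-representation idea and the kind of one-token modification the debugging exercise imagines might look like the following. The workshop does not specify the course’s programming language, so Python is used here, and all names are illustrative; month lengths assume a non-leap year.

```python
# Sketch of the day-span case study: convert each date to a uniform
# representation (days past January 1), then subtract.
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def day_of_year(month, day):
    """Days past January 1 for a (month, day) pair in a non-leap year."""
    return sum(DAYS_IN_MONTH[:month - 1]) + day - 1

def day_span(m1, d1, m2, d2):
    """Number of days spanned by two dates in the same year."""
    return abs(day_of_year(m2, d2) - day_of_year(m1, d1))

# A hypothetical one-token bug of the kind the debugging exercise
# describes: slicing with `month` instead of `month - 1` silently adds
# the current month's length into the running total.
def buggy_day_of_year(month, day):
    return sum(DAYS_IN_MONTH[:month]) + day - 1

print(day_span(1, 1, 3, 1))  # 59: Jan 1 to Mar 1 in a non-leap year
```

With the buggy version, a span from January 1 to February 1 would be reported as 28 rather than 31, while some inputs (such as the January-to-March case, where both months are 31 days long) happen to mask the bug entirely—which is part of what makes working backward from changed output to the altered token instructive.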
The second approach used for assessment and evaluation involves lab-centric instruction, which emphasizes hands-on lab hours supervised by a teaching assistant rather than lecture and discussion. This instruction entails a variety of traditional programming tasks, such as writing, modifying, and analyzing a program. But because there is more lab time than in most lecture/discussion courses, the course also has room for a number of embedded assessment activities. For example, a lab period often starts with a quiz, and it provides opportunities for self-tests. “Gated collaborations” enable instructors to pose a question to students; once a given student submits an answer, s/he sees the answers of his or her lab mates.
In this environment, lab instructors can monitor most of what the students are doing and have a window into much of their thinking and not just their finished work. Thus, lab instructors can notice confusion when it occurs and address it immediately to provide targeted tutoring. The result is that instructors can nip confusion and misconceptions in the bud rather than having to wait for them to be revealed in some later venue.
Derek Briggs of the School of Education at the University of Colorado, Boulder, began by suggesting several questions that he believed should guide any assessment of computational thinking. His first question is, What is being assessed? A prerequisite for assessment is a common understanding of the important constructs and concepts of the topic being assessed. In the case of computational thinking, Briggs noted a lack of consensus on its essential elements and commented that even if one is not willing to pin down a thorough definition of what constitutes computational thinking, there must be some common ground on the topic. What are the important elements?
Second, he argued for clarity about why the topic is being assessed. Briggs identified several possible reasons for assessing student understanding of a subject:
• Evaluating a program. If a pedagogical activity purports to promote
student learning, the students involved in the activity must be assessed to see if the claimed learning indeed took place.
• Grading of students. In graded courses, a student’s understanding of a topic often relates to the grade s/he receives.
• Diagnosing a student’s understanding of a subject in detail. Pinpointing a student’s misunderstanding of a particular subject-matter point provides feedback to a teacher about how to direct his or her pedagogical efforts to address that particular misunderstanding. For this particular application, multiple concepts of learning progression are helpful. A learning progression can be regarded in several ways: as an ordered description of a specific student’s understanding of a given concept as that student learns more about it; as a description of successively more sophisticated ways of understanding or reasoning about a concept in a content domain; or as an ordered description of a typical student’s understanding of a given concept as students learn more about it.
• Developing a better intellectual understanding of a subject. Attempting to assess a student’s understanding of a subject sometimes reveals that the expert’s own understanding is incomplete; through the act of developing an instrument, and crafting questions intended to elicit information about the subject from students, the expert gains insight into what he or she really meant.
Third, an instrument for the assessment must be appropriate to the purpose of the assessment. For example, if the purpose of the assessment is to grade students, an instrument may need only to record the percentage of correct answers provided by a student. However, if the purpose of the assessment is to diagnose a student’s misunderstandings, the instrument must be constructed in a way that sheds light on the specific nature of those misunderstandings. Briggs also noted that diagnosing student misunderstandings does not necessarily entail open-ended interactions with students—carefully designed multiple-choice items can provide diagnostic information that is as meaningful as or more meaningful than that obtained through open-ended interviews.
Finally, Briggs argued for the importance of validating an instrument, contrasting the notion of validity with the notion of reliability. A valid instrument is one that accurately reflects a student’s knowledge of the specific concepts of interest (i.e., what the investigator really wishes to assess), whereas reliability is concerned with the consistency with which an instrument produces a given measure. He further noted that low reliability of an instrument is not necessarily problematic for formative evaluations that inform in-class pedagogy in real time or for group-level comparisons.
Cathy Lachapelle, director of research and evaluation for the Engineering is Elementary (EiE) project at the Museum of Science, discussed her assessment and evaluation experiences with that project. EiE is a curriculum development and improvement effort that develops engineering guides and activities for children in grades 1-5.
Assessments of EiE activities are focused on what students learn and measure specific learning objectives.46 Lachapelle noted that there is no existing standard “yardstick” against which to assess student learning about engineering. Thus, assessment efforts compare progress toward learning objectives in an EiE activity group to progress among students in a control group.
Lachapelle suggested that a variety of methods are available for assessing student learning, depending on the purpose of the assessment:
• Class observation that focuses on collecting qualitative data. Such data include information obtained from helping the teacher implement EiE, interviewing students to try to understand their attitudes with respect to the learning objectives, and observing how they perform against the learning objectives. To illustrate, Lachapelle noted that one of the learning objectives is to be able to reason from a model and understand that a model represents something in the real world. During class observation, assessors talk to the teacher and students to see if the students are grasping the concept. (They might also point out different ways to structure the lessons so that students better understand the learning objectives.) A degree of uniformity in data collection is obtained by using the same standards and criteria in each observation.
• Embedded assessments, which are often used by teachers to understand the pedagogical impact they are having on students as they go along. Embedded assessment can be as simple as examining individual student performance on a particular worksheet, so that a teacher can better understand which students need more help, whether he or she should give clearer instructions, and so on.
• Paper-and-pencil assessments, which are very difficult to construct but provide an excellent source of feedback. EiE typically uses these paper-and-pencil assessments for summative evaluation. A great deal of work is involved in constructing assessments, testing and piloting them, checking them for reliability, and then using them with hundreds of students. For example, developing multiple-choice questions that yield insight into student thinking is sometimes problematic. Lachapelle and her colleagues often ask students how they would answer a question, and unusual or incorrect student answers become alternative choices for answering the question. For example, Lachapelle said, “We asked kids what is the function of leaves in a plant and the kids said, to make food. We would say well why did you choose that answer? And they said because they make salad. You have learned that things are not always as they might seem or as you might expect.” Ultimately, they discarded that particular question.

46 Not all investigation of student learning requires such objectives—specifically, some research is useful for understanding what students know in general and what they can do on average.
• Performance assessments, which can be used either by teachers for their own understanding of what their students are learning (in formative evaluation) or by the curriculum development team as a summative evaluation of what students learned. This type of testing is also time- and resource-intensive because the assessment must be administered and scored. EiE uses this type of assessment in the final project design exercise.
Speaking more broadly, Lachapelle addressed formative and summative evaluations in the EiE project. All work products require regular evaluation, including teacher guides, student exercises and activities, the learning goals, and teacher professional development materials and activities.
As is usually the case, formative evaluation is used to inform the development and improvement of products and processes. In the EiE context, formative evaluations seek evidence of growth in students’ understanding and skills as stated in EiE learning objectives, determine the age-appropriateness of lessons and activities, and examine the ease of use of lessons and materials. Formative evaluation for EiE usually relies on feedback from teachers and students. Therefore, it is critical that researchers keep the lines of communication open and consider the feedback they receive in light of the project’s established evaluation criteria. Lachapelle explained that if teachers reported that an activity was great but too troublesome to clean up afterward, and the standard for that activity was that a teacher be able to manage it the following year without any support staff, the activity would be revised accordingly.
The purpose of summative evaluation is to provide evidence to EiE stakeholders, including funders, school districts, teachers, and parents, that implementation of specific EiE activity is worthwhile. Robert Panoff was particularly struck by this concept of “being worthwhile” and argued that this concept is a key factor in terms of scaling, adoptability, and motivation for using the materials or the exercises. Lachapelle stated that one criterion for this type of evaluation is to show improved learning of target concepts among students as compared to a control group of students. The
control group consists of students in a comparable classroom taught the same science and engineering topics but without EiE curriculum materials and tools. In the ideal scenario, EiE has a large pool of teachers, some of whom are admitted to the EiE project while the rest serve as control groups. This process does not always work because of constraints of funding and time. Another example criterion is that teachers express increased efficacy and interest in teaching engineering to their students.
Randomized, controlled studies with external evaluators are the preferred method for evaluating and comparing efforts in education, said Lachapelle. NSF, for example, prefers this approach when seeking summative assessments in projects it funds. Unfortunately, this type of assessment is very expensive to execute because a fairly large number of students is usually needed in order to randomize whole classrooms into different testing groups. External evaluators also add cost and come with trade-offs of their own. Although external evaluators are likely to be more objective in their assessments, they do not have the advantage of an ongoing relationship with the teachers, administrators, and students whom they are engaging and thus may miss subtleties that more familiar evaluators might observe.
In her discussion, Lachapelle cautioned that assessments and evaluations of computational thinking activities and materials require clearly specified learning objectives, which in turn require some community consensus regarding the content of computational thinking—that is, what is it that the community wants children at various ages to know (from early elementary school to college)? In the EiE context, some learning objectives include being able to identify a process, to explain what a process is in an engineering context, and to explain why the order of steps in a process is important.
She also argued that the learning objectives should align with psychological and developmental learning progressions, since doing so provides some guidance over time as to where students should be at each stage. Thus, learning objectives are and should be the object of research and design. She noted that EiE does extensive literature searches and local interviews with kids before beginning the design of each of its units in order to learn more about what kids know. For example, for a unit on sinking and floating, developers would do a literature search and then interview local students by asking them things like, “Do you know what it means to float?,” “Do you understand why things float?,” and so on.
Finally, Lachapelle commented that their assessments are also designed to address student attitudes toward science and engineering. Broadly speaking, these assessments indicate that girls tend to be interested in engineering when it is framed as helping to improve people’s lives, and boys tend to be interested in engineering when it is framed in terms of constructing engineering artifacts.