IDR Team Summary 5
Why are human-designed biological circuits and devices fragile and inaccurate relative to their natural counterparts?
Three characteristic features of natural biological systems are robustness, adaptability, and redundancy. Natural systems are remarkably resistant to failure induced by changes in component abundance or activity (robustness), yet they maintain an underlying flexibility required to allow them to adjust to new environments (adaptability and redundancy). By contrast, many synthetic systems lack robustness, especially when compared to their natural counterparts that perform a similar task. Adaptability and redundancy are typically not considered. Two examples of synthetic systems that lack certain aspects of robustness are:
Ajo-Franklin et al. (see reading references) designed and characterized an elegant memory device in yeast that is based on a synthetic transcriptional cascade. This device does exhibit memory, but is sensitive to dilution of the autoactivator component during growth and requires “tuning” of growth rate by changes in media to maintain bistability.
Elowitz and Leibler (see reading references) constructed an oscillator based on a transcriptional cascade and found that only a fraction of cells exhibited oscillations; additionally, they observed significant variation in the period and amplitude between cells in a population. In comparison, the transcriptional oscillations associated with the circadian clock are far more robust.
What can we learn from comparisons of designed systems and their natural counterparts?
Comparison of synthetic systems with those of their natural counterparts can be extremely informative—such studies sometimes provide insights that can be used to improve the function and design of engineered systems. Additionally, these comparisons can reveal the presence of previously unappreciated complexity and phenomena. Such an example comes from the Elowitz and Leibler study mentioned above, where these authors recognized that the oscillator was “noisy” and speculated that such noise might arise from stochastic fluctuations in transcription in cells. This observation was the motivation for the development of what turned out to be a highly influential method for quantifying stochastic fluctuations in gene expression, and the demonstration that transcription in E. coli is indeed noisy.
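The quantification idea can be illustrated with a toy sketch. In dual-reporter experiments of the kind that grew out of this line of work, two identically regulated reporters are measured in each cell: fluctuations that differ between the two reporters are counted as intrinsic noise, while fluctuations shared by both are extrinsic. The Python simulation below is purely illustrative; the distributions and parameters are assumptions, not the published protocol.

```python
import random

def noise_decomposition(c, y):
    """Split cell-to-cell variability in two identically regulated
    reporters (c, y) into intrinsic and extrinsic components,
    using dual-reporter-style estimators."""
    n = len(c)
    mc = sum(c) / n
    my = sum(y) / n
    intrinsic = sum((ci - yi) ** 2 for ci, yi in zip(c, y)) / (2 * n * mc * my)
    extrinsic = (sum(ci * yi for ci, yi in zip(c, y)) / n - mc * my) / (mc * my)
    return intrinsic, extrinsic

# Simulated population (parameters invented): each cell has a shared
# (extrinsic) factor plus independent (intrinsic) fluctuations per reporter.
random.seed(0)
c_vals, y_vals = [], []
for _ in range(10000):
    shared = random.gauss(1.0, 0.2)              # affects both reporters
    c_vals.append(shared * random.gauss(1.0, 0.1))
    y_vals.append(shared * random.gauss(1.0, 0.1))

eta_int_sq, eta_ext_sq = noise_decomposition(c_vals, y_vals)
# the estimates recover roughly the simulated variances (~0.01 and ~0.04)
```

In this toy setting the decomposition recovers the two noise sources separately, which is exactly what makes the approach useful for asking whether an oscillator's cell-to-cell variation comes from stochastic gene expression or from differences between cells.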
Can we harness the power of evolution to shape and design more robust systems?
The forces of evolution shape natural systems. In the process of natural selection, a population of cells or organisms effectively explores parameter space in a manner that allows for the discovery of biological circuits that are robust, adaptable and redundant. In contrast, many efforts in synthetic biology are engineering-based and exploit the modular nature of biology to assemble functioning circuits from sets of well-characterized component parts. It will be interesting to see if it is possible to use experimental evolution to discover or tune synthetic circuits that exhibit robustness, adaptability, and the redundancy seen in natural systems.
What are “design principles” observed in natural circuits that have not been implemented in synthetic circuits and that may increase the reliability and robustness of engineered circuits?
How can these new design principles be most effectively implemented into human-designed circuits? Are new tools required?
Are new characterization methods and strategies required in order to measure properties such as robustness and adaptability?
How can evolution be effectively integrated as a design principle into synthetic circuits?
Ajo-Franklin CM, Drubin DA, Eskin JA, Gee E, Landgraf D, Phillips I, Silver PA. Rational design of memory in eukaryotic cells. Genes Dev 2007;21:2271. Available at http://genesdev.cshlp.org/content/21/18/2271.full. Accessed 28 July 2009.
Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature 2000;403:335. Available at http://www.nature.com/nature/journal/v403/n6767/full/403335a0.html. Accessed 28 July 2009.
IDR TEAM MEMBERS
Yaakov Benenson, Harvard University
Barry Canton, Ginkgo BioWorks
Peter Carr, Massachusetts Institute of Technology
Tanguy Chouard, Nature
William Foster, Weill-Cornell Medical School & The University of Houston
Henry Hess, Columbia University
Ching-Hwa Kiang, Rice University
Eric Klavins, University of Washington
Richard Murray, California Institute of Technology
Maria Pellegrini, W.M. Keck Foundation
Jeffery Schloss, National Human Genome Research Institute, NIH
Georg Seelig, University of Washington
David Sprinzak, California Institute of Technology
Nancy Sung, Burroughs Wellcome Fund
Stephanie W. Schupska, University of Georgia
IDR TEAM SUMMARY
By Stephanie W. Schupska, Graduate Science Writing Student, University of Georgia
“The inherent complexity of biological systems defies reliable engineering today. Engineering needs iteration and debugging. It is too early for definitive comparisons to nature, because we know too little about nature and how natural systems are designed. Our tools for synthetic systems are still rudimentary.” This was the conclusion an Interdisciplinary Research (IDR) team came to after intense discussion, coupled with creative tension and brainstorming, to answer the question: Why are human-designed biological circuits and devices fragile and inaccurate relative to their natural counterparts?
In addition to thinking about fragility, the IDR team also spent its time at the 2009 National Academies Keck Futures Initiative Conference comparing human-designed devices with the three characteristic features of natural biological systems: robustness, adaptability, and redundancy. Natural systems resist failure induced by changes in component abundance or activity (robustness). At the same time, they maintain the flexibility they need to adjust to new environments (adaptability and redundancy). In contrast, many synthetic systems lack robustness, especially when compared to their natural counterparts that perform a similar task. Typically, adaptability and redundancy are not considered.
But the meeting of the minds, which included professional engineers, media-lab scientists, physicists, synthetic biologists, and others, didn't stop with its conclusion that it's too early to compare synthetic systems with nature. The team went into detail about three specific areas likely to drive the field of synthetic biology in the future.
First, they discussed the trade-offs that could lead to fragility in human-designed circuits, fragility they said comes in part from unplanned interactions. This boiled down to the question of whether the engineering approach itself, with its rigidity, is the source of fragility.
Second, they considered how to systematically (and more efficiently) construct robust circuits. They decided that a wind-tunnel-like testing ground and a rapid-comparisons approach were the best routes to take.
Finally, they decided that the study of the failure modes of existing systems was a good way to derive design rules.
The team also discussed the possibility of either holding a contest that would specifically address synthetic biology or funding a new section of the iGEM competition. iGEM, the International Genetically Engineered Machine competition, is widely considered the premier student synthetic biology competition.
Source of Fragility
Synthetic biology is still in its earliest stages, much like the first bulky transistor compared with the trillions of interconnected transistors that underpin today's Internet. Even now, many scientists are unsure how the Internet as a whole really works. In the same way, we've only just begun to touch on what can be achieved through synthetic biology. Because there is still a deep gap between what can be envisioned and what can be accomplished, getting human-made circuits to work as well as natural ones continues to be a problem. Whether that is because the engineering methods for new circuits are still unreliable, or because the approaches themselves are fundamentally flawed, remains to be seen.
For example, is modularity (a concept found extensively in complex engineered systems) the problem, or is it the specific ways that engineers want to introduce orderly structure into their designs? While biology exhibits a variety of forms of modularity, the discussion in the meeting focused more on whether the sort of modularity that engineers typically introduce in their designs might not be appropriate in synthetic biological systems.
When it comes to engineering new circuits, suboptimal design often results in decreased efficiency and performance. But does the modularity used in these designs lead to increased fragility of the system? The group's answer was no: there is no obvious change in fragility as a result of modularity. They did consider whether more interfaces mean more fragility, but found that question difficult to answer, largely because it is unknown what exactly confers robustness on natural systems.
The following options were considered in trying to understand robustness and what confers robustness on natural systems:
Flexibility/noisiness of individual parts
Nature is messy, but it works. Because nature often exploits noise to get the job done, noise must be considered in synthetic biology, not necessarily as a frustrating, undesired element but as an essential (or useful) one.
When it comes to making synthesized biological circuits more accurate and less fragile, the IDR team decided a more detailed analysis of the engineering approach is needed. But, first, let’s take another look at the Internet example, and how a future system could work.
Scientists in the late 1940s brought us the first transistor, a messy-looking device that somehow worked. By 1969, descendants of these tiny, interconnected devices brought us the ARPANET, the beginning of the incredibly robust and useful Internet. Its path is a story that engineers constantly look to. The question they ask is, “How are we going to pull off a similar engineering feat in the future?” The answer, many believe, lies in synthetic biology.
Right now, it’s nearly impossible to take unreliable biological pieces and create extremely reliable systems from them, especially when scientists don’t yet understand the biological pieces they are working with at the level of the first transistor. Understanding is more along the lines of Benjamin Franklin looking at a sparking Leyden jar, trying to figure out what causes electricity.
And they want to add a wind tunnel to the sparking jar. The goal is to understand what can mess up a circuit by examining each circuit in a principled, one-at-a-time way, perhaps in something like a wind tunnel.
Wind tunnels are commonly used to test aircraft, automobiles and other aerodynamic structures for both their strengths and their flaws. A wind tunnel in synthetic biology is still more of a concept than a real testing ground, but researchers are optimistic.
A wind tunnel would allow them to develop carefully characterized test environments for measuring the functionality of cell-free extracts, which are liquids that contain cell parts but no intact cells, and minimal cells, which are artificial cells that contain the smallest number of parts a cell needs to exist. They could systematically test designs in the presence of known troublemakers, move on to progressively more complex systems, and redesign based on an understanding of what went wrong.
They could also test the circuit they've built inside a cell and, if it works, stop there. If not, they could update the environment around the cell to add in additional useful effects.
An example of this is in trying to develop a stable oscillator for mammalian cells. First, one would start with a few designs for oscillators. Next, these would be tested in a cell-free extract and then in increasingly complex environments until something breaks or meets the researchers’ specifications. Finally, the circuit would be placed in an actual cell and tested. The “wind tunnel” test environment would be adjusted as needed.
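The staged workflow above can be sketched in skeletal form. Everything in this sketch is hypothetical: the environment names, the candidate designs, and the pass/fail test are invented stand-ins for real assays.

```python
# Hypothetical sketch of the staged "wind tunnel" workflow: each candidate
# design is tested in increasingly complex environments and is rejected at
# the first stage where it misses its specification.
ENVIRONMENTS = ["cell-free extract", "minimal cell", "live cell"]

def meets_spec(design, environment):
    # Placeholder for a real measurement: here a design "works" in an
    # environment only if that environment is among its listed tolerances.
    return environment in design["tolerates"]

def wind_tunnel(designs):
    survivors = []
    for design in designs:
        # Find the first environment (if any) in which the design breaks.
        failed_at = next((env for env in ENVIRONMENTS
                          if not meets_spec(design, env)), None)
        if failed_at is None:
            survivors.append(design["name"])
        else:
            print(design["name"], "failed in:", failed_at)
    return survivors

candidates = [
    {"name": "oscillator-A", "tolerates": {"cell-free extract"}},
    {"name": "oscillator-B", "tolerates": set(ENVIRONMENTS)},
]
print("passed all stages:", wind_tunnel(candidates))
```

The value of the staging is diagnostic: a design that works in extract but dies in a minimal cell points to a different failure mode than one that never oscillates at all, which is what guides the redesign step.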
Unlike the wind tunnel approach that looks at one circuit at a time, rapid comparisons would allow researchers to compare different components to see which work better. Using the stable oscillator for mammalian cells as an example, rapid comparison would look for combinations of circuit elements for oscillators. These could include activators, repressors, and combinatorial promoters. Circuits would then be rapidly introduced into mammalian cells in a way that allows comparisons (for example, control of integration loci). The oscillators produced would go through a high-throughput screen. A smaller number of the best oscillators would be selected and then analyzed in detail.
Oscillators aside, the process would work like this: Researchers would generate a component library and architectural alternatives for whatever function they desire. These would be a group of component properties and preselected architectures. They would construct all the possible combinatorial circuits from the library and architectures, introduce these into cells and check for function. They would then compare not only the performance of the circuit, but also look for robustness and other desired characteristics. Finally, they would analyze and explain the winning circuits.
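As a sketch, the combinatorial construction step amounts to enumerating every circuit from the component library and keeping the top scorers. The component names and the scoring function below are invented placeholders for real parts and a real screen.

```python
import itertools

# Hypothetical component library; names are invented for illustration.
activators = ["actA", "actB"]
repressors = ["repX", "repY", "repZ"]
promoters = ["pComb1", "pComb2"]

def score(circuit):
    # Stand-in for a high-throughput screen; a real score would come
    # from measured circuit performance in cells.
    return hash(circuit) % 100  # arbitrary placeholder metric

# Construct all combinatorial circuits (one part from each class) ...
library = list(itertools.product(activators, repressors, promoters))

# ... screen them, and keep a small number of winners for detailed analysis.
ranked = sorted(library, key=score, reverse=True)
top_candidates = ranked[:3]
```

With 2 activators, 3 repressors, and 2 promoters this yields 12 circuits; real libraries would be larger, which is why the high-throughput screen and the final down-selection matter.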
With directed evolution, outside forces shape the circuit, and scientists usually don't know why the resulting system works. There is no control over, or full understanding of, the final product, something that engineers (and many other scientists) find to be a frustrating challenge. The rapid-comparisons approach would simplify the problem of working out what directed evolution did to the circuit.
The whole idea is wrapped in two answerable questions: Why is the winner robust, and why were the losers not?
The group wanted to understand what a failure is in biological engineering by first defining success. “It's not good enough if it gets me a publication,” they said. The system has to meet a set of performance metrics.
So, what are performance metrics in synthetic biology? They include duration of operation, homogeneity, and robustness to external variations. Failure is when one or more of these are not met. Finding out why is the hard part.
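That definition of failure can be written down as a minimal sketch; the metric names, units, and thresholds below are invented for illustration.

```python
# Hypothetical specification: a system fails if any performance metric
# falls outside its threshold. All names and numbers are made up.
SPEC = {
    "duration_hours": lambda v: v >= 48,   # operates for at least 48 h
    "homogeneity": lambda v: v >= 0.9,     # >= 90% of cells behave in spec
    "robustness": lambda v: v >= 0.8,      # tolerates external variation
}

def failed_metrics(measurements):
    """Return the names of every metric the system did not meet."""
    return [name for name, within_spec in SPEC.items()
            if not within_spec(measurements[name])]

run = {"duration_hours": 72, "homogeneity": 0.95, "robustness": 0.6}
print(failed_metrics(run))  # names the unmet metrics; explaining why is the hard part
```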
Enter the systematic approach to failure analysis. Two questions need answering: how long will a system operate before it fails, and why does it fail?
This mode of experimentation requires researchers to take a subset of engineered groups, run their systems until they break and examine why they
break. Part of this approach’s goal is to figure out what the cell’s mechanisms are for breaking the systems that researchers are trying to build. The next step would be to redesign some of the systems and increase their genetic stability.
The failure mode frameworks they suggested involve performance timescales, metabolic load, noise, intrinsic versus extrinsic versus crosstalk failures and dependence on system size.
They suggest four types of experiments to test failure mode:
Studies of genetic stability as a function of load.
Comparison between analogous natural and synthetic systems.
Case-by-case analysis of extrinsic interference.
Host optimization to improve robustness.
In 2004, the DARPA (Defense Advanced Research Projects Agency) Grand Challenge dared contestants to build a vehicle that could make it across the Mojave Desert. The first year, everyone failed. The second year, five vehicles made it across. The third year, teams faced an even more complex problem: to drive an unmanned vehicle 60 miles through an urban area while obeying all traffic signals. The prize was $2 million. Six teams finished.
A biological challenge called iGEM already exists. For the contest, student teams work with a kit of biological parts, plus new parts they design, to build biological systems and operate them in living cells. Unfortunately, the competition doesn't take aim at robustness, an area of research that is sorely needed in synthetic biology. The group proposed a contest based on robustness, either incorporated as part of iGEM or introduced as a new grand challenge.
It would involve, for example, students making an oscillator, placing it in a plasmid, and then testing that oscillator in 10 different strains of E. coli to see in which it works best. The challenge, like the DARPA challenges, would be fine-tuned from year to year as progress is made.