To begin developing an ethics framework for human activities involving risk and uncertainty, a series of questions needs to be asked to help define the objectives and parameters: What are the risks involved, what is their likelihood and magnitude, and who bears them? What are the potential benefits of the targeted activities, to whom will the benefits accrue, and what values are to be safeguarded in the process? Who are the stakeholders involved, to whom should the framework apply, and under what conditions?
As outlined in the 2001 Institute of Medicine report Safe Passage, the new challenges that will be faced in long duration and exploration spaceflight necessitate a reexamination of the ethics principles for these missions:
Current ethical standards for clinical research and practice with astronauts were developed in an era of short space missions when repeat missions were the norm and a return to Earth within days was possible. In future missions beyond Earth orbit, however, a diverse group of astronauts will travel to unexplored destinations for prolonged periods of time. Contact with Earth will be delayed, and a rapid return will be impossible. Long-duration missions beyond Earth orbit, space colony habitation, or interplanetary travel will create special circumstances for which ethical standards developed for terrestrial medical care and research may be inadequate for astronauts. These ethical standards may require reevaluation. (IOM, 2001, p. 173)
Astronauts participating in long duration and exploration spaceflights will face a number of significant risks, including the impairment of
health, as elaborated in previous chapters. Other individuals and groups are also potentially affected by the health consequences of these missions, including astronauts’ families, astronauts hoping to participate in future missions, and the National Aeronautics and Space Administration (NASA) as the institution responsible for the human spaceflight program. NASA as a governmental entity has institutional responsibilities to manage risk effectively and derive maximum benefit from public expenditures. As part of that responsibility, NASA confronts the potential threats to ongoing and future missions that could result from emergent health problems in crew members. In turn, the halting or slowing of space exploration affects the space program, its near- and longer-term goals, and those involved in the program, including astronauts preparing for future missions.
More broadly still, there are many important ways that the American public may be affected as stakeholders by the potential health consequences of long duration and exploration space missions. One related consideration concerns the ways that space exploration resonates with cultural and national values. Space missions, particularly those that are novel, are often the focus of national attention, and astronauts are perhaps the most visible embodiments of the cultural and national significance of space exploration. Successful missions, with the safe and healthy return of astronauts, represent great achievements and are a source of national pride. Despite risks to human life and health during exploration, many nations and their people have continued to explore. The growth of the commercial human space industry speaks to this imperative. Additionally, despite the well-known risks of spaceflight, including loss of life during both the Apollo and Space Shuttle programs, the recent astronaut applicant pool was one of the largest in history, with 8 astronauts selected out of more than 6,000 applications (NASA, 2013b). However, while there may not be known limits to the human imagination and initiative for exploration, the public still expects that NASA and the new commercial space companies will invest appropriately in protecting the welfare of crew and passengers. Harms to astronauts, whether as a result of launch catastrophes, errors, or unanticipated risks, could threaten the confidence that stakeholders place in the institutions charged with carrying out these missions and could jeopardize funding support for specific programs. That confidence is all the more important to safeguard where an institution’s activities serve a unique social mission, and where there are no alternative institutions that can accomplish the same objectives and goals. Finally, the public’s confidence extends
beyond NASA as the primary institution involved in spaceflight to a more general trust in government.
NASA’s health standards are based on current knowledge of the health risks and are updated as new knowledge becomes available. However, in contemplating long duration and exploration human spaceflight missions, NASA has to decide how to handle missions in which those standards cannot be met and astronauts could be exposed to higher levels of risk. Thus, NASA confronts three levels of decision making regarding health standards. In this decision framework (described more fully in Chapter 6), the first and broadest decision is whether and under what conditions missions that are likely to involve greater risks to astronaut health and safety than health standards allow are ethically acceptable. As described in previous chapters, long duration and exploration missions will entail significant risk and are almost certain not to meet some of NASA’s health standards; they may also confront new and unknown risks for which future health standards will need to be set. In cases where health risks are recognized and health standards exist but proposed missions will likely not meet those limits, NASA must choose among three options: (1) apply the standards (thus foreclosing such missions given existing risk mitigation capabilities), (2) grant exceptions for individual missions, or (3) create new health standards that would apply only to long duration and exploration missions. The committee makes recommendations about the appropriate course of action among these options in Chapter 6.
Should NASA decide that long duration and exploration missions are ethically justifiable, the next level involves consideration of the design of a specific mission in order to meet the ethics principles and obligations required to make it acceptable to fly. Specific missions will carry unique potential benefits and opportunities, as well as their own risks and challenges. Evaluation of specific missions will consider factors such as characteristics of the destination, mission goals, duration (dependent on destination, propulsion systems, and system reliabilities), health risks, environmental risks, and feasibility of risk mitigation. Risks and risk mitigation strategies will be assessed for all elements of the mission.
Assuming a particular mission can be designed to meet the criteria of ethical acceptability, the third level of decision making concerns the criteria and process for ethically acceptable selection of individual astronauts and composition of the crew. Such recruitment requires an acknowledgment that individuals will vary in their susceptibilities to risk and the skills they bring to each mission.
This chapter provides a set of ethics principles that can guide the negotiation of each of the three levels of decision making. Each of the principles is relevant to decisions regarding the health standards—for example, whether a given risk level is acceptable in light of its impact on astronaut health, available mitigation strategies, and anticipated benefits from the mission. Chapter 6 begins with a description of the processes that reflect the application of the ethics principles—for example, the process by which health standards are made and refined, as well as the process by which individual astronauts consider decisions about acceptable risk and their participation in missions. The principles and the processes for their application are designed to take into consideration the inherent uncertainty and unknowns in long duration and exploration spaceflight.
The sections below identify and discuss the principles of avoiding harm, providing benefit, determining favorable balance of risk and benefit, respect for autonomy, fairness, and fidelity. Each section begins with a short description of the principle, followed by brief examples of how the principle is used and applied, and concludes with a focus on the principle in the context of long duration and exploration spaceflight. The following chapter examines the responsibilities needed for their implementation (informed decision making, continuous learning, independent analysis, transparency) and recommends a decision framework for applying the ethics principles and responsibilities in setting and implementing health standards for long duration and exploration spaceflight.
There are numerous possible approaches to analyzing and addressing ethical issues, in whatever area of human endeavor they are faced. Among the challenges of reaching consensus around general approaches to addressing ethical issues is disagreement at the level of ethical theory. Over the course of centuries, philosophers have debated the relative merits and shortcomings of the major theories of Western moral and political philosophy: utilitarianism, duty-based approaches, virtue-based theories, and others. One successful approach to avoiding the need for a single ethical theory is to focus on mid-level principles rather than the theory to which they belong or from which they are derived. This approach has proven to be particularly successful when used by expert committees or commissions made up of individuals with diverse commitments to find
common ground on how best to approach challenging ethical issues in the context of public policy. This was the approach and reasoning used by such landmark commissions as the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and the principle-based foundation it articulated in The Belmont Report (HEW, 1979). The committee adopted a principle-based approach in recognition of its accessibility and applicability for its task. Throughout the discussion of principles and their applications there are references to duties and responsibilities; it is the committee’s view that such duties are consistent not only with the principles identified but with a range of ethical theories as well, without any intended or unintended endorsement of a particular ethical theory.
The ethics principles described below draw on bioethics principles explored in the context of protection of individuals who participate in research. They have been refined over decades (HEW, 1979; Beauchamp and Childress, 2013). Their basis and the extent to which they represent a sufficient moral framework have been debated, but proponents argue that such principles are at least reflective of the so-called common morality (Gert, 1998; Beauchamp, 2003). The first three of these principles—avoiding harm, providing benefit, and favorable risk and benefit balance—are important both individually and as they relate to each other, as described below. The other three principles—respect for autonomy, fairness, and fidelity—each represent concepts that underpin an important aspect of the ethics of health standards in the context of long duration and exploration spaceflight. The principles were not created for this context nor are they unique to it, but rather each principle has been articulated and refined through a collection of scholarship and through the lessons learned in the application of these principles in a range of policy arenas including clinical medicine, biomedical research, laboratory science, public health, and health policy.
Much of the discussion of the ethics principles in this report draws on the approach and lessons gained in examining the ethics of research involving human participants, which shares at least some parallels with the issues raised in consideration of the health risks and mission benefits experienced in the course of human spaceflight. For example, in much of the research with human participants, the research participants agree to bear the health risks, some of which may be unknown or uncertain, for benefits that will be realized largely, if not exclusively, by society. Potential participants are asked to make a decision through the process of informed consent that includes disclosure of information and voluntary
agreement to participate. Similarly, astronauts bear the risks, some of which are unknown or uncertain, for benefits that may be realized by the government and the general public; their participation is voluntary and based on shared decision making informed by disclosures and best available analyses.
The ethics principles identified as foundational to research with human participants are the legacy of a series of highly publicized occurrences of systematic violations of human participants in social science and medical research experiments. The disclosure of these exploitive uses of human participants led to the passage of the 1974 National Research Act (P.L. 93-348), which created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. One of the Commission’s most important and influential publications, The Belmont Report, identified three fundamental ethical principles: respect for persons, beneficence, and justice. It also identified processes for their application: informed consent, assessment of risks and benefits, and selection of participants. These processes provide an analytical framework for conducting biomedical and behavioral research involving human participants (HEW, 1979).
Respect for persons reflects a belief that all individuals must be treated as autonomous agents, and those with reduced autonomy should be protected. Decisions to participate in research should be made voluntarily and with adequate information. Beneficence requires that research not harm those who participate, that it maximize potential benefits, and that it minimize potential harm to participants. Investigators, therefore, have a responsibility to gather relevant information, conduct adequate assessment of risks and benefits, and proceed only if there appears to be a “favorable ratio.” Justice or “fairness in distribution” includes two prongs: (1) that the selection of research participants be based on criteria specific to the research question rather than on availability, opportunity, or compromised position (e.g., socioeconomic vulnerability, poor health, age, or other disadvantage or incapacity); and (2) that potential benefits of research be equitably distributed.
A fundamental principle in moral philosophy is nonmaleficence, which identifies ethical duties to avoid causing harm to others. Most analyses of the principle include additional ethical duties to remove existing harms and prevent harms that may be likely to occur. The principle
of avoiding harm to others has a clear place in Western moral philosophy, whether from moral theories based on duties (Kant, 1998), utility (Mill, 1879), virtue (Aristotle, 2009), moral intuition, or more recent approaches based on a so-called duty of care, in which it is argued that moral obligations derive from and relate to our relationships with others. In concrete terms, avoiding harm means that individuals have moral obligations (meaning they must be fulfilled as a matter of morality) to avoid causing harm to others, with harm generally defined as causing a setback to interests (Feinberg, 1986). Such setbacks to interests can include physical harm, psychological harm, harm to property, financial harm, and others. For example, all individuals have moral duties not to hit others with baseball bats, take their money, damage or steal their property, and so on—lessons we learn starting early in life.
Less obvious are the related moral duties to prevent or remove harm. Obligations to prevent harm are closely related to, but not entirely overlapping with, duties of avoidance. Failing to erect a fence around a swimming pool in an area frequented by small children is by any account irresponsible, but it is not equivalent to pushing a small child into the pool. The first is failing to prevent a possible harm from occurring, while the second is failing to avoid causing harm; both are moral failings, but they are distinct nonetheless. Removing harm has a different status yet again and, to continue the example, would entail obligations to rescue the struggling child who has fallen into the pool. Does a passerby have a moral duty to rescue? It is hoped that people will come to the aid of others, but society expects it only in a limited range of circumstances, such as when rescue is consistent with professional duties or the risk of the action required is far outweighed by its likely benefit. Removing harm may be considered part of the principle of beneficence (doing good for others) rather than part of a principle to avoid harm.
The principle of avoiding harm inevitably leads to questions about the extent to which a person or entity must act to reduce risks or prevent harm to another in order to have fulfilled the obligation, particularly in the face of uncertain risks. The course of public policy making in the United States offers numerous examples related to reducing or limiting the risk of harm. In common law, the duty of care includes a responsibility to prevent unreasonable loss or harm to others through an overt act or omission. In Industrial Union Dept., AFL-CIO v. American Petroleum Institute, the U.S. Supreme Court ruled that the Occupational Safety and Health Administration’s (OSHA’s) benzene exposure limits had to be supported by substantial evidence that indicated “a significant risk” of
harm was more likely than not, and that it was reasonable to “use conservative assumptions … risking error on the side of overprotection, rather than underprotection.”1
In some cases, such as long duration and exploration spaceflights, there are many unknowns and a great degree of uncertainty regarding the nature and extent of the risks. For those types of missions, there will probably be both “known unknowns” (known risks of unknown extent, e.g., galactic cosmic radiation) and “unknown unknowns” (unanticipated risks). In such cases, the unknowns need to be addressed, to the extent feasible, in policy and program decisions using an iterative approach to worker protection (or protection of the general public) (see Box 5-1). There are no failsafe measures for neutralizing “unknown unknowns”; rather, the task is to minimize and mitigate them and to learn from prior flights to the extent possible.
Addressing Uncertainty in Avoiding Harm
Reasonably reliable, frequency-based risk probabilities used in risk assessment are often not available when activities involve novel technologies, exposures, or previously unrecognized harms. In such circumstances, policy makers often use cautious health risk estimates based on upper boundary estimates. A cautionary approach favors initial emphasis on protection with gradual risk escalation that is based on a commitment to continuous learning and using the experience and knowledge gained to inform future decisions.
First, this approach incorporates unquantifiable but credible prospects of catastrophic harm in decision making. Because unexpected, irreversible harms are the types of events that generate significant social controversy, precautionary decision making can protect both the welfare of those exposed to health risks and the integrity of institutions charged with implementing a risky activity. Second, cautionary approaches allow risky activities to be structured in a way that diminishes uncertainty over time. Further research and cautious forays into uncertain realms allow collection of evidence that can inform more quantitative risk-benefit assessments. Third, caution—if applied appropriately—distributes risk and benefit more fairly. Often, risks associated with exposure and costs associated with their remediation are borne by different individuals. Where risks associated with a health exposure are involuntarily endured, precaution provides a shield against unjust burden and provides the opportunity to consider the fair distribution of risks and benefits.
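To make the idea of an upper boundary estimate concrete, one standard statistical device (offered here as a general illustration, not something prescribed by this report) is the upper confidence bound on an event probability when the event has never been observed:

```python
def upper_bound_zero_events(n_trials: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on a per-trial event probability when zero
    events were observed in n_trials independent trials.

    Solves (1 - p) ** n_trials = 1 - confidence for p. The well-known
    "rule of three" (3 / n_trials at 95% confidence) approximates it.
    """
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

# One hundred uneventful trials still leave a 95% upper bound near 3%:
# the absence of observed harm does not justify an estimate of zero risk.
print(round(upper_bound_zero_events(100), 4))  # prints 0.0295
```

A bound like this motivates initial emphasis on protection with gradual risk escalation: as uneventful experience accumulates, the upper boundary estimate tightens, and permitted exposure can be relaxed in an evidence-based way.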
Drug developers and regulators negotiate uncertainties in two ways. First, they often employ “safety factors” when selecting initial doses to give to patients. For example, where novel drugs are given to patients for the first time in drug development, drug developers often introduce margins of safety into calculations of the starting dose to account for the possibility that toxicities may have gone undetected in prior animal studies. Second, they often stagger dosing between patients and across cohorts of patients to learn from the experiences and fully inform subsequent research. For example, the first patient’s response to a drug might be monitored before a drug is given to a second patient; effects at low doses will be observed before groups of patients are given higher doses.

1. 448 U.S. 607, 656 (1980).
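The margin-of-safety calculation can be sketched in a few lines. The body-surface-area scaling and tenfold divisor below follow a widely used convention for first-in-human dosing; the specific inputs (a hypothetical rat NOAEL and standard scaling factors) are illustrative only and are not drawn from this report.

```python
def max_recommended_starting_dose(
    animal_noael_mg_per_kg: float,
    animal_km: float = 6.0,     # body-surface-area factor for rats (illustrative)
    human_km: float = 37.0,     # body-surface-area factor for adult humans
    safety_factor: float = 10.0,
) -> float:
    """Illustrative first-in-human starting-dose calculation.

    Converts an animal no-observed-adverse-effect level (NOAEL) to a
    human-equivalent dose via body-surface-area scaling, then divides by a
    safety factor to guard against toxicities the animal studies may have missed.
    """
    human_equivalent_dose = animal_noael_mg_per_kg * (animal_km / human_km)
    return human_equivalent_dose / safety_factor

# A hypothetical 50 mg/kg rat NOAEL scales to a human-equivalent dose of
# about 8.1 mg/kg; the tenfold safety factor reduces the starting dose
# to roughly 0.81 mg/kg.
print(round(max_recommended_starting_dose(50.0), 2))  # prints 0.81
```

The point is not the particular numbers but the structure: an explicit, conservative divisor stands in for the uncertainty that prior evidence cannot resolve.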
At NASA, as in many organizations where risks are uncertain, severe in magnitude, and irreversible, margins of safety are incorporated into risk estimates and are built into engineering and planning decisions. Additionally, research and continuous improvement cycles are implemented to ensure learning from experience and the escalation of risks as appropriate and ethically acceptable.
Putting the Principle in Context
In the context of professional or institutional rather than personal duties, obligations to avoid or prevent harm are equally important though often complicated to carry out. For example, when, if ever, is it acceptable for a physician to put a patient at risk for harm? Undergoing surgery for a life-threatening condition is one common example of such a tension. The usual analysis is that the benefit of saving the patient’s life justifies the potential surgical risks, such as harms due to incisions and anesthesia—the patient both bears the risk and receives the benefit that comes with it. Even if the patient agrees to undergo the surgery fully voluntarily, the decision about the surgical technique and approach is made by the surgeon. That said, even such a clear-cut justification is subject to the patient’s right to autonomous decision making, which includes accepting the potentially life-threatening consequences of forgoing surgery if that is the patient’s wish.
In other contexts, the harm that individuals are exposed to may be outweighed by the proposed benefits, but those potential benefits may accrue to others and not to those who are exposed to harm, as in early-phase drug development trials in humans. In such examples, there must be increased attention to obligations to avoid harm because there may not be any offsetting benefits to participating individuals and there may be circumstances that do not permit the research to go forward without further modification to risk. In settings where individuals are placed in harm’s way for the benefit of others or society in general, policies of avoidance, protection, and removal of harm are often employed, whether by minimizing risk for research participants, requiring safety equipment
and protective gear for firefighters, or limiting the type and duration of risky work for oil rig workers.
Focusing on the Principle in the Context of Spaceflight
Human spaceflight is undertaken with the knowledge that it exposes others (astronauts) to certain and uncertain risks of harm for benefits that will accrue largely, if not exclusively, to society as a whole, and so bears greater resemblance to research without potential benefits to those individuals who participate than it does to medical treatments that may bring risks along with desired benefits. This dynamic makes justification of the risks of human spaceflight more demanding than would be the case where risky undertakings have the potential to more directly benefit those assuming the risk. In the terrestrial context, where it may not be possible to completely avoid or prevent risk, there are often ways to remove harm or to rescue individuals who have come to harm. No such interventions are likely to be available during long duration or exploration class missions where there are many challenges for rescue and immediate return to Earth may be difficult or indeed impossible.
Any consideration of health standards for long duration or exploration types of missions must be guided by the principle of avoiding, preventing, or removing harm, which involves doing all that is feasible through vehicle design, safety processes, protective technologies, and other risk mitigation strategies. Furthermore, as described below, the balance of risks and benefits must be appropriate, their distribution fair, and the decision making to participate by astronauts fully informed. In addition to considerations of harms caused to the astronauts participating in spaceflight, the harms considered should also include the potential threats to spaceflight as an enterprise and to NASA as an institution. Avoiding these types of harm provides additional motivation for risk reduction because short- or longer-term harms to the health of astronauts, potential damage or loss of spacecraft, or failure of all or parts of missions could cause serious and significant setbacks to the agency and the future of space exploration.
An ethics principle related to avoiding harm is the principle of providing benefit to others, often termed the principle of beneficence
(HEW, 1979; Beauchamp and Childress, 2013). It guides ethical behavior by promoting actions that provide benefit. This is especially important when there is risk of harm associated with the action being considered; the potential benefit is an important moral counterweight. The provision of benefit can be a clear obligation, as in the recognition of “good Samaritan” duties to come to the aid of others. In other contexts, doing good for others is commendable but not necessarily obligatory.
While there are clear distinctions between the moral status of a duty to refrain from knowingly causing harm and the articulation of a duty to provide benefit, they are not always easily separated. For example, sometimes the benefit can only be realized when accompanied by the risk of harm. Medical treatment is rife with such tensions (e.g., how much of a toxic chemotherapy agent can be tolerated by a patient with cancer; when is it acceptable to undertake risky surgery in the interest of treating illness, or in attempting to relieve suffering). As noted later in this chapter, the balance between risk and benefit is a key feature in the process of setting ethically appropriate health standards by NASA.
Putting the Principle in Context
The principle of beneficence has a longstanding place in the ethics of medicine and the physician-patient relationship. In much of medicine, the harms experienced in the course of treatment are expected to be offset by the benefits realized by the same individual. Not so in examples from other sectors, including many occupational settings where workers experience significant risk for benefits to be realized by employers, companies, stockholders, or society. Firefighters and police officers routinely put their lives and health at risk for the benefit of others and society. Military service members often face significant risks to their health and life for the benefits of national defense and protecting the nation’s freedom and values, putting themselves at risk as a function of their professional duties and their obligations to each other and to their country. The benefits sought are often valuable and important, and their realization carries institutional responsibility—both for assuring that they are achieved and for minimizing the potential harms, costs, and negative consequences that come with them.
Focusing on the Principle in the Context of Spaceflight
Although individual astronauts have personal motivations for participating in spaceflight and may receive benefits from it, the benefits of human spaceflight, for the most part, accrue to society through technological and scientific advances as well as through national and international pride and collaboration. NASA is founded on a mission to realize the benefits of space exploration, as noted in its vision statement, “To reach for new heights and reveal the unknown so that what we do and learn will benefit all humankind” (NASA, 2013a). In addition to the benefits realized by the space program, research and development sectors, and society, long duration and exploration missions may also yield information that will benefit the health of astronauts who participate in future missions.
Beneficence in the context of these missions entails planning, implementing, and following up on missions in ways that maximize the attainment of the multiple benefits. As noted below, satisfying the principle of beneficence cannot be considered in isolation, as only a function of the magnitude of the benefits obtained; rather, consideration must also be given to how those benefits are distributed. The principle of fairness clearly has a role to play, in that opportunities for participation in spaceflight and the benefits achieved should be distributed and shared equitably. NASA has put extensive effort into detailing, analyzing, and articulating the risks of long duration and exploration spaceflight; similar efforts are needed regarding the valuation of the benefits of these missions, as decisions related to the ethics of exceeding risks allowed by current health standards will need to weigh both.
FAVORABLE BALANCE OF RISK AND BENEFIT
There are many realms where individuals or groups are asked to take on health risks for the benefit of others, such as military service and emergency rescue work (see Chapter 4). Many areas of discovery and innovation involve risk for the benefit of others, including drug trials and test piloting. For these activities to be ethically justified, the value of what is achieved in their pursuit must morally redeem what potentially could be lost. This notion is often encapsulated in the principle of favorable balance of risk and benefit.
Individuals frequently make decisions about tradeoffs of risk and benefit as they contemplate whether to engage in behavior or activities
that could result in harm to self. For some individual-centered decisions, there are few society-imposed restrictions for avoiding harm. However, there are many other circumstances where there are strong ethical imperatives to ensure that risk is favorably balanced with benefit. One of these is where individual risk taking imposes involuntary risk on others. Examples include laws about alcohol use and driving, restrictions on smoking in public places, environmental regulations, and occupational safety standards. Other areas where risky activities are undertaken in the name of some broader social objective include military service, rescue work, and participation in highly novel medical interventions (e.g., immunotherapies, cell-based interventions). Here, society (including members of the public, governmental institutions, and private-sector entities) has obligations to exercise good judgment in recruiting, training and informing, and engaging participants. Governments, in particular, are expected to ensure that the potential sacrifice of a sanctioned activity is adequately balanced by the benefits.
Space exploration entails risk to others, and it requires state approval of risk. It involves the former because, as noted previously, many other stakeholders are potentially affected by risks taken by individual astronauts, including fellow crew members, NASA, contractors, and any individuals and entities that benefit from the sustained enterprise of space exploration. It involves the latter insofar as NASA is a public agency and pursues exploration in the name of public interests.
Promoting a favorable balance of risk and benefit involves several activities:
- Systematic assessment of risk and benefit: This entails as thorough an accumulation of evidence as is feasible regarding risks and benefits, a systematic appraisal of that evidence, and a process for aggregating disparate forms of both quantitative and qualitative evidence. Risk assessment is greatly aided by deconstructing risk problems into their components and evaluating each. This includes systematically assessing long duration and space exploration risk components such as launch, microgravity exposures, radiation exposures, etc. Risk assessors must also bear in mind, however, that risk components can interact in ways that are additive or synergistic, and systematic risk assessment is only complete once cumulative risks are estimated.
- Minimizing risk: Standard setting and risk decision making should measure an activity’s risk against feasible alternatives.
Activities are unacceptably risky when other, safer means are available to accomplish the same objective. For example, a mission might be redesigned, additional shielding or other countermeasures applied, crew members selected who are less susceptible to particular risks, or work assignments divided to minimize risk exposures.
- Moral evaluation of residual risk: There are simply no objective or value-free ways of deciding whether a given risk-benefit balance is favorable, whether in space exploration or in other risky domains such as medical research or military activities. Instead, risk-benefit determinations are a direct and explicit expression of the many values and beliefs that surround an activity. Qualities that should be inherent in risk-benefit decision-making processes and judgments are detailed below.
- Monitoring and timely revision: As any activity unfolds, events can alter a risk-benefit balance in unanticipated ways. For instance, a technical failure may impair the ability of a mission to accomplish its objectives, thus greatly worsening the risk-benefit balance. A health event might occur (e.g., an injury) that greatly worsens risks for an astronaut. A new discovery may obviate the need for a continued mission. Even for missions already under way, mechanisms should be designed so that events that might alter a risk-benefit balance are monitored to the extent possible, and missions revised accordingly.
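The cumulative-risk point in the first bullet above can be illustrated with a simple calculation. If component risks were statistically independent, the probability of at least one adverse event would follow directly from the component probabilities; interactions (additive or synergistic effects) can push the true cumulative risk above this baseline. A minimal sketch, using entirely hypothetical probabilities:

```python
# Illustrative sketch with hypothetical numbers: aggregating independent
# component risk probabilities into a cumulative mission risk estimate.

def cumulative_risk(component_risks):
    """P(at least one adverse event), assuming independent components."""
    p_no_event = 1.0
    for p in component_risks:
        p_no_event *= (1.0 - p)
    return 1.0 - p_no_event

# Hypothetical per-component probabilities of a serious adverse event:
components = {"launch": 0.01, "microgravity": 0.02, "radiation": 0.03}

total = cumulative_risk(components.values())
print(round(total, 4))  # 0.0589 -- higher than any single component
# Caveat: if components interact synergistically (as the text notes they
# can), the true cumulative risk may exceed this independence-based bound.
```

The point of the sketch is structural rather than numerical: decomposed risk assessment is only complete once the components are recombined, and the independence assumption itself is something a systematic assessment must test.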
Putting the Principle in Context
Decisions about balancing risk and benefit are informed by extensive research and management guidance developed through the fields of risk assessment and risk management. The steps of hazard identification, risk assessment, and decision making are continually refined, as new information becomes available.
For industries with known potential for exposures to hazardous materials, such as the chemical industries, nuclear energy, and drug development, quantifying and reducing the risks is paramount. Research, surveillance, and worker protection regulations have resulted in detailed risk profiles, permissible exposure limits, and preventive and mitigation measures. Policies and regulations aim to address worker protection in the event of even the most catastrophic conditions (e.g., nuclear power plant accidents).
For many occupations, such as rescue work, risks are harder to anticipate and quantify. Here, a crucial risk-mitigation strategy involves developing protocols and decision frameworks so that risk-benefit decisions can be informed by systematic moral and scientific thinking. Other realms encountering unquantifiable risks employ values, such as precaution, or decision rules to promote non-arbitrary decision making.
Focusing on the Principle in the Context of Spaceflight
Health standards for long duration and exploration spaceflight, as well as decisions concerning specific missions, should demonstrate a favorable balance of risks and benefits. Balancing of risk and benefit could lead to establishing a high ceiling on acceptable risks for astronauts, especially when the endeavor involves pressing state interests and when astronauts have assumed risks voluntarily (see section on Informed Decision Making in Chapter 6). Yet, there are good grounds for establishing limits that also derive from the values and beliefs held by astronauts and members of the public. Risks to be considered should include those that affect the welfare of astronauts, as well as the enterprise of space exploration. Benefits to be considered should include those expected to accrue to society, as well as to future space travelers. Acceptable risk-benefit judgments should take into account both quantifiable risks and benefits, as well as those not quantifiable given current knowledge. Risk-benefit judgments should demonstrate the following qualities (Fischhoff et al., 1981):
- Rationality: Decisions should be grounded in evidence—where it is available—and logically joined with values and preferences.
- Systematicity: Decisions should be based on as thorough an accumulation of evidence as possible. Much of this evidence will be based on observational and experimental studies of risk, but evidence also can derive from other sources. For example, knowledge about the shortcuts users typically take when applying a technology, or about how risks will be experienced by different stakeholders, can help in estimating risk. In addition, it is important to have a good understanding of the values different stakeholders hold when the risks and benefits of an activity are being considered.
- Explicitness: Decision makers should strive to make clear the values and assumptions supporting their decisions, the evidence and reasoning used to reach them, and the uncertainties surrounding such evidence. In the case of long duration and exploration space missions, decision makers need to render as explicit as possible not only the risk estimates, but also the nature and value of missions, the viability of alternative and safer strategies for achieving these ends, and why pursuit of those values is sufficient to redeem the health hazards associated with missions.
- Coherence: Risk acceptability judgments for long duration and exploration spaceflight should strive to be consistent with those in similar activity domains that are deemed to render morally and politically sound risk judgments. Deep sea diving and polar science missions, as discussed elsewhere in this report, provide useful examples.
- Responsiveness: Acceptable risk standards embed and express deep-seated values surrounding an activity, and acceptable risk determinations should strive to be responsive to the values of stakeholders who perceive their interests as implicated by the standards. A large body of evidence exists, for example, that publics are often less tolerant of risks that are associated with dread, uncontrollability, and surprise (Slovic, 1987; Sandman, 1989).
In some respects, long duration and exploration space missions have properties that render the public more willing to countenance high risks; spaceflight is perceived as a socially valuable activity and astronauts assume risks voluntarily. In other ways, however, these space missions may provoke more restrictive attitudes toward risk. For example, the public tends to be more cautious about risks that are perceived as unjustly distributed. This caution might apply to spaceflight missions because the majority of mission planning is done by individuals who do not assume risks themselves. Furthermore, the public may also be more restrictive about risks that implicate important values. In many ways astronauts are living embodiments of national and cultural aspirations and values.
- Political acceptability: As noted above, the public’s values may be in conflict or in flux. Advocates of occupational safety, for example, may have very different views of risk acceptability than astronauts themselves. Libertarians might champion the decision-making autonomy of astronauts, whereas paternalists may place an emphasis on the role of states to protect individuals from undue harm. Decisions and standard setting should strive to
consider, accommodate, and earn the confidence of credible skeptics who might question a particular decision, given their individual perspective.
RESPECT FOR AUTONOMY
The principle of respect for autonomy has been among the most fundamental principles of bioethics since the latter half of the 20th century (HEW, 1979; Faden and Beauchamp, 1986; Tauber, 2005; Beauchamp and Childress, 2013).2 In addition to being a basis for much analysis related to the ethically appropriate treatment of individuals in health care settings and in biomedical research, it is the basis of the many public policies that seek to foster and protect individual self-determination in diverse contexts, including civil rights, consumer rights, contract and tort law, and criminal defense doctrines (Smith, 1982; Feinberg, 1986; Raz, 1986; Dworkin, 1988; Shapiro, 1988; Kahn, 1992; Rawls, 1993; Fallon, 1994).3 This principle is rooted in respect for individual liberty and freedoms. These include the right to be left alone (a non-interference right), and most relevant for the discussion in this chapter, the right to make decisions for and about oneself without the unjustified interference of others.
Putting the Principle in Context
There are many contexts in which individuals are given the liberty to make decisions, including those that may entail the risk of injury to themselves. On a personal level, these contexts include contact and extreme sports, thrill-seeking activities, cosmetic piercing, and tattooing. On a community level, individuals often volunteer to serve as firefighters or rescue workers, or in other vocations in which they put themselves at risk. Professionally, individuals join the military or work in emergency situations or in jobs, such as mining, that involve higher than average risks. Many factors go into individual decisions about whether to accept higher degrees of risk, including the availability and feasibility of alternative opportunities, the benefits and results of the endeavor, compensation, and safety plans.
2The Belmont Report (HEW, 1979) states, “Respect for persons incorporates at least two ethical convictions: first, that individuals should be treated as autonomous agents, and second, that persons with diminished autonomy are entitled to protection.”
3Fallon (1994) notes “[a]utonomy has been identified as a value underlying the constitutional protection of privacy, procedural due process, equal protection, and free exercise rights” and that “[a]utonomy also explains the antipaternalist sentiment generally dominating American law” (p. 876).
In practice, law and ethics both recognize that there are situations where it is appropriate to restrict the liberty of autonomous individuals to consent to self-harming activities. In criminal law, a victim’s consent to be injured may not always provide a defense for the person who inflicted the injury (Law Commission of England and Wales, 1995). In contract law, courts may refuse to enforce a contract that is so detrimental or injurious that it is “unconscionable,” even if the contract was voluntarily entered (ALI, 1981). In tort law, courts recognize limits on individuals’ ability to consent to an actual injury or to the risk of injury (ALI, 1979).
Yet even in contexts where individual autonomy is highly valued, such as health care and biomedical research, there are ethical and legal restrictions on individuals’ powers of consent. Under federal regulations for the protection of human participants, individuals may only consent to research in which risks have been deemed appropriate, as determined by an Institutional Review Board (45 C.F.R. § 46.111(a)), thus limiting the individual’s autonomy to consent to riskier research. Although patients have wide latitude to consent to elective surgeries, law and medical ethics impose limits (Bergelson, 2007). Striking the right balance has been the purview of much of the scholarship in research ethics and the promulgation of regulations for oversight of research involving human participants (Faden and Beauchamp, 1986; National Bioethics Advisory Commission, 1999; Mastroianni and Kahn, 2001; Moreno, 2001; 45 C.F.R. § 46 Subpt A), and courts struggle to enunciate principles to delineate appropriate boundaries of individual consent in many contexts.
Many regulations and procedures attempt to quantify risks, whether loosely through standards that simply require risks to be minimized or, more precisely, through numerical limits on permissible exposures. Although quantitative limits on the risks to which people can consent are often challenging to determine or implement, they are often a practical necessity. Even if people are willing to undertake risks, institutions still bear a responsibility to minimize risk and not invite people into activities that present undue risk.
Autonomy is also expressed in the right of individuals to regulate the flow of information about themselves to others. The rights to privacy and confidentiality of health information derive from this principle (HEW, 1979; Faden and Beauchamp, 1986; Moskop et al., 2005; Beauchamp and Childress, 2013) and require mention because violation of these rights may not entail direct physical harms. A balance is often sought between individual autonomy and society’s need for access to health information that can contribute to research, public health, and transparent, evidence-based policy formation and standard-setting (IOM, 2009). Because individuals may have important interests on both sides of the balance (e.g., benefitting from research), privacy and confidentiality4 are not always accurately framed as being directly in conflict with data access and transparency. The challenges come in establishing limits while finding the balance in the flow of information so as to benefit individuals and society. The loss of privacy and confidentiality can cause psychosocial harm (e.g., embarrassment or stigmatization if sensitive medical information is released), potential economic harms (e.g., discrimination in employment or inability to purchase life insurance), or indirect physical harms (e.g., receiving suboptimal health care if concerns about privacy and confidentiality cause patients to withhold information from their health care providers).
Focusing on the Principle in the Context of Spaceflight
Considerations of the principle of autonomy are relevant to decisions about astronaut participation in a specific mission, operational planning, and the extent to which health information is collected and shared. Critical to an astronaut’s decision is the right to be fully informed regarding health and safety risks of long duration or exploration space missions. Astronauts are highly trained and knowledgeable individuals who are provided in-depth briefings and opportunities to participate in NASA’s risk management processes at many levels.
4Although privacy and confidentiality often are used interchangeably, they refer to distinct concepts. As stated by the Institute of Medicine report on the Health Insurance Portability and Accountability Act Privacy Rule, “[P]rivacy pertains to the collection, storage, and use of personal information and addresses the question of who has access to personal information and under what conditions” (IOM, 2009, pp. 16-17). “Confidentiality, though closely related to privacy, refers to the obligations of those who receive information in the context of an intimate relationship to respect the privacy interests of those to whom the data relate and to safeguard that information” (IOM, 2009, pp. 17-18).
Preparing for future exploration missions will necessitate changes in the current modes and procedures for communications and interactions between crew members and ground control (NASA, 2010). Due to the nature and distance of future missions (e.g., missions to Mars), there are expected to be communication delays and associated technical difficulties that are not experienced on current missions. These communication delays may impact the quality of communications and team coordination in such a way as to require the crew to work semi-autonomously in order to maximize health and performance. Considerations will be needed regarding the extent to which individuals and teams have the discretion and freedom to conduct tasks, make decisions, solve problems, and carry out other general duties (Leach et al., 2005). Crew autonomy encompasses much more than the freedom to create one’s schedule, and the outcomes of an autonomous environment are highly interdependent among team members. In the context of spaceflight, autonomy refers to the extent to which the crew will act independently from mission control to complete objectives or respond to complications and emergencies when needed due to environmental conditions (i.e., distance), as well as the extent to which the crew will prioritize mission objectives and schedule activities (Reagan and Todd, 2008).
Concerning informational risks, data on astronauts’ health, both during and following missions, are potentially valuable resources for multiple purposes, including continuous learning about health risks (e.g., refining future health and safety standards for the benefit of future crews), internal assessment and quality improvement activities, and promoting transparency and public trust through evidence-based policies and decisions. Because there is only a small sample of participating astronauts, however, commonly used strategies for protecting individual privacy (e.g., de-identifying data) may not be effective. Even if overt identifiers are stripped, the data may be intrinsically identifiable if, for example, only two men were on a mission and two cases of post-mission prostate cancer were reported. Long-duration and exploration spaceflights present opportunities where access to health data can support activities with very high social value, but where privacy may be particularly difficult to protect. It is in the interest of current and future astronaut crews to enable appropriate uses of astronaut health data to aid continuous learning and improvement of safety standards and risk-mitigation strategies. However, stringent policies to protect the privacy
and confidentiality of sensitive individual health information are an essential precondition for such activities.
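The de-identification concern above can be made concrete with a toy check of what the privacy literature calls k-anonymity: each combination of ostensibly non-identifying attributes should be shared by at least k records. With a crew-sized dataset, such combinations are often unique. The records and attributes below are entirely hypothetical, chosen only to mirror the two-man-mission example in the text:

```python
# Toy illustration with hypothetical records: in very small cohorts,
# quasi-identifiers can single out individuals even after names are
# removed. k-anonymity = smallest group size over all combinations
# of quasi-identifier values; k == 1 means someone is unique.
from collections import Counter

records = [
    {"sex": "M", "mission": "A", "diagnosis": "prostate cancer"},
    {"sex": "M", "mission": "A", "diagnosis": "prostate cancer"},
    {"sex": "F", "mission": "A", "diagnosis": "bone loss"},
    {"sex": "M", "mission": "B", "diagnosis": "none"},
]

def k_anonymity(records, quasi_identifiers):
    """Smallest group size across all quasi-identifier value combinations."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

k = k_anonymity(records, ["sex", "mission"])
print(k)  # 1 -- at least one de-identified record is still unique
```

In large populations, generalizing or suppressing attributes can raise k to an acceptable level; with only a handful of astronauts per mission, no amount of attribute stripping may achieve that, which is why the text argues that stringent access policies, rather than de-identification alone, must do the protective work.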
Astronauts have the ability to report inflight medical problems via a private medical conference. However, it may be possible to increase the information collected and address potential underreporting of health problems by astronauts if an anonymous reporting mechanism were implemented and its use encouraged.
FAIRNESS
The principle of fairness (or more appropriately justice as fairness) requires that like cases should be treated alike, or alternatively stated, equals should be treated equally and unequals should be treated unequally. In various social contexts, the principle of fairness has application to the distribution of goods and risks (distributive justice), righting past wrongs (compensatory justice), or assuring fair processes (procedural justice). One application of this principle notes that individuals, and identifiable groups, should be treated the same if they are in the same or similar circumstances unless there is a reasonable and ethically acceptable rationale for treating them differently. Thus, treatment does not have to be exactly the same, but rather must be fair or equitable. For example, in biomedical research, it makes little sense to enroll women into a trial to assess the efficacy of a new drug to treat prostate cancer—equality of the opportunity to participate is not required. However, fairness (equity) requires that there be clinical trials to assess the efficacy of new drugs to treat cancers that affect only women.
Putting the Principle in Context
Considerations of distributive justice underlie decisions about how the risks and benefits of research involving human participants should be shared, especially when particular individuals or groups experience the risks of research without the prospect of realizing its potential benefits. For example, healthy individuals exposed in toxicological studies bear risks from adverse effects for benefits that primarily accrue to society at large. In contrast, the history of women’s exclusion from clinical research studies has forestalled significant benefits by reducing the available knowledge base for treatment of health conditions that affect women.
In addition, men were unfairly being burdened with the risks of such research (IOM, 1994).
Compensatory justice supports, for example, the inclusion of greater numbers of women in clinical research studies and the increased study of women’s health conditions, in response to this historical exclusion. Compensatory justice also supports the government’s provision of health benefits to veterans. As a nation, we ask those joining the military to undertake substantial risk, including the risk of death, for the benefit of the nation’s collective national security.
Last, procedural justice requires fairness in processes, for example, in decision making about dispute resolution and resource allocation. The U.S. judicial system was designed to provide fair decision-making processes for dispute resolution, with uniform procedures requiring notice and opportunity to be heard to ensure that each side can present facts in support of a position, and with a right to appeal a decision a party deems to be unfair. Another example of procedural justice involved public participation in resource allocation decisions. When the state of Oregon was faced with difficult decisions on provision of health services in light of limited resources, it established a series of engagement opportunities for citizens to provide input into that decision (Dixon and Welch, 1991; Fleck, 1994).
Focusing on the Principle in the Context of Spaceflight
As noted throughout this chapter, one important ethical challenge of exposing humans to the risks of long duration and exploration spaceflight is that the burden of the health risks associated with these missions falls to a limited number of astronauts and their families, while the benefits of the proposed missions accrue primarily to future astronauts and more broadly to society. Beyond achieving an appropriate risk-benefit balance, then, an appropriate risk-benefit distribution must also be considered. Asking individuals to accept great risk (either in likelihood or magnitude of harm) can be partially balanced by making a commitment to provide long-term health care and health monitoring, as is done for military veterans (see further discussion in the next section on fidelity). Distribution of risks within the crew will need to be as fair as possible, given that some jobs (e.g., those involving extravehicular activities) might put some team members at higher risk than others.
Concerns about fairness also focus on individual and group susceptibilities to risk, as well as to fairness issues in crew composition. Issues
have arisen specifically about risks of radiation-induced cancers being higher for women (see Chapter 3). Whereas historically concepts such as “concerns over reproductive issues for women” have been used to exclude women from certain occupations,5 the solution was, ultimately, not to exclude women but to make the workplace safer. A rational system would acknowledge individual susceptibilities across a wide spectrum of exposures and use the best available knowledge to ameliorate the effects. In the case of women’s participation in long duration and exploration space missions, excluding women from the early missions (as might happen, for example, due to sex differences in lifetime limits on permissible exposure levels to radiation [see Chapter 3]) creates an unfairness that will persist and become self-fulfilling over time. Less information will be available about the health effects of participation by women, leading to greater uncertainty about the risks to women as compared to men over the course of multiple missions and, in the process, undermining equitable opportunities for women to participate. Unless women participate in missions, important information cannot be collected, which affects the implementation of other important principles, most notably those that require avoiding harm and appropriately balancing risk and benefit.
Similar principles apply to NASA’s ongoing efforts to ensure diversity in crew selection and create true equality of opportunity to participate in long duration and exploration spaceflight. As more information emerges about biological markers for susceptibility to disease, it is possible that virtually every astronaut candidate will be found to have one or more markers of increased sensitivity. For example, one individual might be more sensitive to radiation, another to bone loss during microgravity, another to the psychological effects of isolation and confinement in close quarters, and another to visual changes associated with microgravity. Fairness requires a holistic consideration of those diverse sensitivities with special attention to equitable distribution of risks and benefits and equality of opportunity.
5For example, in the 1980s, a battery manufacturing company put in place a policy that denied women of child-bearing capacity the opportunity to work on jobs involving potential exposure to lead. The U.S. Supreme Court ultimately ruled that such policies violated Title VII of the Civil Rights Act of 1964 (Automobile Workers v. Johnson Controls, Inc., 499 U.S. 187 (1991)).
FIDELITY
In situations where risks are to some degree unquantifiable, uncertain, and unknowable, and so cannot be well managed in advance, the principle of fidelity has been proposed as a “promise to stand by after” (Zoloth, 2013). Those who consent to incur long-term health risks for society’s benefit are entitled to fidelity, reflected in society’s durable commitment to minimize any harms that emerge, whenever they emerge. This concept of fidelity or reciprocity resonates with the basic, widely shared understanding that it is unjust to allow “some people alone to bear public burdens which, in all fairness and justice, should be borne by the public as a whole.”6 As a practical matter, the public cannot physically share the risks that astronauts will bear. It can, however, share the costs and burdens of ongoing risk-mitigation efforts. The astronaut’s consent to participate in the face of uncertain risks gives rise to a mutuality of obligation, akin to the legal concept of “future consideration” in which one party’s performance gives rise to duties that are owed (Garner, 2009).
Putting the Principle in Context
In most cases, persons who consent to work in hazardous environments in terrestrial settings have the option to stop if risks become unacceptable. Ethical and regulatory frameworks for research involving human participants similarly recognize the right of individuals who volunteer for a research study to withdraw consent to participate at any time (45 C.F.R. § 46.116(a)(8)). When consent is revocable, there is a reduced ethical imperative to define clear substantive duties owed to those who agree to participate; withdrawal is nearly always an option if they grow dissatisfied with how they are treated or find the conditions of participation unacceptable. Of course, there is a duty to avoid harm and minimize the risks of research protocols. Beyond this general requirement, there has been discussion that the rights of human research participants imply duties on the part of research sponsors and investigators (Faden and Beauchamp, 1986). However, it has been difficult to forge consensus about concrete, substantive duties—such as a duty to provide care or reimburse costs for research-related injuries—that sponsors and investigators may owe research participants.
6Armstrong v. United States, 364 U.S. 40, 49 (1960).
The nonbinding nature of research participation has been seen as blunting the need to develop a concept of fidelity that imposes binding, affirmative obligations owed by research sponsors and investigators to a consenting research volunteer.
Focusing on the Principle in the Context of Spaceflight
An astronaut’s consent becomes binding and irrevocable at the moment the mission launches. Astronauts are free to withdraw their agreement to participate prior to launch, but from that moment forward it becomes nearly impossible to turn back, and astronauts likely will encounter uncertain and unquantifiable risk exposures and endure potential harms to health that will persist after the mission.
The irrevocability of participation, once begun, in long duration and exploration spaceflight creates an ethical imperative to define long-term duties owed to the participating astronaut. This is a necessary corollary of the ethical principle of avoiding or removing harm, and can be further supported by the principle to provide benefit. In this context the principles support the minimization of risk of harm, the treatment of injuries or health conditions during the flight, and the ongoing monitoring and provision of health care after the flight. This binding duty to provide ongoing surveillance, monitoring, and health care during the lifetime of the astronaut is part of the continuum of risk management that begins with engineering and design efforts to minimize risk and continues through the flight and postflight.
Recommendation 2: Apply Ethics Principles to Health Standards Development and Implementation
NASA should apply the following ethics principles in the development and implementation of its health standards for decisions regarding long duration and exploration spaceflights:
- Avoid harm—the principle includes the duty to prevent harm, exercise caution, and remove or mitigate harms that occur. Thus, NASA should exhaust all feasible measures to minimize the risks to astronauts from long duration and exploration spaceflights, including addressing uncertainties
through approaches to risk prevention and mitigation that incorporate safety margins and include mechanisms for continuous learning that allow for incremental approaches to risk acceptance.
- Beneficence—the principle to provide benefit to others. NASA should consider in its decision making the potential benefits of a specific mission, including its scientific and technological importance, as well as its potential beneficiaries including current and future astronauts and members of society at large.
- Favorable balance of risk and benefit—the principle to seek both a favorable and acceptable balance between the risk of harm and potential for benefit. In authorizing long duration and exploration activities and in approving particular missions, NASA should systematically assess risks and benefits and the uncertainties attached to each, drawing on the totality of available scientific evidence, and ensuring that benefits sufficiently outweigh risks.
- Respect for autonomy—the principle to ensure that individuals have both the right to self-determination and processes in place to exercise that right. NASA should ensure that astronauts are able to exercise voluntariness to the extent possible in personal decision making regarding participation in proposed missions, that they have all available information regarding the risks and benefits of the proposed mission, and that they continue to be apprised of any updates to risk and benefit information throughout the mission.
- Fairness—the principle requires that equals be treated equally, that burdens and benefits be distributed fairly, and that fair processes be created and followed. NASA’s decision making surrounding missions should explicitly address fairness, including the distribution of the risks and benefits of the mission, crew selection, and protections for astronauts after missions.
- Fidelity—the principle recognizes that individual sacrifices made for the benefit of society may give rise to societal duties in return. Given the risks that astronauts accept in participating in hazardous missions, NASA should respect the mutuality of obligations and ensure health care and protection for astronauts not only during missions but also after return, including provision of lifetime health care for astronauts.
ALI (American Law Institute). 1979. Restatement of the law second, torts. Vol. 4. Philadelphia, PA: ALI.
ALI. 1981. Restatement of the law second, contracts. Philadelphia, PA: ALI.
Aristotle. 2009. The Nicomachean ethics, edited by L. Brown, translated by W. D. Ross. New York: Oxford University Press.
Beauchamp, T. L. 2003. A defense of the common morality. Kennedy Institute of Ethics Journal 13(3):259-274.
Beauchamp, T. L., and J. F. Childress. 2013. Principles of biomedical ethics. 7th ed. New York: Oxford University Press.
Bergelson, V. 2007. The right to be hurt: Testing the boundaries of consent. George Washington Law Review 75:165-236.
Dixon, J., and H. G. Welch. 1991. Priority setting: Lessons from Oregon. Lancet 337(8746):891-894.
Dworkin, G. 1988. The theory and practice of autonomy. Cambridge, UK: Cambridge University Press.
Faden, R. R., and T. L. Beauchamp. 1986. A history and theory of informed consent. New York: Oxford University Press.
Fallon, R. H. J. 1994. Two senses of autonomy. Stanford Law Review 46(4):875-905.
Feinberg, J. 1986. Harm to self: The moral limits of the criminal law. New York: Oxford University Press.
Fischhoff, B., S. Lichtenstein, P. Slovic, S. L. Derby, and R. Keeney. 1981. Acceptable risk. Cambridge, UK: Cambridge University Press.
Fleck, L. M. 1994. Just caring: Oregon, health care rationing, and informed democratic deliberation. Journal of Medicine and Philosophy 19(4):367-388.
Garner, B. A. 2009. Black’s law dictionary. 9th ed. Eagan, MN: West Publishing Company.
Gert, B. 1998. Morality: Its nature and justification. New York: Oxford University Press.
HEW (Department of Health, Education, and Welfare). 1979. The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html (accessed January 2, 2014).
IOM (Institute of Medicine). 1994. Women and health research: Ethical and legal issues of including women in clinical studies, Volume 1. Washington, DC: National Academy Press.
IOM. 2001. Safe passage: Astronaut care for exploration missions. Washington, DC: National Academy Press.
IOM. 2009. Beyond the HIPAA privacy rule: Enhancing privacy, improving health through research. Washington, DC: The National Academies Press.
Kahn, P. W. 1992. Legitimacy and history: Self-government in American constitutional theory. Binghamton, NY: Vail-Ballou Press.
Kant, I. 1998. Groundwork of the metaphysics of morals, edited by M. J. Gregor. Cambridge: Cambridge University Press.
Law Commission of England and Wales. 1995. Consultation paper no. 139: Consent in the criminal law. http://www.bailii.org/ew/other/EWLC/1995/c139.pdf (accessed February 26, 2014).
Leach, D. J., T. D. Wall, S. G. Rogelberg, and P. R. Jackson. 2005. Team autonomy, performance, and member job strain: Uncovering the teamwork KSA link. Applied Psychology 54(1):1-24.
Mastroianni, A., and J. Kahn. 2001. Swinging on the pendulum. Shifting views of justice in human subjects research. Hastings Center Report 31(3):21-28.
Mill, J. S. 1879. Utilitarianism. 7th ed. London: Longmans, Green, and Co.
Moreno, J. D. 2001. Goodbye to all that: The end of moderate protectionism in human subjects research. Hastings Center Report 31(3):9-17.
Moskop, J. C., C. A. Marco, G. L. Larkin, J. M. Geiderman, and A. R. Derse. 2005. From Hippocrates to HIPAA: Privacy and confidentiality in emergency medicine—Part 1: Conceptual, moral, and legal foundations. Annals of Emergency Medicine 45(1):53-59.
NASA (National Aeronautics and Space Administration). 2010. Research and technology development to support crew health and performance in space exploration missions. NASA Research Announcement NNJ10ZSA003N. http://www.oorhs.pitt.edu/Documents/EO_2010_08_02_002.pdf (accessed January 31, 2014).
NASA. 2013a. NASA’s vision. http://www.nasa.gov/about/index.html (accessed January 3, 2014).
NASA. 2013b. Astronaut candidate class. http://www.nasa.gov/astronauts/2013astroclass.html (accessed February 6, 2014).
National Bioethics Advisory Commission. 1999. Research involving human biological materials: Ethical issues and policy guidance. Vol. 1. Pp. 71-72. https://bioethicsarchive.georgetown.edu/nbac/hbm.pdf (accessed February 6, 2014).
Rawls, J. 1993. Political liberalism. New York: Columbia University Press.
Raz, J. 1986. The morality of freedom. New York: Oxford University Press.
Reagan, M., and B. Todd. 2008. "Autonomy" strategies and lessons from the NEEMO project. Houston, TX: NASA Johnson Space Center.
Sandman, P. 1989. Hazard versus outrage in the public perception of risk. In: Effective risk communication: The role and responsibility of government and nongovernment organizations, edited by V. T. Covello, D. B. McCallum, and M. T. Pavlova. New York: Plenum Press. Pp. 45-49.
Shapiro, D. L. 1988. Courts, legislatures, and paternalism. Virginia Law Review 74(3):519-575.
Slovic, P. 1987. Perception of risk. Science 236(4799):280-285.
Smith, R. M. 1982. The constitution and autonomy. Texas Law Review 60(2):175-205.
Tauber, A. I. 2005. Patient autonomy and the ethics of responsibility. Cambridge, MA: MIT Press.
Zoloth, L. 2013. Uncertainty, testimony, and fidelity: Ethical issues in human space exploration. PowerPoint presented at the second meeting of the IOM Workshop Committee on Ethics Principles and Guidelines for Health Standards for Long Duration and Exploration Spaceflights, Washington, DC, July 25. http://www.iom.edu/~/media/Files/Activity%20Files/Research/HealthStandardsSpaceflight/2013-JUL-25/Panel%203%20-%20Zolof%20FINAL%20FINAL%20Ethical%20Issues%20in%20Human%20Space%20Explo.pdf (accessed February 6, 2014).