Life and Death: More Than an Expert Opinion
Suppose you were a state health director and you had two proposals before you to spend $10,000. The first proposal would provide a coronary artery bypass operation for one patient; the second would immunize 5,000 children at a cost of two dollars each.
Which would you choose?
What if the choice were between a high school education program for preventing teenage pregnancy and a helicopter ambulance service for sick babies? There is no objective "right" answer in cases like these. You can commission an analysis of each service to determine what it will cost and how many lives it will save, but deciding how to apply the data depends on your values.
Unfortunately, we Americans do a poor job of confronting such choices, much less making decisions about them. We wage fierce debates about weapons programs, taxes and other issues, but when it comes to life-and-death choices involving health care, the silence is deafening.
Perhaps people believe that, as a society, we should spare no expense to save lives. Yet choices are inescapable, as we Oregonians learned not long ago when a local boy died from leukemia. The bone marrow transplant he needed was disallowed by state Medicaid officials because the money was earmarked to provide prenatal care to the poor.
Similar choices exist nationwide. Is it right for hospitals to offer luxury rooms and gourmet meals for some patients while 37 million other Americans lack adequate health insurance? Should money from community health programs be diverted to provide expanded medical services for AIDS patients? Are old people entitled to organ transplants?
In a democratic society, value-laden decisions like these should be made by the people or by their elected representatives. Yet, with the notable exception of abortion, Americans generally have had little to say about such issues. They leave the choices to physicians and government officials or, more often, allow them to be resolved by default.
The irony is that Americans have become much more assertive about making life-and-death decisions affecting themselves or their families. Many cancer patients, for example, now insist on being the ones to choose whether they will undergo intensive chemotherapy. Some families of brain-dead patients demand the right to decide whether to keep the person alive with medical technology.
Of course, one of the main reasons Americans are more vocal about these kinds of decisions is that they are affected so personally and directly. Yet it also is true that larger social issues are rarely framed in a way that invites public discussion. Most citizens lack the information or forum to grapple with these questions, much less to help decide them.
An emerging grassroots "health decisions" movement in several states demonstrates that citizens can get involved constructively. Citizens' groups in these states have begun discussing as a community many of the tough decisions that burgeoning medical technology has forced upon us. In some cases, they have affected official policy.
For example, Oregon Health Decisions, a citizen-based organization, trained a cadre of 75 homemakers, insurance salesmen, firemen and others in the language of bioethics and health-care decision making. Members of the group then held 300 meetings across the state to talk with other citizens about such issues as death with dignity and equitable access to health care. In response, the state legislature established by law a process that incorporates the health values of citizens in these critical decisions.
One result has been that Oregon has adopted some explicit policies about where it will spend its resources; prenatal care, for example, comes before organ transplants. Some critics have attacked this approach as "health rationing" and called it a dangerous departure from the concept that society should try to provide optimal health care for everyone.
However, I think this new movement is welcome. Our country now spends a far greater percentage of its gross national product on health services than any other nation, yet it has not bought itself a commensurate amount of better health. Many American citizens have severe difficulty obtaining basic health services. Choices do exist, even if we prefer to pretend otherwise. We need to confront these decisions squarely.
November 19, 1989
Ralph Crawshaw, clinical professor of psychiatry at Oregon Health Sciences University, is on the board of directors of Oregon Health Decisions.
* * *
The New Diagnostics and the Power of Biologic Information
Dorothy Nelkin and Laurence Tancredi
A company considers enrolling one of its brightest young female employees in an expensive training program. Before making the investment, it asks the woman to undergo biological testing with the latest diagnostic techniques, including genetic screening.
The tests reveal an unexpected problem. The woman carries the gene for Huntington's disease, a degenerative, invariably fatal brain disorder. Although she now seems in
good health, she is likely to begin developing symptoms of the illness at about age 40.
Faced with the prospect of paying the huge medical bills, the company decides not only to withdraw the woman from the training program, but to dismiss her entirely.
Situations such as this are becoming possible as new biological tests emerge from the laboratory. Designed to uncover latent problems or predict future diseases, these diagnostic techniques offer potentially valuable clinical applications. Physicians can apply them to identify potential problems and to recommend therapy or preventive action with greater speed and confidence. New brain-imaging methods can be used to diagnose potential behavioral disorders, learning disabilities and psychiatric illnesses. Genetic tests can help identify patients with a predisposition to hereditary diseases and complex disorders suspected of having a genetic component, such as mental illness, Alzheimer's and susceptibility to alcoholism.
A physician might use this information to improve the quality of medical care by warning someone with a genetic predisposition to heart disease to eat certain foods or by cautioning someone prone to certain cancers to receive more frequent checkups. However, the new techniques also may
find their way into contexts where they pose unprecedented threats to our traditional concepts of privacy and personal autonomy.
For example, schools, employers, insurers and the courts, concerned about controlling costs or improving efficiency, could use the tests to learn about the health status and behavior of their clients. Such information could limit access to insurance or health-care facilities or exclude high-risk individuals from jobs or training programs.
Within the legal system, new tests can be used to enhance certainty in controversial decisions, such as whether to sentence someone to prison. Legal scholars writing on biological psychiatry predict that courts increasingly will use information from brain scans to evaluate the sanity of criminal defendants and to predict the likelihood of future dangerousness.
Grounded in science, the new diagnostic technologies are compelling. Images on a screen appear precise, and statistical findings processed by computers seem value free. Yet the information produced by most tests is only inferential. Interpretation rests on statistical definitions of "normal" and may assume a cause-and-effect relationship where there are only correlations. The error rate may be high. Moreover, even if tests reliably anticipate who is susceptible to a disease, they cannot predict the age of onset, the severity of expression or the influence of intervening factors.
Despite these limits, biological testing is a growth industry. Biotechnology companies hope that most people will have genetic profiles on record by the year 2000 and that testing will be mandatory in many organizations.
The implications are profound. The refinement of tests already is expanding the number of disease categories and the number of people judged deviant. Just as improved sensitivity in the technologies used to test food products has expanded the number of products identified as carcinogenic, so are improved diagnostics increasing the number of people defined as abnormal.
Predictive testing thus opens possibilities for new forms of stigmatization and discrimination and encourages social policies based on biological criteria. Indeed, some asymptomatic
people suspected of having a genetic disease already have been barred from insurance or employment. One can imagine families demanding information about the biological status of relatives, adoption "brokers" probing the genetic history of babies or commercial firms storing genetic profiles and selling the information to insurers or employers.
Considering the rapid development of the new diagnostic technologies, much more discussion is needed about critical questions of privacy, access to information and the potential for abuse. As tests extend the range of what we can predict, we should not assume their moral neutrality. The technology has the potential to serve society well and save many lives. However, its indiscriminate use could define more and more people as unemployable, uneducable and uninsurable—creating, in effect, a biologic underclass.
February 4, 1990
Dorothy Nelkin from New York University and Laurence Tancredi from the University of Texas Health Science Center are the authors of Dangerous Diagnostics: The Social Power of Biological Information.
* * *
Harvesting Organs from Anencephalic Infants
Alexander Morgan Capron
Every year several hundred children in the United States are born, alive, with anencephaly—that is, without cerebral hemispheres and the top of the skull. Some physicians have asked: why not donate organs from these babies to
save the lives of other children, since an anencephalic child will die soon anyway and cannot feel pain?
In response, legislators in several states have proposed laws allowing hearts, kidneys and other organs to be removed from these infants before they meet the usual criteria for death.
This may appear to be a merciful approach to a tragic situation. Yet it is likely to fail and could jeopardize other transplantation programs that have a much greater chance of success. More basically, it threatens the inviolability of human life, producing moral confusion that outweighs any possible benefit.
The idea of relying on anencephalic newborns as an organ source is not new, but it has been seriously pursued only in recent years as transplantation methods for young children have improved.
The first problem with it is simply that the supply of usable organs is likely to be meager. Thanks to prenatal screening and other medical advances, the number of anencephalic births has been declining steadily. The pool of potential donors would shrink further because some parents refuse to allow transplantation or because the organs cannot be matched with a compatible recipient. In fact, a realistic annual estimate is that only nine hearts, two livers and no kidneys would be transplanted successfully.
Thus, the number of usable organs from anencephalics would remain far short of the need, even if laws were relaxed to allow transplants to occur before the infants met the standards now used to determine death in other persons. Those standards require the irreversible cessation of functioning in the brainstem as well as in the higher centers of the brain. Yet waiting until an anencephalic's brainstem ceases functioning usually renders the vital organs useless for transplantation because those organs are damaged by the same process—episodic cessation of breathing—that causes the brainstem to fail.
One suggestion for overcoming this problem—allowing organ harvesting before death—may be well-intentioned, but it is also chilling. It says homicide should be lawful to save the life of another person, one not threatened by the person killed, provided the person killed is about to die anyway.
But if these principles apply to anencephalics, then they are also valid for other patients. Why not allow relatives to donate organs from patients in the final stages of any terminal illness or from prisoners on death row? Since anencephalics won't supply enough organs, this utilitarian argument could push society to extend the category to children with other terminal diseases to benefit babies awaiting transplants.
Some proponents of using anencephalic infants as organ donors argue that because these children lack consciousness, they are not persons, which eliminates the main ethical problem in killing them. Yet research on both humans and animals suggests that a child with only a brainstem may experience more sensation than physicians traditionally assumed.
Another rationale is that the large majority of anencephalic infants die within a few days anyway. Yet society must scrupulously maintain the distinction between "near death" and death, even if this requires forgoing some organs. Otherwise, the public may suspect that physicians change the rules for transplantation whenever it suits their needs.
Finally, anencephalics who are maintained medically for possible transplantation turn out not to die as rapidly as surgeons require in order to perform the procedure successfully. Indeed, at one transplant center, life support was withdrawn from "possible donors" after one week, but several survived for weeks longer. Also, the criteria for diagnosing when death actually occurs never have been validated for newborns, especially ones with neurological problems of the type anencephalics have.
Given the many problems involved, using anencephalic infants in this fashion is one "breakthrough" society should leave untouched. Parents of anencephalics should be offered the opportunity to donate their child's non-vital organs, such as corneas and heart valves, for transplantation after death, and to cooperate with postmortem investigations aimed at learning more about this tragic affliction. But encouraging them to donate vital organs from their infant will merely intensify their agony or involve them in a decision that they ought not be asked to make: whether one life should be ended to benefit another.
May 6, 1990
Alexander Morgan Capron is University Professor of Law and Medicine at the University of Southern California.
* * *
HIV Screening and the Calculus of Misery
A fierce debate has been taking place within the medical community about whether to test more—or all—pregnant women to see if they are infected with HIV, the virus that causes AIDS. Proponents say such screening would identify women and newborns who need special care while also providing valuable data about the course of the epidemic. Opponents worry about violations of privacy, discrimination and stigmatization.
A critical assumption underlies this debate, however, namely that those who test positive will be provided with the counseling, drugs and other services they need. Otherwise, why would anyone agree to be tested? This issue was discussed at a recent conference on HIV screening held by the Institute of Medicine of the National Academy of Sciences.
Some public health experts, alarmed about the continuing spread of the AIDS virus, argue that screening might have a positive effect by identifying more patients with HIV and increasing the pressure on government hospitals and clinics to provide services. In other words, it would drive the system to respond.
The assumption sounds reasonable—until one considers the plight of homeless families and the mentally ill wandering the streets of our cities.
When homeless people started appearing in large numbers in the 1980s, most compassionate Americans assumed something would be done to help them. In many cases, it wasn't. Similarly, many thousands with mental health problems were released from institutions beginning in the 1960s on the premise that they would be provided with alternative services, such as halfway houses. Instead, many ended up on the streets. Need has not created supply.
There is an established ethical principle in medicine that screening for a disease should not be done without first
assuring backup services. It is wrong to subject people to possible anguish and discrimination without offering them the benefits of treatment. Yet, in the face of society's policies towards the homeless and the mentally ill, as well as the current inadequacy of services for AIDS patients, one must question whether such services would be provided to all those who tested positive in an HIV screening program.
The potential benefits of screening are clear. Women who tested positive might benefit from receiving the drug AZT in some cases, and they could be informed that their unborn child has a one-in-three chance of being infected with the AIDS virus. Some might elect to terminate the pregnancy if, in fact, they could obtain an abortion. Others would be alerted to the need to watch their newborn closely for possible medical problems. Yet many—especially those who are poor and from minority groups—would have difficulty obtaining medical services of any kind. Our country's health care system is the most inequitable in the Western world, with 37 million Americans lacking health insurance.
Then there is the question of informed consent. Medical testing for HIV should be done only after the patient provides specific agreement to the test. Fortunately, as the result of aggressive action by white, middle-class gay men, this requirement has come to be accepted by many medical professionals who now deal with the AIDS epidemic.
Yet "informed consent" is easier said than done. Some clinicians who have little experience with HIV and are used to "presumed consent" by patients for diagnostic tests may have difficulty accepting the standard of specific consent, particularly as the epidemic expands into poor patient populations.
All this argues against implementing a widespread, routine HIV screening program for pregnant women. Yet such a conclusion leaves me uneasy. It makes screening appear sinister, which simply is not the case. Its potential value is considerable. Screening could inform many individuals about the need to obtain treatment, to avoid infecting others and to consider all their reproductive options.
The real villain is the lack of services. Persons considering a test for HIV should be certain about confidentiality.
They must be sure that being diagnosed as seropositive will not lead to discrimination but to adequate medical care. Instead, physicians and patients alike are being forced to make excruciating choices about medicine, justice, life and death. It is a calculus of misery. Widespread screening is no substitute for providing adequate resources.
July 22, 1990
Ronald Bayer is associate professor at Columbia University School of Public Health and author of Private Acts, Social Consequences: AIDS and the Politics of Public Health.
* * *
Laboratory Experiments on Animals Should Continue
Presidential elections and other dramatic news may capture the headlines, but one of the most profound events of our time has been the remarkable increase in life expectancy. Americans now live about 25 years longer than they did a century ago.
That is an essential fact to keep in mind when one considers whether it is ethical to carry out experiments on animals, a controversy that has existed for more than a century. The debate has become more charged over the past decade, with some radical animal-rights activists breaking into laboratories to "liberate" animals.
A number of individuals have argued that progress in computer modeling and other technologies now makes it possible to test drugs, surgical techniques and other medical advances without using animals. The use of animals in experiments,
they say, can be reduced dramatically, if not eliminated. On the other side, most scientists warn that animal experimentation continues to be essential to biomedical research.
Who is right? I recently chaired a committee of the National Research Council that tried to answer this question. Our committee, which included experts in biology and generalists such as myself from unrelated fields, spent three years reviewing the evidence.
Our conclusion was that laboratory animals are likely to remain a critical part of human health-care research for the foreseeable future. Researchers have developed a number of experimental techniques that help reduce the need for animals; such efforts should continue to be encouraged. However, there is little chance these alternatives will eliminate the need for research animals anytime soon. Available models simply cannot recreate the complexity of real living systems.
Animals will continue to be needed by researchers seeking ways to cure, treat and prevent such health problems as cancer, AIDS, Alzheimer's disease and diabetes. In the past, animal experimentation resulted in vaccines to prevent polio, smallpox, measles and a host of other diseases. Many Americans owe their lives to kidney transplants, heart-bypass operations, the removal of brain tumors and CAT scans, all of which were tested for reliability on animals before being used with humans.
In other words, science must continue to use animals in experiments if one accepts the moral premise that humans are obliged to each other to try to improve the human condition. Most individuals would reject the contention of animal-rights advocates that it is "speciesism" to use animals in this process. The fact is that we humans are different from other animals. We are the only ones who appear able to make moral judgments and engage in reflective thought. For countless centuries we have used animals for food, fiber, transportation and other purposes. Experimentation is part of this tradition, and it is equally justifiable.
Of course, doing so carries with it a responsibility for stewardship. Chimpanzees, dogs, rats and other animals must be treated humanely. Researchers who do not follow
regulations that provide for this should be punished legally and subject to censure by their peers.
The great majority of the animals, however, are treated humanely. Some opponents of animal experimentation allege that abuse and neglect of animals are widespread in research facilities, but our committee could find only limited evidence to support these allegations. Although continual review of the matter is warranted, abuse of research animals is the exception rather than the rule.
Another point of contention has been whether pound animals should be used in research. Twelve states have passed laws prohibiting the practice, even though fewer than 200,000 dogs and cats are released from pounds and shelters each year for use in scientific research. This is far fewer than the more than 10 million animals destroyed at pounds and shelters annually.
The result of these restrictions, unfortunately, has been to increase the demand for research animals from dealers. This causes more animal deaths overall and raises the costs of research.
Our committee did not examine whether it is ethical to use animals for other purposes, such as to test the safety of cosmetics. Our task was only to review the use of animals in biomedical and behavioral research. We concluded that this kind of research is necessary and of great service to humanity. It should continue.
November 29, 1988
Norman Hackerman is president emeritus of Rice University.
* * *
Integrity and Science
Arthur H. Rubenstein and Rosemary Chalk
A few cases of scientific fraud—in which scientists fabricated research data or plagiarized the work of others—have generated extensive publicity over the past decade. These cases have provoked public concern about the integrity of biomedical research and the ability of the academic research community to prevent future occurrences of misconduct.
Evidence of the extent of scientific misconduct is still lacking, as we learned while examining the problem recently for the Institute of Medicine of the National Academy of Sciences. Our study panel concluded that serious research fraud is most likely a rare phenomenon. Of greater concern are sloppy and careless research practices, which appear to be more common.
This carelessness includes practices such as listing one's name as a co-author of a publication without even reading it or failing to observe accepted research standards in conducting scientific experiments or interpreting findings. A researcher facing increased competition for a grant to continue a medical study, for example, may be more tempted than in the past to exaggerate the significance of preliminary findings.
Such actions, while not fraud, erode the integrity of science and contribute to an over-permissive research environment that fails to discourage more serious forms of misconduct. Given increased funding pressures and the current overemphasis on publication as the main means of achieving status in science, it is time for research institutions to act more forcefully to strengthen the professional standards for scientists.
Traditionally, the research community has guarded against misconduct or sloppy research in two ways. First, scientific papers are usually reviewed before publication by several experts in the field. A problem with this process of peer review, however, is that the reviewers must trust the raw data as presented by the researcher. It may be almost impossible to identify an author who fabricates data convincingly.
The other safeguard is the scientific tradition of repeating
experiments to verify another researcher's results. But in an era of complex and expensive research, replication may not occur for most experimental work.
Peer review and replication are still fundamentally sound as a way of assuring integrity in science. But they now need to be supplemented with more formal guidelines and professional standards. Research institutions must establish and enforce these rules on their own and ensure that they are effectively communicated to the next generation of scientists.
For example, research institutions should adopt explicit standards on such matters as how research data should be recorded and retained or who is to be included on the list of authors for research papers. To ease the "publish or perish" pressures that can lead to fraud, universities should join with scientific journals and funding agencies to limit the number of publications they will consider when making decisions on career advancement and research funding. The primary message to researchers should be that quality counts much more than quantity.
To make these changes work, research institutions need not only to clarify their expectations about professional conduct, but also to provide researchers and students with formal instruction about these rules. They should designate someone who is clearly responsible for promoting high standards of research within the institution and establish an effective procedure for taking action against those who violate them.
The federal government, for its part, needs to assume a more vigorous role in promoting responsible conduct in biomedical research. As the nation's primary funder of health-related research, the National Institutes of Health should take the lead by establishing an office with the responsibility of helping institutions develop the guidelines outlined here.
We oppose another possible federal action that has been discussed, namely random "audits" of investigators' laboratories by government officials. Not only would this expensive approach probably fail to detect fraud, but it could too easily damage the spontaneity and creativity that are essential to a productive research environment; the cure might be worse than the disease.
Adopting these specific rules of conduct will boost everyone's confidence that neither honest mistakes nor outright cheating will go unnoticed. Best of all, they will enhance the productivity and reputation of the vast majority of researchers who remain honest and dedicated in their efforts to improve our national health.
March 28, 1989
Arthur H. Rubenstein, chairman of the Department of Medicine at the University of Chicago, chaired a committee of the Institute of Medicine that studied biomedical research misconduct. Rosemary Chalk was the committee's study director.
* * *