By Magdalina Gugucheva

“To educate a man in mind and not in morals is to educate a menace to society.” – T. Roosevelt

I stopped scribbling. Sinking back into my cool seat, I let my gaze drift away from the fluorescent PowerPoint projection and out into the calm darkness of the lecture hall. Class was nearly over.

“And . . .” Almost as an afterthought, the professor continued: “… these techniques aren’t just valuable when we want to create transgenic mice. The cool thing is that they are applicable in human genetics as well – for example, couples with a family history of cystic fibrosis can use the same procedure to select for healthy embryos when they go to have kids. Pretty soon, when we have a better understanding of human genetics, we’ll be able to select for a wide range of traits in our offspring…” RING. Class dismissed.

Hundreds of aspiring physicians and future biologists rose, slowly herding out of the lecture hall. I remember remaining in my seat. It was my first semester of college; I was eighteen years old, newly arrived in New York City’s Greenwich Village, and I was taking my first molecular biology class. Like the students around me, I was an eager and aspiring young scientist, enraptured with the marvels of human genetics. But that day’s lecture – maybe halfway through my first semester – gave me pause. How could the professor end a lecture like that? How could he be so cavalier – so naively optimistic – about a technology with such broad social and ethical implications? Wasn’t there more to say, to question, to discuss?

I remember looking at the faces moving around me – faces of future leaders in genetics and medicine – and not one appeared disturbed or puzzled. If we had the power, the technological knowhow, to intervene and improve the human condition – why, of course we would do it. But what does it mean to improve? What potential adverse consequences might await patients who choose to undergo these procedures—or those who don’t choose, but will live in a society where these practices are commonplace? Why was no one asking about this in my class? Why didn’t the professor acknowledge these questions and get his students to start asking?

Three and a half years later, I finished my undergraduate education in molecular biology without once encountering these debates in a biology classroom, despite the many technologies I learned about that would have merited such a discussion. Had I not developed an interest in bioethics on my own, I would have walked away under the impression that advances in genetics were undoubtedly the future of medicine, that genes and disruptions at the molecular level were the primary cause and treatment target for disease, and that every technological development in genetics held tremendous, unbridled benefits. I furthermore would have left my undergraduate education without any account of the dark history of the field of genetics. I would have never learned about the eugenics movement and its deep institutional ties to today’s research. I would have never questioned the method or virtue behind the allocation of money that was funding my professors’ research, and therefore the social value of their research. I would have never wondered whether real lives were being saved with those grant dollars, or whether limited government funds might better serve social goals if channeled elsewhere. These are not questions I purport to answer, but they are questions that deserve asking. And they deserve to be asked by and of those who are key players in technological advancement. Addressing ethical and social concerns is essential for training future leaders of any technological innovation, but it is especially critical in one so integral to human health and identity.

Given the current controversy over Berkeley’s genetic testing of incoming freshmen, I think the time has come to start examining the ways we educate future leaders in medicine and biotechnology. While bioethicists have heavily criticized Berkeley’s project “Bring Your Genes to Cal,” accounts from students who will likely subject themselves to the University’s testing program tell a different tale. These statements pair eager enthusiasm toward emerging technologies on the one hand with ambivalence toward, and dismissal of, ethical concerns on the other. One rising freshman quoted in The Daily Californian, Berkeley’s undergraduate newspaper, stated: “I’m totally for it. No one is forcing me to do it, and there’s no real downside I can see.”1 Like many students endorsing the project, he dismissed any privacy or coercion concerns. Instead, his statement exemplifies an excitement among members of his cohort that is utterly lacking in necessary apprehension or critique. Contrast this student’s position with the outspoken criticism from bioethicists. Are such polarized positions a product of fundamental ethical disagreements? Probably not. Rather, they demonstrate highly incongruent perspectives in education – where those on one side of the debate have been trained to ask an entirely different set of questions than those on the other.

Comments made by Dean Mark Schlissel, head of Berkeley’s Biology Department and one of the leaders behind “Bring Your Genes to Cal,” further illustrate this broad gap between science education and social and ethical policy education. In his response to backlash against the genetic testing program, he states that the “rapidity and energy” behind bioethicists’ reaction took him “by surprise.”2 No doubt it did – he’s not a bioethicist; he doesn’t spend his days questioning the social implications of genetics research. When his statements met further reactions from faculty on campus who pointed out that he should have consulted more bioethicists in crafting a less ethically questionable project, he responded that Berkeley does “not organize educational programs by inviting all 1,500 professors to participate in the design.”3 In other words, how was he to know he should even seek out input on a program he didn’t have a clue would spark a debate? A trained scientist, he may have had little exposure to the social concerns associated with emerging technologies. Yet he and his students will be in a position to make similar decisions – decisions about implementing new technologies, initiating novel studies, endorsing changes in medicine and healthcare – without ever thinking to seek ethical input, social policy input, or democratic input. Schlissel’s failure to spot and address ethical issues in this program highlights a major gap in the current mode of educating future scientists.

Wondering whether I was really right—whether part of the problem in the polarized debates over regulation of new technologies was merely an issue of exposure in education—I undertook a short survey of biology-related undergraduate curricula at the top twenty-five national universities in the U.S.4 Of those twenty-five, only ten universities offered majors in subjects that explored the relationship between science, technology and society (STS) and/or bioethics. Three more schools offered only minors in bioethics-related subjects. Just six colleges required at least one course in bioethics or STS-related subjects of some biology-related majors – but not one school required such coursework of all students majoring in the biological sciences. Furthermore, nine of the top twenty-five undergraduate universities did not even offer a single STS or bioethics-related course – not one – as at least an elective that would count toward any biological science major.

To be fair, bioethics courses at these schools might be offered through other departments. Such courses can always be sought out by students who already have an interest in bioethics or STS, and they can always be taken as a general elective. That’s what I did in undergrad. But I also had to jump through quite a few hoops: the two courses I took were offered as sociology department seminars available to majors only, and so I had to persuade professors to let me enroll even though I was not majoring in sociology. In other words, I had to have a pre-developed interest strong enough to individually pursue exposure to bioethics and STS issues.

But the point of educating future researchers in ethics is not to help them explore issues they already have questions or concerns about; it’s about expanding students’ understanding of their field, their assumptions, and the values they take for granted. Students should be shown how to question and scrutinize the premises of their work. That reflection, paired with a broader understanding of one’s role in society – beyond the immediate professional role in the narrow context of one’s career and peers – is what a liberal arts education should be about. Yet while colleges in the United States have largely expanded science and math education, requiring humanities majors to complete more math and science courses, they have utterly failed to provide that same breadth of perspective to future scientists. In a world increasingly shaped by genetics research and technology, might this gap not bring potentially disastrous consequences?

Look at business and finance. In the wake of the financial crisis, business schools have scrambled to expand their ethics curricula. It might have taken an economic disaster, but educators realized that ethics education is critical to training future leaders – leaders who will undoubtedly wield tremendous power and make decisions carrying broad social consequences. Yet, when the novel financial instruments many blame for the financial meltdown were being innovated, no one was thinking about educating business students in corporate and social responsibility. Ethics was for the policymaker and the regulator, not the innovator. Innovation, however, has always raced ahead of regulation. At least in business, educators realized this meant an internal check was needed.

The parallels between finance and biotech are not hard to see. As education in these technical and complex fields becomes more specialized, and as expertise and segmentation within the profession grow, more and more ethical decisions simply evade regulatory oversight. Self-regulation becomes the only feasible way to put a check on questionable new technologies. Those whom we might require to self-regulate, however, are so energized and excited – intoxicated – with the progress of their fields that we can’t reasonably expect them to remember to hit the brakes. The problem is not unlike the intoxicating profits reaped by the banks when they innovated new ways to move money around, with no one motivated to question the ethics.

Does that mean, then, that we ought to wait for our own major crisis – perhaps an environmental disaster spawned by GM crops, or a major privacy and security breach from DNA databanks, or a health crisis relating to overzealous application of novel genetic therapies – before we can reexamine how we train the future leaders of bioscience?

Magdalina Gugucheva is a J.D. candidate at Harvard Law School and an intern with the Council for Responsible Genetics.
