Matter over mind
Do behavioral drugs make us better? Do they make America better?
Most psychiatric patients surrender to drug therapy—believing, in the end, that they have little choice—but they nearly always act as resistance fighters along the way. They fail to comply fully with doctors’ orders or they experiment with drugs or make repeated efforts to stop taking their pills altogether. At root is the sense that prescribed drugs can erode personal authenticity and tamp down feelings that reflect one’s true self.
Certainly, scientists and pharmaceutical companies should continue to investigate how brain lesions, faulty neurotransmissions, and flawed genes may shape thoughts and feelings that severely impair human functioning. But at the same time, by insisting that disorders like depression and anxiety are primarily caused by malfunctioning brains, the proponents and practitioners of biological psychiatry ignore clear evidence about the psychological and social factors equally at the core of human emotions. There is a great loss in that. As ever-increasing numbers of Americans take psychotropic pills, we all begin to believe that more and more of our feelings—sadness, shyness, anger—are illegitimate and abnormal and require biological intervention to correct.
Historically, problematic behavior was explained as evidence of sinfulness or evil. These days, when people behave strangely or do things that impinge on our moral sensibilities, we immediately question their mental health. Presumably, this fairly new medical model of psychiatric disorder—increasingly dominant since the 1980s—is based on objective scientific criteria, thoroughly independent of moral judgments and political ideologies. But in conflating health with cultural conceptions of normalcy, the model nonetheless forges an inseparable link between science and morality. In the name of science, the medical model unjustifiably gives physicians the right to treat behaviors that contravene social expectations or upset widely held moral injunctions.
Examples of nonconforming behaviors that have been medicalized include hyperactivity in children, child abuse, and alleged shopping, sexual, and computer addictions. One can only express wonderment at the deluge of “discoveries” of new brain diseases in the last 25 years. The bible of psychiatric diagnoses, the Diagnostic and Statistical Manual of Mental Disorders, or DSM, first published in 1952, has since been revised four times—in 1968, 1980, 1987, and 1994. The 1952 and 1968 editions offered classifications that accorded with the psychodynamic model prevalent at the time, when conditions warranting treatment were understood to be disorders of the mind. Then, beginning with the 1980 edition, the language abruptly shifted, and diseases of the brain became the new currency. Whereas in 1952 the DSM had named 60 psychiatric disorders, the current DSM describes over 350 diagnoses. Surely, a 480 percent increase in identifiable psychiatric abnormalities cannot have resulted solely from dispassionate scientific discovery.
Today, diagnoses in the DSM may echo the language of organic conditions like diabetes. But the analogy breaks down when we consider that no doctor would prescribe insulin replacement without first clearly demonstrating that a person’s body was not properly producing the hormone. Doctors have a definitive test to affirm the organic pathology that warrants the diabetes diagnosis and treatment. By contrast, no doctor has ever done a diagnostic test that demonstrates the brain dysfunction for which I am prescribed my drugs. Like others treated with antidepressant medications, I am presumed to have a brain disease on the basis of the symptoms I report. Such tenuous connections among symptoms, diagnoses, demonstrable physical pathologies, treatments, and outcomes undermine the validity of the disease model for psychiatry.
In my view the turn to biology has occurred in part because it is unclear whether psychiatrists are, in fact, treating diseases. That is to say, psychiatry has always fought for legitimacy as a medical specialty. Consequently, psychiatrists have something to gain in terms of professional prestige if they can convince themselves and others that troubled people need their chemical interventions to heal broken brains just as someone with a broken leg needs an orthopedic surgeon. This impulse toward greater legitimacy dovetails with the interests of pharmaceutical companies that make billions of dollars selling psychotropic medications. It is a perfect system: Pharmaceutical companies need diseases for their drugs. The American Psychiatric Association creates and codifies diseases in the DSM. The more diseases patients bring to psychiatrists for treatment, the more psychiatrists’ status is enhanced.
In 2001, Americans spent more than $12 billion on antidepressant medications, the equivalent of $43.85 for every man, woman, and child in the country. U.S. doctors wrote some 24,742,000 prescriptions for Prozac in 1999 alone; and 38 million people worldwide used the drug between 1988 and 2000. One remarkable 10-year period, 1987–96, witnessed a nearly threefold increase in the number of American adolescents being treated with psychiatric medications.
Numerous exposés have demonstrated the way drug companies slant research findings and limit damaging information to gain U.S. Food and Drug Administration (FDA) approval for their pills. (See, for example, Stephen Fried’s Bitter Pills: Inside the Hazardous World of Legal Drugs; David Healy’s Let Them Eat Prozac: The Unhealthy Relationship between the Pharmaceutical Industry and Depression; and Marcia Angell’s The Truth about the Drug Companies: How They Deceive Us and What to Do about It). Drug companies compromise the credibility of their studies when they:
- Commission multiple studies to assess a drug’s effectiveness, then report only the research most flattering to their product.
- Fire doctors who serve as well-paid consultants on effectiveness studies if they report negative findings, thereby putting pressure on doctors to search the data for positive results.
- Hire multiple physician-researchers to publish slightly different versions of the same set of complimentary findings, in several journals.
- Compare their medication with others by giving control groups only the lowest effective dosage of the competing drug.
- Devise questions to measure their drug’s symptom relief in ways that maximize the likelihood of positive reports, then use different measurement procedures to evaluate the effectiveness of competing drugs.
- Fail to systematically ask study respondents about certain side effects, thus ensuring that rates of adverse reactions are greatly minimized. (Doctors carry some blame here, too: although they are expected to report adverse effects to the FDA after a drug has appeared on the market, they rarely do. As a result, troublesome side effects are enormously underestimated for many drugs.)
- Limit drug trials to short periods of time, often only several weeks. Drug companies rarely do long-term studies to determine how large populations experience their medications over time.
Pharmaceutical companies have infiltrated every area of medical research, training, and practice. Continuing education programs for physicians are subsidized by pharmaceutical companies and dominated by speakers on their payrolls. Physicians are paid handsome bounties for each person they help to enroll in a company’s drug study. Major hospitals, research centers, and universities now depend substantially on support from drug companies. Drug representatives lavish doctors with free samples and gifts. The industry’s lobby in Washington is larger than that of any other private interest. And many researchers who publish in the most prestigious medical journals have financial ties to the companies, as do many experts who sit on the FDA panels that decide drugs’ futures. “Drug money” has created so many conflicts of interest that the quality of knowledge available to physicians, regulatory agencies, and patients has been tainted.
The anthropologist Tanya Luhrmann calls psychopharmacology “the great, silent dominatrix” in her book Of Two Minds: The Growing Disorder in American Psychiatry. “More and more psychiatrists spend more of their time prescribing medications,” she writes. “More people are involved in the research; more patients (these days) are probably touched by these agents than by anything else the psychiatric profession does.” And yet, often the experimental studies involving psychotropic drugs are inconclusive. As David Healy relates in his book The Creation of Psychopharmacology, rarely does an antidepressant medication far exceed the placebo in effectiveness. The newer classes of antidepressants, the SSRIs (selective serotonin reuptake inhibitors—such as Prozac, Paxil, Celexa, and Zoloft), hyped as the latest wonder drugs, are no more effective than the categories of medications that were discovered in the late 1950s (the tricyclic antidepressants, for instance). And there is much that the medical studies cannot measure. As the sociologist Allan Horwitz, in his book Creating Mental Illness, puts it:
How does the effectiveness of the medication compare with entering a new career, joining a gym, going to religious confession, or returning to school? Would a disorder respond better to an entirely different kind of therapy than to medication? Would people who suffer from distressing romantic relationships gain more from entering new relationships than from taking an antidepressant? We cannot, of course, design an experimental study that provides people with new romantic partners, so we don’t know. . . . The finding that receiving a particular medication is superior to not receiving this medication indicates nothing about the effectiveness of medication compared to alternatives such as changing social circumstances or providing other sorts of therapies.
Socializing patients and doctors to the idea that bothersome moods are biological diseases occurs on two fronts. First, all of us are instructed daily via drug advertisements that we need not put up with unpleasant and “abnormal” feelings when eradicating them is so easy. Second, pharmaceutical companies sponsor drug seminars and employ a small army of salesmen to give doctors the tools that will make their patients happy. Most antidepressants are prescribed by family physicians who rely on their own often limited clinical experience with psychiatric symptoms, what they are told by drug company representatives, and what they can learn in the Physicians’ Desk Reference. Although doctors are frequently unsure whether it is a patient’s life circumstance or biology that is causing bad feelings, they largely fall back on biomedical discourse and dole out the pills.
A friend from New York City told me that after she reported to her doctor that she had become “edgy and irritable” following the September 11 attacks, he immediately suggested that she try Paxil. What does it say about the current state of medicine that healthy people experiencing appropriate emotions are routinely treated with powerful medications they do not need? Given the vast numbers of people currently taking psychotropic medications, it seems clear that Americans are rapidly moving closer to the idea that virtually any feeling short of complete happiness is unacceptable.
Certainly, medications should be available to people whose illnesses have compromised their capacity to function in the world. But I do believe that doctors practice bad medicine when they ignore the reality that life is difficult and hand out medications for the normal pains of daily living.
In my many interviews with people being treated with medications for depression or manic depression, I’ve found that the great majority of them attribute their bad, often crippling feelings to immediate life experiences as well as to brain chemistry. One young woman, Alice, was an exception. Could it be, she suggested, that manic-depressive illness is most likely to arise in a manic society?
I think that it’s important to think about the way society . . . is constructing these diseases [she said]. Why has bipolar become so popular right now? What is it about this time and the culture that has brought this diagnosis out? I have a quick answer, which is there’s great disparity in this country, and there’s great inequalities, and I think there’s tremendous motion. Things go very, very quickly now. You have 170 channels on TV, and kids can’t sit still for more than 10 minutes, and everything goes from, “It’s wonderful, God Bless America,” to people starving in our own streets, [and the government] sending people out to kill people. So in that way the disease, the illness . . . can become very socially constructed. . . . That’s where I could say, “Maybe it doesn’t even have to be biological. Maybe you don’t even need the predisposition for it to manifest itself.”
Why should we be surprised at the explosion of depressive illness in a society where the old sources of stability—family, workplace, neighborhood, community—have frayed? Why should we be surprised that up to 40 percent of unemployed women receiving welfare in order to care for their children at home screen positively for depression? Why should we be surprised that increasing numbers of middle-class, white-collar workers are joining a new “anxious class” as corporations downsize and reengineer, shipping jobs to places with better “business climates”? Why should we be surprised that among the best predictors of hospital admission rates for major mental illness is the state of the economy?
The prevailing ideology in psychiatry seeks to change only the patient’s neurotransmitters. In this way, it is even more conservative than earlier attempts by psychiatrists to reshape the patient’s self, to fit him or her more comfortably into society through talk therapy. Neither approach recognizes that solutions to human ills might best be achieved by restructuring society itself.
We can see why healers gravitate toward individualistic treatments of pain. It is far easier to change individuals than it is to change ingrained social structures. Still, if we accept that factors such as poverty and gross inequality are implicated in emotional illnesses, we must also accept that treatments focusing exclusively on brains are necessarily incomplete. Worse, there is the danger that the disease metaphor will blunt our collective sensitivity to social problems and diminish our commitment to solving them.
The relentless medicalization of behaviors has bred a nation of victims. Given the ever-expanding list of moods and behaviors over which we presumably have no control, nearly everyone these days can claim victim status of one sort or another. Should you draw the anger of family and friends by drinking too much, by craving sex too much, by maxing out your credit cards at shopping malls, by ignoring others in favor of your computer, or by threatening your health through food bingeing, you may claim an addiction. If you cannot control your anger, your abusive behavior toward family members, your difficulty accepting authority, your anxiety at cocktail parties, or your aversion to housework, you can find a doctor with a diagnosis for the problem and medications to minimize the symptoms. Biology, as Tanya Luhrmann puts it, has become “the great moral loophole of our age”—a remarkable transformation in a society founded on an ethic of responsibility and hardy individualism.
Traditional psychotherapy, with its emphasis on self-understanding, is directed at making patients more responsible for changing themselves and their behaviors. In contrast, biological psychiatry, lacking interest in the biographies of patients, effectively lifts the burden of responsibility. And if the profession of psychiatry itself has been of two minds about personal responsibility, so also are many patients and those close to them. In more than 150 interviews with emotionally ill people and their families, I have found deep ambivalence about when and whether objectionable behaviors should be understood in terms of illness or character.
Today, of course, psychotropic drugs remain inexact and unpredictable instruments. But as biological psychiatry defines more and more human variation as abnormal, and as the choice of prescriptions grows, the result will be an increasing standardization of behaviors and feelings, and a reduction in freedom, as the range of acceptable emotions and actions is circumscribed ever more tightly. When a society narrows its definition of tolerable variation, the result is often illness. We’ve seen this clearly in the constricting cultural standard of beauty, which has led to the burgeoning incidence of anorexia nervosa and bulimia. Hyperactivity is another diagnosis whose incidence has grown. Aided no doubt by the availability of drugs like Ritalin, the disease has crept outward from its core population, from children who cannot concentrate to children whose behaviors may be troublesome but hardly abnormal. Once a diagnosis is created, the number of people with the disease expands. Individuals formerly considered unconventional join the ranks of the sick. The social critic Francis Fukuyama draws a compelling analogy between Prozac and Ritalin as two drugs that “exchange one normal behavior in favor of another that someone thinks is socially preferable. . . .”
There is a disconcerting symmetry between Prozac and Ritalin. The former is prescribed heavily for depressed women lacking in self-esteem; it gives them more of the alpha-male feeling that comes with high serotonin levels. Ritalin, on the other hand, is prescribed largely for young boys who do not want to sit still in class because nature never designed them to behave that way. Together, the two sexes are gently nudged toward that androgynous median personality, self-satisfied and socially compliant, that is the current politically correct outcome in American society.
I read a disturbing newspaper article a few years ago about a federally funded study designed to isolate the genetic basis for violent behavior. Researchers spoke about the relevance of their work for social policy, noting that once children are shown to have a biological predisposition to violence, doctors can treat them with preventive medications. We could be approaching an era when it will seem reasonable to treat people for potential nonconformity. Given that the survival of a democratic society depends on respect for vastly different feelings, opinions, and thoughts, and on the liberty to challenge the status quo, this is greatly troubling.
There is danger in all single-minded models for explaining mental illness. Beginning in the 1960s, for example, a number of sociologists and radical psychiatrists took the useful notion that mental illness is at least in part socially constructed and morphed it into the idea that mental illness is altogether a myth. Such social determinism is as foolish, and falls as far short of the truth, as the biological determinism I have been faulting.
The challenge for us all is to understand the economic, cultural, professional, and personal factors that foster false dichotomies in the first place. My quarrel with psychiatry and pharmaceutical companies is not about drugs per se. I am far more bothered by the confluence of factors that leads doctors to routinely medicate for life troubles.
No one can say exactly when normal distress becomes pathological pain. But if we are to err, it should be on the side of not medicating a minority who need it rather than medicating the many who do not. This is not a perfect solution. Inevitably, some people will suffer who could have been aided—a very few, one hopes—for the good of a healthier society.
David A. Karp is a professor of sociology at Boston College and the author of Speaking of Sadness: Depression, Disconnection, and the Meanings of Illness (1996) and The Burden of Sympathy: How Families Cope with Mental Illness (2001). His essay is drawn from his latest book, Is It Me or My Meds? Living with Antidepressants, by permission of Harvard University Press. Copyright © 2006 by the President and Fellows of Harvard College.