The Downward Slope | by Michelle Nijhuis | The New York Review of Books

In early 2015 Alex Riley, then twenty-four years old, was working as a researcher at the Natural History Museum in London.

There is no known cure for depression; the title of Riley’s book, A Cure for Darkness, is ironic. In it he surveys the development of medical treatments for the condition in the twentieth and early twenty-first centuries, tracking how each approach was conceived, deployed, and—in most cases—discarded. His perspective is that of a patient and a journalist, not a medical expert, but his experience as a researcher makes him alert to the human side of science and skeptical of its fads. He is particularly interested in reducing the stigma that accompanies many treatments even today, including talk therapy, antidepressants, and electroconvulsive therapy (ECT)—and in efforts to expand the reach of treatments to people without access to mental health care.

Riley begins his story in the late nineteenth century, with the philosophical and professional cleavage that has defined the treatment of depression ever since. On one side was Sigmund Freud, who saw depression as the result of childhood trauma and maintained that it could be remedied only through psychoanalysis. On the other was Emil Kraepelin, the great classifier of mental disorders, who saw depression as a primarily physical ailment, to be treated with medical intervention. Freud and Kraepelin never met, Riley tells us, but they were born just three months apart, in 1856, and both began their scientific careers as anatomists.

In 1885, while Freud was studying in Paris with Jean-Martin Charcot, a physician who used hypnosis to treat his mentally disturbed patients, he developed an interest in the unconscious mind and became convinced that it governed much of human behavior. His theories soon dominated the nascent field of psychoanalysis. Karl Abraham, an admirer and frequent correspondent of Freud’s, developed the first psychoanalytic theory of depression in 1911, attributing the condition to a broken maternal bond and a subsequent hostility toward humanity, including the self. Both men argued that psychoanalysis could relieve depression by identifying a patient’s early, often forgotten tragedies and connecting them with his or her current state; Riley describes them as “paleontologists of the mind, digging through the unconscious strata of their depressed patients, hoping to uncover the old bones of a pathological monster.” Freud’s 1917 essay “Mourning and Melancholia,” which characterizes depression as a response to personal loss, is heavily indebted to Abraham’s ideas.

Kraepelin, meanwhile, came to believe that the symptoms of mental disorders, depression included, could be traced to anatomical aberrations in the brain. Instead of plumbing the unconscious for causes, Kraepelin began, as a professor at the University of Heidelberg in the 1890s, to keep detailed records of individual patients’ symptoms, tracking their conditions over time. In his thousands of patient files, he discerned two broad categories: “dementia praecox,” later known as schizophrenia, and “manic-depressive insanity,” which has since been subdivided into depression, mania, and bipolar disorder. (The distinction he drew between psychotic and mood disorders endures to this day.) While Kraepelin saw that patients could spontaneously recover from depression and other mood disorders, he maintained that, like psychotic disorders, they were primarily biological—not psychological—and as such would not be curable until major advances in microscopes and other technologies allowed researchers to study the physical brain.

And so the line was drawn between biological and psychological approaches to treatment—a division that still affects the way depression is treated. Psychoanalysis, Kraepelin thought, was shamefully unscientific, guilty of “the representation of arbitrary assumptions and conjectures as scientific facts” and “generalization beyond measure from single observations.” Freud, for his part, saw Kraepelin as the leader of an enemy faction and denounced his pessimism about prospective cures, accusing him of offering patients “a condemnation instead of an explanation.”

Among the dangers faced by those living with depression are specialists who fail to recognize the complexity of the condition. As Riley’s book makes clear, the generations of practitioners after Freud and Kraepelin included many who dogmatically swore allegiance to a single approach—whether biological or psychological—and refused to consider that different patients might benefit from different remedies. Desperate patients and families sometimes agreed to invasive treatments that not only failed to relieve depression but debilitated or even killed the sufferer.

In the early twentieth century, adherents of the biological theory of depression tried treating patients with nitrous oxide, opium, testosterone, X-rays, and even tooth extraction, all without success. Then, encouraged by reports of individuals who displayed dramatic changes in temperament and behavior after their frontal lobes were damaged in accidents or by surgery, some practitioners began trying to physically excise depression and other mental afflictions from the brain. In 1935 the Portuguese neurologist Egas Moniz supervised the first prefrontal leucotomy, an operation that cut the connection between the frontal lobes and the rest of the brain. Riley writes that while one early examiner reported that leucotomy patients “were severely ‘diminished’ and had exhibited a ‘degradation of personality’ after the surgery,” Moniz claimed that it was a successful “clinical cure” and had alleviated symptoms of depression and other disorders in more than half of his initial patients.

Walter Freeman, an American psychiatrist, took Moniz’s methods several steps further by pioneering the frontal lobotomy, which not only severed the frontal lobes from the rest of the brain but cut out parts of the lobes themselves. Many of his fellow psychiatrists objected to the practice, but the idea that a malfunctioning brain could be fixed as easily as an automobile was popular. “The brain has ceased to be sacred,” The Saturday Evening Post announced in a celebratory article in 1941. The same year, Freeman and a collaborator performed a lobotomy on twenty-three-year-old Rosemary Kennedy, John F. Kennedy’s younger sister, after suggesting to her family that the operation could relieve her intense mood swings. (Not until the 1970s did informed-consent laws and regulations begin to give patients control of their own treatment.) The botched operation reduced her intellectual capacity to that of a two-year-old, and she was institutionalized for the rest of her life.

Undeterred by such tragedies, Freeman introduced a method that he insisted needed no specialized training to perform: he hammered a metal ice pick into the brain cavity through the back of an eye socket, then used it to crudely brutalize the brain tissue. “Why not use a shotgun? It would be quicker!” an outraged colleague wrote to Freeman. Of the estimated 50,000 patients who were lobotomized in the US between 1949 and 1952, about 20 percent were subjected to these transorbital or “icepick” lobotomies. While reliable data are scarce, a British study of 10,000 standard lobotomies performed between 1943 and 1954 found that 6 percent had killed the patient.

Lobotomies continued for as long as they did not only because of Freeman’s zeal but because the operation, in some cases, delivered a dismal kind of relief. The British study, published in 1961, found that 70 percent of patients reported some improvement, and 18 percent no longer required institutionalization. Though the patients who survived lobotomies were profoundly altered, those who had been tortured by delusions or prone to violence before the operation often emerged calmer and more compliant, sometimes enough so that they could live at home with their families. Until other treatments emerged, the lobotomy could look like the best of bad options.

Electroconvulsive therapy was one of the treatments that replaced lobotomies. As Riley notes, philosophers and physicians had observed since at least the 1700s that epileptic seizures seemed to have a therapeutic effect on mentally ill patients. In 1937 the Italian psychiatrist Ugo Cerletti learned that electrical shocks could induce convulsions in pigs, and he boldly applied this method to humans, finding that after induced seizures, patients reported shorter or less frequent depressive episodes. Cerletti’s ECT encountered resistance when it reached the United States, not only because it reminded people of the electric chair but because its side effects at the time included memory loss and broken bones resulting from powerful seizures. Yet it appeared to be remarkably effective in treating certain forms of depression, particularly those that included delusions or psychotic episodes. Between the mid-1940s and 1960s it was a staple of psychiatric treatment.

Today ECT is considered by many specialists to be among the safest and most effective treatments for severe forms of depression. It now employs briefer, more targeted electrical pulses and can succeed in cases where antidepressants do not. The writer Donald Antrim, who experienced significant relief after first undergoing ECT in the 2000s, writes that despite serious initial doubts, he came to see it as “a powerful measure against suicide”:

After ECT, the feeling in my body of immense weight went away; I felt a kind of physical lightness…. After months of waiting to get well, I regained my sense of time passing. These days, I think of ECT as clean power, good electricity added to a wet, saline medium in which electrical signaling has become chaotic and mistimed.

When ECT was falling out of favor in the 1960s, the first generation of antidepressants was on the rise. Pharmaceutical researchers, noting that tranquilizers decreased levels of neurotransmitters such as serotonin and norepinephrine, surmised that an increase in the activity of these molecules would reverse the effects of depression. Monoamine oxidase (MAO) inhibitors, which prevent the breakdown of serotonin and norepinephrine and thereby increase their levels in the body, became available in the late 1950s; tricyclic antidepressants also debuted in the late 1950s, followed by selective serotonin reuptake inhibitors (SSRIs) in the late 1980s.

Antidepressants have benefited from a succession of evangelists, the first of whom was Nathan Kline, the psychiatrist who wrote From Sad to Glad (1974)—at least one edition of which included the words “Depression: You Can Conquer It Without Analysis!” on the cover. Kline energetically popularized the theory that depression was caused by a chemical imbalance in the brain. As Riley points out, while this idea “isn’t categorically wrong, it is still a far cry from being right.” While all three classes of drugs can change brain chemistry within hours, researchers don’t understand why antidepressants usually take weeks to begin relieving symptoms. And though an increase in serotonin and norepinephrine does frequently lead to a decrease in depression symptoms, it doesn’t necessarily follow that serotonin scarcity is a root cause of the disorder.

Riley moves quickly through the long public and professional debates over the widespread use of antidepressants and their short- and long-term effects, noting that studies of antidepressant effectiveness have often yielded contradictory or inconclusive results. Yet as controversial and imperfect as these medications are, they remain among the best tools we have. Riley, who after experimentation found that the SSRI sertraline lifted his depression, describes himself as “overwhelmed with relief” by the existence of antidepressants. Still, he reflects, our dominant explanation for depression is in some ways simply a modification of that proposed by Hippocrates in the fifth century BCE. Hippocrates blamed depression on an excess of black bile; we blame it on a shortage of neurotransmitters.

Like many people with depression, Riley has also been willing to engage in other forms of low-risk experimentation. During his reporting on current research he adopts a Mediterranean diet and a moderate jogging routine. Both seem to reduce his symptoms, and though he acknowledges that the causal connections are unproven, he comes to think of both as “antidepressants that I prescribe.” He also explores the therapeutic potential of psychedelics, and while his own experiment with psilocybin mushrooms is uneventful, he notes that many present-day psychedelic therapies combine elements drawn from the past century of both psychological and biological treatments, a marriage of chemistry and counseling that creates another opportunity, he writes, to “push the two fields of psychiatry into closer union.”

As for that long-running division between biological and psychological treatments of depression, Riley notes that the successors of Freud and Abraham—in their work with patients using psychoanalysis to excavate the early experiences thought to be the primary cause of mental suffering—didn’t necessarily oppose medical interventions for depression, but they typically resorted to them only after the long process of analysis failed. Even today there is sometimes a bias against pharmacological treatments—a perpetuation of an old stigma that can get in the way of what could be a more collaborative approach.

In tracing the many forms of talk therapy that grew out of psychoanalysis, Riley discusses the psychiatrist Aaron Beck, who in the late 1950s proposed a new theory: depression was a product not, as Freud had thought, of “anger turned inward,” but rather of a negative view of one’s current circumstances. Beck’s “cognitive” approach to therapy proposed to help patients recast their perceptions, using one-on-one or group sessions with therapists to guide them toward more forgiving assessments of themselves and others.

This method was initially challenged both by psychiatrists who preferred to rely on their growing pharmacopeia and by behavioral psychologists such as Joseph Wolpe and B.F. Skinner, who believed that human behavior was essentially a collection of learned reactions to external stimuli. Despite this early competition, the cognitive and behavioral schools of psychology eventually merged to produce CBT, which is now so common that it is what most people mean by “therapy.” (Beck, who died last November at the age of one hundred, once joked that the lead character of The Sopranos, the mob boss and sometime psychoanalysis patient Tony Soprano, could have been cured of his panic attacks with two sessions of CBT.)

Among the people Riley interviews is Myrna Weissman, a professor of psychiatry and epidemiology at Columbia University, who with Gerald Klerman in the 1960s and 1970s pioneered the form of talk therapy known as interpersonal psychotherapy. Similar to cognitive therapy, it considers the quality of patients’ social relationships as well as their perceptions of themselves and others. “We were friends,” she says of Beck, whose photograph is displayed in her office. “We were not competing.” Her expression of collegiality feels like a radical departure from a hundred years of internecine rivalries.

“There shouldn’t be one psychotherapy,” Weissman tells Riley. “I think that people who say, ‘This is the psychotherapy,’ are doing a disservice to patients. It’s like saying there should be only Prozac.”

In the 1980s Weissman realized that while psychiatrists had often assumed that depression was a first-world problem—a side effect of modern urban life—there were few if any data on how common it was worldwide, or who was most likely to suffer from it. She and other researchers soon confirmed that depression was at least as common in poorer communities and nations, and that in some countries Western investigators hadn’t recognized it simply because it traveled under different names. Riley’s discussion of this research and its consequences is one of the most moving sections of his book.

Speakers of Luganda, the most common indigenous language in Uganda, don’t have a word for “depression.” They use the terms yo’kwekyawa and okwekubazida, which roughly translate as “self-loathing” and “self-pity” and describe two distinct conditions; the former, which can include thoughts of suicide, is considered more severe. In Zimbabwe in the 1990s, researchers learned that the local Shona language had one word for everyday sadness (suwa) and another for a persistent, ruminative state that fit the clinical description of depression. This term, kufungisisa, which literally translates to “thinking too much,” unlocked communication between practitioners and patients.

Brent Stirton/Getty Images

A friendship bench, Masvingo, Zimbabwe, January 2020

In the early 2000s the Zimbabwean psychiatrist Dixon Chibanda recognized that his rural patients, many of whom were severely stressed by poverty, had almost no access to professional mental health care. His response, the Friendship Bench program, trains lay health workers, often grandmothers, to deliver problem-solving talk therapy on wooden benches outside local clinics.

Riley cannot report a cure for depression, but he does show that there have been some modest advances in how we think about and treat it—including an increased awareness of the value of collaborative approaches. “We can’t kill depression,” Riley writes. “But, with treatment, we can stop it from killing us.”

