Monday, August 31, 2020

I’m Still Reading Andrew Sullivan. But I Can’t Defend Him.

He’s one of the most influential journalists of the last three decades, but he’s shadowed by a 1994 magazine cover story that claimed to show a link between race and I.Q.

By Ben Smith, The New York Times



PROVINCETOWN, Mass. — The only restaurant open late on this sandy tip of Cape Cod is Spiritus Pizza, and men have long gathered on the mottled bricks in front to hang out, gossip and cruise. One night last week, some were blasting disco onto Commercial Street when a slightly stoned Englishman with a salt-and-pepper beard approached them and demanded politely but forcefully that they turn it down.

It was Andrew Sullivan, seeking order.

Mr. Sullivan hasn’t changed much since he arrived in Washington in 1986 with an internship at The New Republic and a veneration of Margaret Thatcher. Among many other convictions, he believes in safe, lawful and relatively quiet streets. I was sitting with him on the back porch of the tiny, yellow cottage he owns here when videos of unrest from Kenosha, Wis., crossed his Twitter feed.

“If the civil authorities are permissive of violence, then that’s a signal to people to commit violence,” he told me, winding himself up for the dire newsletter he would write later in the week. “The idea that it’ll just burn itself out — it just doesn’t work that way,” he said.

I came to Provincetown to better understand why Mr. Sullivan, 57, one of the most influential journalists of his generation and an obvious influence in my own career, is not as welcome as he once was at many mainstream media outlets. But my visit helped me see something more: how Mr. Sullivan is really a fixed point by which we can measure how far American media has moved. He finds himself now on the outside, most of all, because he cannot be talked out of views on race that most of his peers find abhorrent. I know, because I tried.

He was a star in his 20s, when he ran The New Republic, so celebrated that he posed for Annie Leibovitz in a Gap ad in a white T-shirt and a memorably coy expression. He was a master of provocations there that included one that defined him, arguing long before it was part of mainstream political debate that same-sex couples should have the right to marry. But he also published a cover story, an excerpt from “The Bell Curve,” that claimed to show a link between race and I.Q., a decision that has increasingly consumed his legacy.

Mr. Sullivan trended on Twitter on Friday, as his critics there took a paragraph out of context in the uncharitable way people do on social media to suggest that his cries against civil unrest made him a “fascist.” He was trying to argue the opposite: that law and civility are what make democracy possible.

He was, if anything, early to the anti-fascist cause. The chief author of the 2014 Senate torture report, Daniel Jones, told me that Mr. Sullivan’s work helped lead America away from torture. Mr. Sullivan has been warning for years of the Republican Party’s authoritarian turn. And he was among the most prescient about Donald Trump when, in 2016, he described his rise as an “extinction level event” for American democracy in a New York magazine cover story.

But Mr. Sullivan is, as his friend Johann Hari once wrote, “happiest at war with his own side,” and in the Trump era, he increasingly used the weekly column he began writing in New York magazine in 2016 to dial up criticism of the American left. When the magazine politely showed him the door last month, Mr. Sullivan left legacy media entirely and began charging his fans directly to read his column through the newsletter platform Substack, increasingly a home for star writers who strike out on their own.

He was not, he emphasizes, “canceled.” In fact, he said, his income has risen from less than $200,000 to around $500,000 as a result of the move.

“I’m not playing that card, deliberately,” he told me. “I just think it’s a shame that readers of mainstream newspapers and magazines can’t hear a dissent.”

Mr. Sullivan isn’t really vulnerable to cancellation. He has been around too long, wielded too much influence even to be easily summarized. His access to both a huge online audience and the covers of prestigious magazines has brought him an unusual kind of power. Most people who read him find him at times prescient, at times unhinged. He burned bridges with Republican friends when he publicly doubted Sarah Palin’s announcement in 2008 that she was pregnant. (He still doesn’t buy her story.) The writer Ta-Nehisi Coates, whose blog Mr. Sullivan once promoted when they were both at The Atlantic, wrote that more than any other writer, “Andrew Sullivan taught me how to think publicly,” but also that he didn’t see “me completely as a human being” because of his race.

Mr. Sullivan was in his way among America’s first out gay celebrities, and his largest impact was on gay rights. He seems especially grounded here in Provincetown, where he first spent a summer in 1989, the same year he published the cover story making “The Case for Gay Marriage” in The New Republic. He returned in 1994, joining other H.I.V.-positive men who moved here at the time expecting to die from AIDS. He worked on a book on same-sex marriage that he hoped would be his legacy.

He survived, published the book and left The New Republic in 1996. A $100,000 annual contract for a weekly column in The Times of London allowed him to start The Dish in 2000, which helped create the political blogosphere, with its frantic pace, wide open conversation, and all-in takedowns, called fisking, at which he excelled.

These days, he is a local mainstay, a Birkenstock-wearing bicyclist among the Pete Buttigieg T-shirts, and generally a good-natured one, as long as it’s not too noisy. He takes particular pride that a leading local drag queen, Ryan Landry, wrote him into a song, with a description of a sex act so enthusiastic that Mr. Sullivan told me, accurately, “you can’t print this in The New York Times.”


Even here, he has his critics, in the often insular world of L.G.B.T.Q. politics. “He’s the first top-down gay figurehead who was selected by corporate media,” said the writer and activist Sarah Schulman.

Mr. Sullivan, of course, never pretended to be a grass-roots activist. He’s a proud member of the elite, and his case for marriage was partly conservative — that it would be, as he told me, a “civilizing” influence on gay men who he believed had been emotionally damaged by discrimination. He testified before Congress on marriage equality in 1996, and when a moderate Democrat, Chuck Robb, voted against the Defense of Marriage Act, he quoted Mr. Sullivan.

“The core conflicts that really persisted through the 1980s were about assimilation versus liberation,” said Sasha Issenberg, the author of a recent history of the marriage battle. “The assimilationists won, and Andrew was unquestionably a leader.”

Top-down media influence can be hard to measure, but I know I felt it: As a local news reporter in the early 2000s, I learned about the marriage issue from Mr. Sullivan’s blog. And I pushed New York politicians to take a stand on it, in part because his writing persuaded me it was important, and in part because I wanted one of the biggest bloggers in the country to link to my stories.

The marriage question is so settled now that Mr. Sullivan’s role feels like ancient history. It’s also rarely noted these days that he played as large a role as any journalist in the rise of Barack Obama. His 2007 Atlantic cover story “Why Obama Matters” arrived while many Democrats were on the fence about the Illinois senator, and it helped sway elite opinion and money his way.

“He articulated the rationale for Obama before it was widely apparent,” said Ben Rhodes, who was then one of Mr. Obama’s speechwriters.

The admiration was mutual. When The Dish was moved behind a paywall in 2013, a White House aide passed on a complaint to Mr. Sullivan: Mr. Obama was locked out of a favorite blog. Mr. Sullivan scrambled to set up a special account for the president.

The president and other readers clearly relished what was always exciting about Mr. Sullivan: He was a contrarian, but an intellectually alive one, with eclectic views on Catholicism and social media and beagles that saved him from monotonous provocation. The editor Adam Moss, who ran New York until last year, viewed him as a rare talent and helped him keep his big platform.

But as the American examination of racism has intensified, one of Mr. Sullivan’s convictions has grown further out of step and more unsettling even to those inclined to disagree agreeably with him.

When The Times published an article as part of its “1619” package last year about how old racist beliefs about Black people’s pain tolerance linger in modern medicine, Mr. Sullivan sent the author, Linda Villarosa, an arch note through her website. She’d written in passing of the stereotype “that Black people had large sex organs,” and he asked whether there was data on sex organs that “show that it is a myth.” She forwarded the email to the project’s leader, Nikole Hannah-Jones, asking if the note might have been a prank. In fact, Mr. Sullivan was up late, and tipsy, in London when he sent it, he told me, and meant it as a kind of “gay joke.”

Ms. Villarosa told me she found it “bizarre” and “unkind” to send a jokey email asking her to prove a negative in response to an article about a “corrosive myth that got people killed.”

Then Ms. Hannah-Jones hit him with it on Twitter in the course of a dispute on the 1619 Project.

The flap reminded his colleagues and critics of Mr. Sullivan’s original sin, his decision to put on the cover of the Oct. 31, 1994, New Republic a package titled “Race and I.Q.” The package led with an excerpt from the book “The Bell Curve” by the political scientist Charles Murray and psychologist Richard Herrnstein. They claimed that I.Q. test results are in large part hereditary and reveal differences among races; it produced piles of scientific debunkings. Many — including contributors whom Mr. Sullivan invited to object — saw the piece as a thinly veiled successor to the junk science used to justify American and European racism for decades. Politically, it offered elites an explanation for racial inequality that wasn’t the legacy of slavery, or class, or racism, or even culture, and thus absolved them of the responsibility to fix it. The authors “found a way for racists to rationalize their racism without losing sleep over it,” the political scientist Alan Wolfe wrote in a response in The New Republic.

When George Floyd was killed in Minneapolis in May, Mr. Sullivan said, his editors asked him to “be careful,” suspecting that his views on race in America would not be palatable to their audience in that moment, two senior New York employees told me. He decided instead to take the week off from column writing.

In the previous year, Mr. Sullivan had focused his ire on the politics of race and identity, seemingly relishing the chance to challenge what he saw as an increasingly “woke” mainstream media. But “The Bell Curve” excerpt — which Mr. Sullivan always says that he published but did not embrace — lingered over those pieces and framed criticism of him. One fellow writer, Sarah Jones, called him the “office bigot” on Twitter. The new editor of New York, David Haskell, didn’t push him out because of any new controversy or organized staff revolt, the two New York employees said. Instead, the shift in culture had effectively made his publishing of “The Bell Curve” excerpt — and the fact that he never disavowed it — a firing offense, and Mr. Haskell showed Mr. Sullivan the door before the magazine experienced a blowup over race of the sort that has erupted at other publications.

So what does Mr. Sullivan believe about race? On his back porch looking over the bay, Mr. Sullivan said he was frustrated by the most extreme claims that biology has no connection to our lives. He believes, for instance, that Freudian theories that early childhood may push people toward homosexuality could have some merit, combined with genetics.

“Everything is environmental for the left except gays, where it’s totally genetic; and everything is genetic for the right, except for gays,” he said sarcastically.

I tried out my most charitable interpretation of his view on race and I.Q. (though I question the underpinnings of the whole intellectual project): that he is most frustrated by the notion that you can’t talk about the influence of biology and genetics on humanity. But that he’s not actually saying he thinks Black people as a group are less intelligent. He’d be equally open to the view, I suggested, that data exploring genetics and its connection to intelligence would find that Black people are on average smarter than other groups.

“It could be, although the evidence is not trending in that direction as far as I pay attention to it. But I don’t much,” he said. (He later told me he’s “open-minded” on the issue and thinks it’s “premature” to weigh the data.)

“I barely write about this,” he went on. “It’s not something I’m obsessed with.”

But he also can’t quite stop himself, even as I sat there wishing he would. “Let’s say Jews. I mean, just look at the Nobel Prize. I’m just saying — there’s something there, I think. And I’m not sure what it is, but I’m just not prepared to accept the whole thing is over.”

Photo: Mr. Sullivan says that he published the “Bell Curve” excerpt but did not embrace it. (David Degner for The New York Times)

I’ve been reading Mr. Sullivan too long to write him off. I’ve been influenced deeply by him on marriage, torture and other big questions; and I’m aware of how deeply he shaped how we all write for the web. When I nodded along with much of Jamelle Bouie’s criticism of Mr. Sullivan in 2017, I also recognized in Mr. Bouie’s piece the style of fisking that Mr. Sullivan helped popularize almost two decades ago.

I wish Mr. Sullivan would accept that the project of trying to link the biological fiction of race with the science of genetics ought, in fact, to be over.

When I said some of this to Mr. Sullivan, he noted that he had been born and raised in England, and he hasn’t always had perfect footing on American questions of race — though he has seemingly absorbed and mastered so much about American politics.

But his exit from big media is a very American story. His career, with all its sweep and innovation, can’t ever quite escape that 1994 magazine cover.

Ben Smith is the media columnist. He joined The Times in 2020 after eight years as founding editor in chief of BuzzFeed News. Before that, he covered politics for Politico, The New York Daily News, The New York Observer and The New York Sun. Email: ben.smith@nytimes.com @benyt

A version of this article appears in print on Aug. 31, 2020, Section B, Page 1 of the New York edition with the headline: The Conservative Pundit With a Legacy From 1994.

Saturday, August 29, 2020

The Brain Implants That Could Change Humanity

Opinion

Brains are talking to computers, and computers to brains. Are our daydreams safe?

Image credit: Derrick Schultz


By Moises Velasquez-Manoff, The New York Times

Contributing Opinion Writer

Jack Gallant never set out to create a mind-reading machine. His focus was more prosaic. A computational neuroscientist at the University of California, Berkeley, Dr. Gallant worked for years to improve our understanding of how brains encode information — what regions become active, for example, when a person sees a plane or an apple or a dog — and how that activity represents the object being viewed. 

By the late 2000s, scientists could determine what kind of thing a person might be looking at from the way the brain lit up — a human face, say, or a cat. But Dr. Gallant and his colleagues went further. They figured out how to use machine learning to decipher not just the class of thing, but which exact image a subject was viewing. (Which photo of a cat, out of three options, for instance.) 

One day, Dr. Gallant and his postdocs got to talking. In the same way that you can turn a speaker into a microphone by hooking it up backward, they wondered if they could reverse engineer the algorithm they’d developed so they could visualize, solely from brain activity, what a person was seeing. 

The first phase of the project was to train the AI. For hours, Dr. Gallant and his colleagues showed volunteers in fMRI machines movie clips. By matching patterns of brain activation prompted by the moving images, the AI built a model of how the volunteers’ visual cortex, which parses information from the eyes, worked. Then came the next phase: translation. As they showed the volunteers movie clips, they asked the model what, given everything it now knew about their brains, it thought they might be looking at. 
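The two-phase recipe above lends itself to a toy illustration. What follows is a minimal Python sketch, not the Gallant lab's actual pipeline: synthetic "stimulus features" stand in for movie frames, ridge regression stands in for the encoding model, and decoding simply asks which candidate clip best explains a new pattern of voxel activity. Every name and number here is hypothetical.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Phase 1 (training): pair stimulus features for each movie moment with the
# fMRI voxel responses they evoked, and fit an "encoding model" that
# predicts brain activity from the stimulus.
n_train, n_features, n_voxels = 500, 64, 200
stim = rng.normal(size=(n_train, n_features))      # hypothetical image features
weights = rng.normal(size=(n_features, n_voxels))  # ground truth, unknown in real life
voxels = stim @ weights + rng.normal(scale=0.5, size=(n_train, n_voxels))
encoder = Ridge(alpha=1.0).fit(stim, voxels)

# Phase 2 (translation): given new brain activity, score a library of
# candidate clips by how well their predicted responses match what we observe.
library = rng.normal(size=(100, n_features))       # features of 100 candidate clips
truth = 42                                         # the clip actually being watched
observed = library[truth] @ weights + rng.normal(scale=0.5, size=n_voxels)

predicted = encoder.predict(library)               # shape (100, n_voxels)
errors = ((predicted - observed) ** 2).sum(axis=1)
print("decoded clip:", int(errors.argmin()))       # ideally 42

In the real experiment the library was enormous and the model far richer, but the logic (predict forward, then search backward over candidates) is the same.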

The experiment focused just on a subsection of the visual cortex. It didn’t capture what was happening elsewhere in the brain — how a person might feel about what she was seeing, for example, or what she might be fantasizing about as she watched. The endeavor was, in Dr. Gallant’s words, a primitive proof of concept. 

And yet the results, published in 2011, are remarkable. 

The reconstructed images move with a dreamlike fluidity. In their imperfection, they evoke expressionist art. (And a few reconstructed images seem downright wrong.) But where they succeed, they represent an astonishing achievement: a machine translating patterns of brain activity into a moving image understandable by other people — a machine that can read the brain. 

Dr. Gallant was thrilled. Imagine the possibilities when better brain-reading technology became available. Imagine the people suffering from locked-in syndrome or Lou Gehrig’s disease, the people incapacitated by strokes, who could benefit from a machine that could help them interact with the world.


He was also scared because the experiment showed, in a concrete way, that humanity was at the dawn of a new era, one in which our thoughts could theoretically be snatched from our heads. What was going to happen, Dr. Gallant wondered, when you could read thoughts the thinker might not even be consciously aware of, when you could see people’s memories? 

“That’s a real sobering thought that now you have to take seriously,” he told me recently. 

The ‘Google Cap’ 

For decades, we’ve communicated with computers mostly by using our fingers and our eyes, by interfacing via keyboards and screens. These tools and the bony digits we prod them with provide a natural limit to the speed of communication between human brain and machine. We can convey information only as quickly (and accurately) as we can type or click. 

Voice recognition, like that used by Apple’s Siri or Amazon’s Alexa, is a step toward more seamless integration of human and machine. The next step, one that scientists around the world are pursuing, is technology that allows people to control computers — and everything connected to them, including cars, robotic arms and drones — merely by thinking. 

Dr. Gallant jokingly calls the imagined piece of hardware that would do this a “Google cap”: a hat that could sense silent commands and prompt computers to respond accordingly. 

The problem is that, to work, that cap would need to be able to see, with some detail, what’s happening in the nearly 100 billion neurons that make up the brain. 

Technology that can easily peer through the skull, like the MRI machine, is far too unwieldy to mount on your head. Less bulky technology, like electroencephalography, or E.E.G., which measures the brain’s electrical activity through electrodes attached to the scalp, doesn’t provide nearly the same clarity. One scientist compares it to looking for the surface ripples made by a fish swimming underwater while a storm roils the lake.

Other methods of “seeing” into the brain might include magnetoencephalography, or M.E.G., which measures magnetic waves emanating outside the skull from neurons firing beneath it; or using infrared light, which can penetrate living tissue, to infer brain activity from changes in blood flow. (Pulse oximeters work this way, by shining infrared light through your finger.) 

What technologies will power the brain-computer interface of the future is still unclear. And if it’s unclear how we’ll “read” the brain, it’s even less clear how we’ll “write” to it. 

This is the other holy grail of brain-machine research: technology that can transmit information to the brain directly. We’re probably nowhere near the moment when you can silently ask, “Alexa, what’s the capital of Peru?” and have “Lima” materialize in your mind. 

Even so, solutions to these challenges are beginning to emerge. Much of the research has occurred in the medical realm, where, for years, scientists have worked incrementally toward giving quadriplegics and others with immobilizing neurological conditions better ways of interacting with the world through computers. But in recent years, tech companies — including Facebook, Microsoft and Elon Musk’s Neuralink — have begun investing in the field. 

Some scientists are elated by this infusion of energy and resources. Others worry that as this tech moves into the consumer realm, it could have a variety of unintended and potentially dangerous consequences, from the erosion of mental privacy to the exacerbation of inequality. 

Rafael Yuste, a neurobiologist at Columbia University, counts two great advances in computing that have transformed society: the transition from room-size mainframe computers to personal computers that fit on a desk (and then in your lap), and the advent of mobile computing with smartphones in the 2000s. Noninvasive brain-reading tech would be a third great leap, he says. 

“Forget about the Covid crisis,” Dr. Yuste told me. “What’s coming with this new tech can change humanity.” 

Dear Brain 

Not many people will volunteer to be the first to undergo a novel kind of brain surgery, even if it holds the promise of restoring mobility to those who’ve been paralyzed. So when Robert Kirsch, the chairman of biomedical engineering at Case Western Reserve University, put out such a call nearly 10 years ago, and one person both met the criteria and was willing, he knew he had a pioneer on his hands. 

The man’s name was Bill Kochevar. He’d been paralyzed from the neck down in a biking accident years earlier. His motto, as he later explained it, was “somebody has to do the research.” 

At that point, scientists had already invented gizmos that helped paralyzed patients leverage what mobility remained — lips, an eyelid — to control computers or move robotic arms. But Dr. Kirsch was after something different. He wanted to help Mr. Kochevar move his own limbs. 

The first step was implanting two arrays of sensors over the part of the brain that would normally control Mr. Kochevar’s right arm. Electrodes that could receive signals from those arrays via a computer were implanted into his arm muscles. The implants, and the computer connected to them, would function as a kind of electronic spinal cord, bypassing his injury. 

Once his arm muscles had been strengthened — achieved with a regimen of mild electrical stimulation while he slept — Mr. Kochevar, who at that point had been paralyzed for over a decade, was able to feed himself and drink water. He could even scratch his nose. 

About two dozen people around the world who have lost the use of limbs from accidents or neurological disease have had sensors implanted on their brains. Many, Mr. Kochevar included, participated in a United States government-funded program called BrainGate. The sensor arrays used in this research, smaller than a button, allow patients to move robotic arms or cursors on a screen just by thinking. But as far as Dr. Kirsch knows, Mr. Kochevar, who died in 2017 for reasons unrelated to the research, was the first paralyzed person to regain use of his limbs by way of this technology. 

This fall, Dr. Kirsch and his colleagues will begin version 2.0 of the experiment. This time, they’ll implant six smaller arrays — more sensors will improve the quality of the signal. And instead of implanting electrodes directly in the volunteers’ muscles, they’ll insert them upstream, circling the nerves that move the muscles. In theory, Dr. Kirsch says, that will enable movement of the entire arm and hand. 

The next major goal is to restore sensation so that people can know if they’re holding a rock, say, or an orange — or if their hand has wandered too close to a flame. “Sensation has been the longest ignored part of paralysis,” Dr. Kirsch told me. 

A few years ago, scientists at the University of Pittsburgh began groundbreaking experiments on that front with a man named Nathan Copeland who was paralyzed from the upper chest down. They routed sensory information from a robotic arm into the part of his cortex that dealt with his right hand’s sense of touch. 

Every brain is a living, undulating organ that changes over time. That’s why, before each of Mr. Copeland’s sessions, the AI has to recalibrate — to construct a new brain decoder. “The signals in your brain shift,” Mr. Copeland told me. “They’re not exactly the same every day.” 

And the results weren’t perfect. Mr. Copeland described them to me as “weird,” “electrical tingly” but also “amazing.” The sensory feedback was immensely important, though, in knowing that he’d actually grasped what he thought he’d grasped. And more generally, it demonstrated that a person could “feel” a robotic hand as his or her own, and that information coming from electronic sensors could be fed into the human brain. 

Preliminary as these experiments are, they suggest that the pieces of a brain-machine interface that can both “read” and “write” already exist. People can not only move robotic arms just by thinking; machines can also, however imperfectly, convey information to the brain about what that arm encounters.

Who knows how soon versions of this technology will be available for kids who want to think-move avatars in video games or think-surf the web. People can already fly drones with their brain signals, so maybe crude consumer versions will appear in coming years. But it’s hard to overstate how life-changing such tech could be for people with spinal cord injuries or neurological diseases. 

Edward Chang, a neurosurgeon at the University of California, San Francisco, who works on brain-based speech recognition, said that maintaining the ability to communicate can mean the difference between life and death. “For some people, if they have a means to continue to communicate, that may be the reason they decide to stay alive,” he told me. “That motivates us a lot in our work.”

In a recent study, Dr. Chang and his colleagues predicted with up to 97 percent accuracy — the best rate yet achieved, they say — what words a volunteer had said (from about 250 words used in a predetermined set of 50 sentences) by using implanted sensors that monitored activity in the part of the brain that moves the muscles involved in speaking. (The volunteers in this study weren’t paralyzed; they were epilepsy patients undergoing brain surgery to address that condition, and the implants were not permanent.) 
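Stripped of its neuroscience, the task has the shape of ordinary supervised classification: learn a mapping from patterns of cortical activity to one of 50 known sentences. The toy sketch below uses synthetic data and a plain logistic-regression classifier; it is not the study's actual decoder, and everything here, from feature counts to noise levels, is made up.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_sentences, trials, n_features = 50, 10, 128

# Pretend each sentence evokes a characteristic (but noisy) activity
# pattern in the speech-motor cortex.
prototypes = rng.normal(size=(n_sentences, n_features))
X = np.repeat(prototypes, trials, axis=0)
X += rng.normal(scale=0.8, size=X.shape)
y = np.repeat(np.arange(n_sentences), trials)

clf = LogisticRegression(max_iter=2000).fit(X, y)

# Decode a fresh utterance of sentence 7 from its neural trace alone.
new_trial = prototypes[7] + rng.normal(scale=0.8, size=n_features)
print("decoded sentence id:", int(clf.predict(new_trial.reshape(1, -1))[0]))

A closed vocabulary is what makes the high accuracy possible: the decoder only has to tell 50 known sentences apart, not transcribe open-ended speech.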

Dr. Chang used sensor arrays similar to those Dr. Kirsch used, but a noninvasive method may not be too far away. 

Facebook, which funded Dr. Chang’s study, is working on a brain-reading helmet-like contraption that uses infrared light to peer into the brain. Mark Chevillet, the director of brain-computer interface research at Facebook Reality Labs, told me in an email that while full speech recognition remains distant, his lab will be able to decode simple commands like “home,” “select” and “delete” in “coming years.” 

This progress isn’t solely driven by advances in brain-sensing technology — by the physical meeting point of flesh and machine. The AI matters as much, if not more. 

Trying to understand the brain from outside the skull is like trying to make sense of a conversation taking place two rooms away. The signal is often messy, hard to decipher. So the same types of algorithms that now allow speech-recognition software to do a decent job of understanding spoken language — including individual idiosyncrasies of pronunciation and regional accents — may now enable brain-reading technology.

Zap That Urge 

Not all the applications of brain reading require something as complex as understanding speech, however. In some cases, scientists simply want to blunt urges. 

When Casey Halpern, a neurosurgeon at Stanford, was in college, he had a friend who drank too much. Another was overweight but couldn’t stop eating. “Impulse control is such a pervasive problem,” he told me. 

As a budding scientist, he learned about methods of deep brain stimulation used to treat Parkinson’s disease. A mild electric current applied to a part of the brain involved in movement could lessen tremors caused by the disease. Could he apply that technology to the problem of inadequate self-control? 

Working with mice in the 2010s, he identified a part of the brain, called the nucleus accumbens, where activity spiked in a predictable pattern just before a mouse was about to gorge on high-fat food. He found he could reduce how much the mouse ate by disrupting that activity with a mild electrical current. He could zap the compulsion to gorge as it was taking hold in the rodents’ brains. 

Earlier this year, he began testing the approach in people suffering from obesity who haven’t been helped by any other treatment, including gastric-bypass surgery. He implants an electrode in their nucleus accumbens. It’s connected to an apparatus that was originally developed to prevent seizures in people with epilepsy. 

As with Dr. Chang’s or Dr. Gallant’s work, an algorithm first has to learn about the brain it’s attached to — to recognize the signs of oncoming loss of control. Dr. Halpern and his colleagues train the algorithm by giving patients a taste of a milkshake, or offering a buffet of the patient’s favorite foods, and then recording their brain activity just before the person indulges.
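That loop (record, label the moments just before indulgence, learn a signature, then watch for it in real time) is closed-loop detection, and its skeleton is simple enough to sketch. The following is a deliberately naive matched-filter version with entirely hypothetical signals and thresholds, not Dr. Halpern's actual system.

import numpy as np

rng = np.random.default_rng(2)

# Training data: windows of nucleus-accumbens activity recorded just before
# the patient indulged (craving) versus ordinary moments (baseline).
craving = rng.normal(loc=2.0, size=(100, 16))
baseline = rng.normal(loc=0.0, size=(100, 16))

# Learn a simple linear "signature" of oncoming loss of control and set the
# threshold halfway between the two classes' average scores.
signature = craving.mean(axis=0) - baseline.mean(axis=0)
threshold = 0.5 * ((craving @ signature).mean() + (baseline @ signature).mean())

def should_stimulate(window: np.ndarray) -> bool:
    """True if the current activity window matches the learned signature,
    i.e., the moment to deliver a mild corrective pulse."""
    return float(window @ signature) > threshold

# Simulated live operation:
print(should_stimulate(rng.normal(loc=2.0, size=16)))   # craving-like, True
print(should_stimulate(rng.normal(loc=0.0, size=16)))   # baseline, False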

He’s so far completed two implantations. “The goal is to help restore control,” he told me. And if it works in obesity, which afflicts roughly 40 percent of adults in the United States, he plans to test the gizmo against addictions to alcohol, cocaine and other substances. 

Dr. Halpern’s approach takes as fact something that he says many people have a hard time accepting: that the lack of impulse control that may underlie addictive behavior isn’t a choice, but results from a malfunction of the brain. “We have to accept that it’s a disease,” he says. “We often just judge people and assume it’s their own fault. That’s not what the current research is suggesting we should do.” 

I must confess that of the numerous proposed applications of brain-machine interfacing I came across, Dr. Halpern’s was my favorite to extrapolate on. How many lives have been derailed by the inability to resist the temptation of that next pill or that next beer? What if Dr. Halpern’s solution was generalizable? 

What if every time your mind wandered off while writing an article, you could, with the aid of your concentration implant, prod it back to the task at hand, finally completing those life-changing projects you’ve never gotten around to finishing? 

These applications remain fantasies, of course. But the mere fact that such a thing may be possible is partly what prompts Dr. Yuste, the neurobiologist, to worry about how this technology could blur the boundaries of what we consider to be our personalities. 

Such blurring is already an issue, he points out. Parkinson’s patients with implants sometimes report feeling more aggressive than usual when the machine is “on.” Depressed patients undergoing deep brain stimulation sometimes wonder if they’re really themselves anymore. “You kind of feel artificial,” one patient told researchers. The machine isn’t implanting ideas in their minds, like Leonardo DiCaprio’s character in the movie “Inception,” but it is seemingly changing their sense of self. 

What happens if people are no longer sure if their emotions are theirs, or the effects of the machines they’re connected to? 

Dr. Halpern dismisses these concerns as overblown. Such effects are part of many medical treatments, he points out, including commonly prescribed antidepressants and stimulants. And sometimes, as in the case of hopeless addiction, changing someone’s behavior is precisely the goal. 

Still, the longer-term issue of what could happen when brain-writing technology jumps from the medical into the consumer realm is hard to forget. If my imagined focus enhancer existed, for example, but was very expensive, it could exacerbate the already yawning chasm between those who can afford expensive tutors, cars and colleges — and now grit-boosting technology — and those who cannot.

“Certain groups will get this tech, and will enhance themselves,” Dr. Yuste told me. “This is a really serious threat to humanity.” 

The Brain Business 

“The idea that you have to drill holes in skulls to read the brains is nuts,” Mary Lou Jepsen, the chief executive and founder of Openwater, told me in an email. Her company is developing technology that, she says, uses infrared light and ultrasonic waves to peer into the body. 

Other researchers are simply trying to make invasive approaches less invasive. A company called Synchron seeks to avoid opening the skull or touching brain tissue at all by inserting a sensor through the jugular vein in the neck. It’s currently undergoing a safety and feasibility trial. 

Dr. Kirsch suspects that Elon Musk’s Neuralink is probably the best brain-sensing tech in development. It requires surgery, but unlike the BrainGate sensor arrays, it’s thin, flexible and can adjust to the mountainous topography of the brain. The hope is that this makes it less damaging to the surrounding tissue. It also has hairlike filaments that sink into brain tissue. Each filament contains multiple sensors, theoretically allowing the capture of more data than flatter arrays that sit at the brain’s surface. It can both read and write to the brain, and it’s accompanied by a robot that assists with the implantation.

A major challenge to implants is that, as Dr. Gallant says, “your brain doesn’t like having stuff stuck in your brain.” Over time, immune cells may swarm the implant, covering it with goop. 

One way to try to avoid this is to drastically shrink the size of the sensors. Arto Nurmikko, a professor of engineering and physics at Brown University who’s part of the BrainGate effort, is developing what he calls “neurograins” — tiny, implantable silicon sensors no larger than a handful of neurons. They’re too small to have batteries, so they’re powered by microwaves beamed in from outside the skull. 

He foresees maybe 1,000 mini sensors implanted throughout the brain. He’s so far tested them only in rodents. But maybe we shouldn’t be so sure that healthy people wouldn’t volunteer for “mental enhancement” surgery. Every year, Dr. Nurmikko poses a hypothetical to his students: 1,000 neurograin implants that would allow students to learn and communicate faster; any volunteers? 

“Typically about half the class says, ‘Sure,’” he told me. “That speaks to where we are today.” 

Jose Carmena and Michel Maharbiz, scientists at Berkeley and founders of a start-up called Iota Biosciences, have their own version of this idea, which they call “neural dust”: tiny implants for the peripheral nervous system — arms, legs and organs besides the brain. “It’s like a Fitbit for your liver,” Dr. Carmena told me.

They imagine treating inflammatory diseases by stimulating nerves throughout the body with these tiny devices. And where Dr. Nurmikko uses microwaves to power the devices, Dr. Carmena and Dr. Maharbiz foresee the use of ultrasound to beam power to them. 

Generally, they say, this kind of tech will be adopted first in the medical context and then move to the lay population. “We’re going to evolve to augmenting humans,” Dr. Carmena told me. “There’s no question.” 

But hype permeates the field, he warns. Sure, Elon Musk has argued that closer brain-machine integration will help humans compete with ever-more-powerful A.I.s. But in reality, we’re nowhere near a device that could, for example, help you master Kung Fu instantaneously like Keanu Reeves in “The Matrix.” 

What does the near future look like for the average consumer? Ramses Alcaide, the chief executive of a company called Neurable, imagines a world in which smartphones tucked in our pockets or backpacks act as processing hubs for data streaming in from smaller computers and sensors worn around the body. These devices — glasses that serve as displays, earbuds that whisper in our ears — are where the actual interfacing between human and computer will occur. 

Microsoft sells a headset called HoloLens that superimposes images onto the world, an idea called “augmented reality.” A company called Mojo Vision is working toward a contact lens that projects monochrome images directly onto the retina, a private computer display superimposed over the world. 

And Dr. Alcaide himself is working on what he sees as the linchpin to this vision, a device that, one day, may help you to silently communicate with all your digital paraphernalia. He was vague about the form the product will take — it isn’t market ready yet — except to note that it’s an earphone that can measure the brain’s electrical activity to sense “cognitive states,” like whether you’re hungry or concentrating. 

We already compulsively check Instagram and Facebook and email, even though we’re supposedly impeded by our fleshy fingers. I asked Dr. Alcaide: What will happen when we can compulsively check social media just by thinking? 

Ever the optimist, he told me that brain-sensing technology could actually help with the digital incursion. The smart earbud could sense that you’re working, for instance, and block advertisements or phone calls. “What if your computer knew you were focusing?” he told me. “What if it actually removes bombardment from your life?” 

Maybe it’s no surprise that Dr. Alcaide has enjoyed the HBO sci-fi show “Westworld,” a universe where technologies that make communicating with computers more seamless are commonplace (though no one seems better off for it). Rafael Yuste, on the other hand, refuses to watch the show. He likens the idea to a scientist who studies Covid-19 watching a movie about pandemics. “It’s the last thing I want to do,” he says. 

‘A Human Rights Issue’ 

To grasp why Dr. Yuste frets so much about brain-reading technology, it helps to understand his research. He helped pioneer a technology that can read and write to the brain with unprecedented precision, and it doesn’t require surgery. But it does require genetic engineering. 

Dr. Yuste infects mice with a virus that inserts two genes into the animals’ neurons. One prompts the cells to produce a protein that makes them sensitive to infrared light; the other makes the neurons emit light when they activate. Thereafter, when the neurons fire, Dr. Yuste can see them light up. And he can activate neurons in turn with an infrared laser. Dr. Yuste can thus read what’s happening in the mouse brain and write to the mouse’s brain with an accuracy impossible with other techniques.

And he can, it appears, make the mice “see” things that aren’t there. 

In one experiment, he trained mice to take a drink of sugar water after a series of bars appeared on a screen. He recorded which neurons in the visual cortex fired when the mice saw those bars. Then he activated those same neurons with the laser, but without showing them the actual bars. The mice had the same reaction: They took a drink. 

He likens what he did to implanting a hallucination. “We were able to implant into these mice perceptions of things that they hadn’t seen,” he told me. “We manipulated the mouse like a puppet.”

This method, called optogenetics, is a long way from being used in people. To begin with, we have thicker skulls and bigger brains, making it harder for infrared light to penetrate. And from a political and regulatory standpoint, the bar is high for genetically engineering human beings. But scientists are exploring workarounds — drugs and nanoparticles that make neurons receptive to infrared light, allowing precise activation of neurons without genetic engineering. 

The lesson in Dr. Yuste’s view is not that we’ll soon have lasers mounted on our heads that play us “like pianos,” but that brain-reading and possibly brain-writing technologies are fast approaching, and society isn’t prepared for them. 

“We think this is a human rights issue,” he told me. 

In a 2017 paper in the journal Nature, Dr. Yuste and 24 other signatories, including Dr. Gallant, called for the formulation of a human rights declaration that explicitly addressed “neurorights” and what they see as the threats posed by brain-reading technology before it becomes ubiquitous. Information taken from people’s brains should be protected like medical data, Dr. Yuste says, and not exploited for profit or worse. And just as people have the right not to self-incriminate with speech, we should have the right not to self-incriminate with information gleaned from our brains. 

Dr. Yuste’s activism was prompted in part, he told me, by the large companies suddenly interested in brain-machine research. 

Say you’re using your Google Cap. And like many products in the Google ecosystem, it collects information about you, which it uses to help advertisers target you with ads. Only now, it’s not harvesting your search results or your map location; it’s harvesting your thoughts, your daydreams, your desires. 

Who owns those data? 

Or imagine that writing to the brain is possible. And there are lower-tier versions of brain-writing gizmos that, in exchange for their free use, occasionally “make suggestions” directly to your brain. How will you know if your impulses are your own, or if an algorithm has stimulated that sudden craving for Ben & Jerry’s ice cream or Gucci handbags? 

“People have been trying to manipulate each other since the beginning of time,” Dr. Yuste told me. “But there’s a line that you cross once the manipulation goes directly to the brain, because you will not be able to tell you are being manipulated.” 

When I asked Facebook about concerns around the ethics of big tech entering the brain-computer interface space, Mr. Chevillet, of Facebook Reality Labs, highlighted the transparency of its brain-reading project. “This is why we’ve talked openly about our B.C.I. research — so it can be discussed throughout the neuroethics community as we collectively explore what responsible innovation looks like in this field,” he said in an email. 

Ed Cutrell, a senior principal researcher at Microsoft, which also has a B.C.I. program, emphasized the importance of treating user data carefully. “There needs to be a clear sense of where that information goes,” he told me. “As we are sensing more and more about people, to what extent is that information I’m collecting about you yours?”

Some find all this talk of ethics and rights, if not irrelevant, then at least premature. 

Medical scientists working to help paralyzed patients, for example, are already governed by HIPAA laws, which protect patient privacy. Any new medical technology has to go through the Food and Drug Administration approval process, which includes ethical considerations. 

(Ethical quandaries still arise, though, notes Dr. Kirsch. Let’s say you want to implant a sensor array in a patient suffering from locked-in syndrome. How do you get consent to conduct surgery that might change the person’s life for the better from someone who can’t communicate?) 

Leigh Hochberg, a professor of engineering at Brown University and part of the BrainGate initiative, sees the companies now piling into the brain-machine space as a boon. The field needs these companies’ dynamism — and their deep pockets, he told me. Discussions about ethics are important, “but those discussions should not at any point derail the imperative to provide restorative neurotechnologies to people who could benefit from them,” he added. 

Ethicists, Dr. Jepsen told me, “must also see this: The alternative would be deciding we aren’t interested in a deeper understanding of how our minds work, curing mental disease, really understanding depression, peering inside people in comas or with Alzheimer’s, and enhancing our abilities in finding new ways to communicate.” 

There’s even arguably a national security imperative to plow forward. China has its own version of BrainGate. If American companies don’t pioneer this technology, some think, Chinese companies will. “People have described this as a brain arms race,” Dr. Yuste said. 

Not even Dr. Gallant, who first succeeded in translating neural activity into a moving image of what another person was seeing — and who was both elated and horrified by the exercise — thinks the Luddite approach is an option. “The only way out of the technology-driven hole we’re in is more technology and science,” he told me. “That’s just a cool fact of life.” 

Moises Velasquez-Manoff, the author of “An Epidemic of Absence: A New Way of Understanding Allergies and Autoimmune Diseases,” is a contributing opinion writer.

Tuesday, August 25, 2020

Gail Sheehy, Journalist, Author and Social Observer, Dies at 83

She looked at what makes public figures tick and, in her “Passages” books, how people of every stripe navigate life’s inevitable changes.
 

By Katharine Q. Seelye, The New York Times


Gail Sheehy, a journalist who plumbed the interior lives of public figures for clues to their behavior, examined societal trends as signposts of cultural shifts and, most famously, illuminated life changes in her book “Passages,” died on Monday at a hospital in Southampton, N.Y. She was 83.
Her daughter, Maura Sheehy, said the cause was complications of pneumonia.
Gail Sheehy, a lively participant in New York’s literary scene and a practitioner of creative nonfiction, studied anthropology with Margaret Mead. She applied those skills to explore the cultural upheaval of the 1960s and ’70s and to gain psychological insights into the newsmakers she profiled — among them Hillary Clinton, Margaret Thatcher, Mikhail S. Gorbachev and both Presidents Bush.
In articles for Vanity Fair and New York magazines, her specialty was connecting the dots of a biography to show how character was destiny.
 
She was a star writer at New York and later married its co-founder, Clay Felker, who encouraged her to write “big” stories. In one of her earliest articles, she traveled with Robert F. Kennedy’s 1968 presidential campaign. She wrote presciently about subjects that marked turns in the culture, including blended families and drug addiction.
 


Of her 17 books, the most prominent and influential was “Passages” (1976), which examined the predictable crises of adult life and how to use them as opportunities for creative change. It sold 10 million copies, was named by the Library of Congress as one of the 10 most influential books of modern times and remained on The New York Times’s best-seller list for more than three years.


As she noted in the book’s foreword, most studies of life’s mileposts were focused on children and older people, but she wanted to look at those in the vast middle. “The rest of us,” she wrote, “are out there in the mainstream of a spinning and distracted society, trying to make some sense of our one and only voyage through its ambiguities.”

But she offered hope to those struggling through middle age, concluding that “older is better.”
Ms. Sheehy built the concept of “passages” into a franchise, spinning off more books and articles that examined other stages of life: “The Silent Passage” (1992), about menopause; “New Passages: Mapping Your Life Across Time” (1995), which proclaimed middle age obsolete and explored new options after age 50; and “Understanding Men’s Passages” (1998).

She capped off the “passages” theme with two later books. After serving as the primary caregiver for several years for Mr. Felker, who died in 2008 at 82, she wrote “Passages of Caregiving: Turning Chaos into Confidence” (2010). And in “Daring: My Passages: A Memoir” (2014), she wrote about her own life, although many reviewers complained that she was not as revealing as they had hoped she would be.
 
“One senses, beneath the surface, something fascinating to be said about her complicated, ambivalent relationship with feminism,” Michelle Goldberg wrote in The New York Times Book Review.
 
“At one point,” Ms. Goldberg said, “she writes of her fear of being linked with radical feminists, ‘angry women whose resentment was turning the sterling silver concept of equal rights into corrosive man-hating sexual warfare.’ Yet in the very next paragraph she says: ‘Slow, incremental changes were not going to get us anywhere. But how could we show the world we were mounting nothing less than a revolution?’”
 
Photo: Ms. Sheehy, right, appeared with the feminist writer Gloria Steinem, left, on an urban affairs television program in 1970. The host was Roberta Hammond. (CBS via Getty Images)
Still, over a half century, Ms. Sheehy never lost her appetite for chasing a good story.

“Whenever you hear about a great cultural phenomenon — a revolution, an assassination, a notorious trial, an attack on the country — drop everything,” Ms. Sheehy said in a commencement speech in 2016 at the University of Vermont, her alma mater. “Get on a bus or train or plane and go there, stand at the edge of the abyss, and look down into it,” she advised. “You will see a culture turned inside out and revealed in a raw state.”

Gail Merritt Henion was born on Nov. 27, 1936, in Mamaroneck, N.Y., and grew up there, attending its public schools. Her mother, Lillian Rainey Henion, was a homemaker. Her father, Harold Henion, owned an advertising business.

Ms. Sheehy graduated from the University of Vermont in 1958 with a bachelor’s degree in English and home economics. Her first job was as a consumer representative for J.C. Penney.

She married Albert F. Sheehy in 1960 and moved to Rochester, N.Y., where he attended medical school and she worked as a fashion coordinator at McCurdy’s department store, decorating windows. She then interviewed for a job on the fashion page at The Rochester Democrat and Chronicle, though the editor was reluctant to hire her.

“He told me he didn’t want someone to work just a year and then want a family,” she told The Democrat and Chronicle in 2015. “I was very fresh. I said, ‘I didn’t expect a pregnancy exam.’” She said that in those days, the mid-1960s, women were categorized as “either Holy Mother or Frigid Career Girl.”

Still, Ms. Sheehy learned some valuable lessons. “The paper taught me to write on deadline and to see that to get the good stories — to build a career — I had to get in on it early and have vision,” she said.
The couple soon moved back to New York City — they would divorce in the late 1960s — and she landed a job at The New York Herald Tribune, in the women’s section, or what she called “the estrogen department.”
 
Photo: Ms. Sheehy with the New York magazine co-founder Clay Felker in 1977. She worked for him and eventually married him. (Don Hogan Charles/The New York Times)
One day she “snuck down the back stairs into the testosterone zone,” she told The Democrat and Chronicle, and, only slightly intimidated, approached Mr. Felker, an editor there. She quickly pitched a story about men in Manhattan who held “specimen parties,” using women to bring in more attractive women. He liked the idea and told her to write it, but “write it as a scene.”
Those few words opened worlds for her. At the time, The Herald Tribune was a hotbed of so-called New Journalism, in which writers like Tom Wolfe used the tools of novelists — characters, dialogue and scene-setting — to create compelling narratives.

Ms. Sheehy caught on right away and propelled herself off the women’s pages to cover some of the biggest events of the time. She snared an exclusive interview with Robert Kennedy just before he was assassinated and wrote profiles of Catholic women in Belfast, Northern Ireland, during the sectarian strife that turned into Bloody Sunday.
 
It was in Belfast that the seed for the book “Passages” was planted. She was talking with a boy there when, she wrote, a bullet “blew his face off.” She herself nearly took a bullet, a moment that traumatized her and made her think about what she called “the arithmetic of life.”

Ms. Sheehy attended Columbia University on a fellowship in 1969-1970 and developed her forensic skills studying there with Ms. Mead, the premier anthropologist of her era. When Mr. Felker founded New York in 1968 with the graphic designer Milton Glaser (who died in June), Ms. Sheehy followed him there.

Image: One of Ms. Sheehy’s celebrated articles in 1972 for New York magazine was titled “The Secret of Grey Gardens,” in which she revealed the little-known bohemian life of an aunt of Jacqueline Kennedy Onassis. (New York magazine)

Her articles in New York often caused a sensation. In one, in 1972, titled “The Secret of Grey Gardens,” she revealed the little-known bohemian life of Edith Ewing Bouvier Beale, an aunt of Jacqueline Kennedy Onassis, and Ms. Beale’s daughter, known as Little Edie.

Another piece was “Redpants and Sugarman,” in 1971, about a streetwalker and a pimp, for which Ms. Sheehy dressed up as a prostitute to do her reporting. In a disclaimer, she acknowledged that she had made up characters for the article. But “Mr. Felker, to her everlasting horror, took out the disclaimer because he thought it slowed down the article,” Janet Maslin wrote in The Times.
“She landed in a heap of trouble for what still qualifies as a serious ethical breach,” Ms. Maslin added. 
 
Ms. Sheehy faced other criticisms, accused of practicing armchair psychology and training her eye too often on affluent professionals. Roger Gould, a Los Angeles psychiatrist, sued her for plagiarism, saying she had made extensive use of his research in “Passages” without giving him credit; they settled out of court.
Ms. Sheehy and Mr. Felker had a tempestuous, passionate, on-again-off-again romance that, after many years, turned into a stable relationship and, in 1984, into a happy marriage.
They raised Maura, Ms. Sheehy’s daughter from her first marriage, and adopted Mohm Pat, a Cambodian refugee who had lost most of her family during the murderous Pol Pot regime. In addition to Maura, Mohm Sheehy survives her, along with a sister, Patricia Klein; Ms. Sheehy’s companion, Robert Emmett Ginna Jr., a former Harvard professor and a co-founder of People magazine; and three grandchildren.

Ms. Sheehy, who lived in Manhattan, had been visiting Mr. Ginna in Sag Harbor, on Long Island, when she died.

Playing off the “daring” title of her own memoir, Ms. Sheehy started an online Daring Project, in which she asked women to share their stories of daring. She also started a blog, gailsheehy.com, describing her interviews with notable women and their answers to questions about “the chances they took, the fears they overcame, and the early daring moments that catapulted them to the top of their professions.”
 
Julia Carmel contributed reporting.
