Monday, November 30, 2020

Review Of “The Hype Machine”

BY ALI MINAI, 3 Quarks Daily


Given where we find ourselves in this late November of 2020, it is hard to think of a book more relevant or timely than The Hype Machine by Sinan Aral. The author is the David Austin Professor of Management and Professor of Information Technology and Marketing at the Massachusetts Institute of Technology. As one of the world’s foremost experts on social media and its effects, Prof. Aral is the perfect person to look at how this phenomenon has changed the world and the human experience. This is what he sets out to do – with considerable success – in his new book, The Hype Machine, published under the Currency imprint of Random House this September. 

The book provides an excellent overview of where things stand with social media, its promise and its peril. For anyone looking for a single, accessibly non-technical source of information and insight on these important issues, this book is essential reading. The book is very well-organized, and the logical flow – both across and within chapters – is remarkably smooth. Overall, the book is an easy read that informs and educates the reader without getting mired in technical jargon – no mean feat for a book about a technical field that is rife with jargon. And, while a large proportion of the book simply communicates information on where things stand and how different social media platforms are shaping the lives of their users, Prof. Aral does not shy away from building a useful abstract framework in which to place all this, and to address the complex issues raised as a result. 

The tone of the book is set by its title. The decision to use the term “hype machine” for the social media system enveloping the globe today is both insightful and somewhat jarring. Insightful in that it cuts through the clutter of various functions that can be ascribed to social media and identifies hype as its core function – or at least its core effect. But the name is jarring for the same reason. For all its accessibility, this is an insider’s book, written by someone who sees the nuts and bolts – and the underlying incentives – of the system. For most users of social media, the hype is only part of the experience. Real information, knowledge, and love also flow on social media. People communicate, form relationships, and share experiences; old friends reconnect; families come together; collaborations are born. For people experiencing all this, the hype – the fake news, the loss of privacy, the selling of ads, the ever-changing trends – is largely subliminal. But perhaps the author intended to send a wake-up call to the billions of users too engrossed in their positive experiences on social media to care much about the underlying dangers. Indeed, the sub-title of the book, “How Social Media Disrupts Our Elections, Our Economy, and Our Health – and How We Must Adapt” signals this desire to jolt people awake, though the book is not quite as terrifying as this statement might suggest. It does try to balance the positive and the negative, laying out both the benefits and the hazards of social media, though one does end the book feeling that the positive value of social media at the personal and human level received relatively short shrift compared with its large-scale socioeconomic impact. 

These large-scale problems, however, are all too real, and the book really excels in their analysis. In Chapter 1, at the very outset of the book, Prof. Aral identifies three primary forces that underlie the transformations social media is creating – what he calls, “… the trifecta of hypersocialization, personalized mass persuasion, and the tyranny of trends” (p. 12). These, he says, are the hallmarks of the “New Social Age” – a fundamental transformation of human society across the world. He also states the purpose of the book with admirable clarity: 

My goals in this book are to describe the science of how the Hype Machine works and to explore how it affects our politics, our businesses, and our relationships; to explore the consequences of the Hype Machine for our society, both positive and negative; and to discuss how we can—through company policy, social norms, government regulation, and more advanced software code—achieve its promise while avoiding its peril. (p. 19) 

The book lives up to these goals reasonably well. 

One of the most striking things in reading The Hype Machine is its topical relevance. It is a book ripped from the headlines, with election interference, the Mueller Report, President Trump’s impeachment, Infowars propaganda, and even the COVID-19 pandemic and its deniers woven into the text. Ominously, it states that, “… while social media can help foster a transparent, democratic, egalitarian society, it can also be used to erect a polarized, authoritarian police state. Today we are at a crossroads, caught between the promise and the peril …” (p. 22). Chapter 2 of the book then gets into one of the main perils: All the ways in which social media and its underlying algorithms can – and do – warp the perception of reality on a global scale. The author discusses fake news, election manipulation, science denial, AI-generated deep-fakes, etc., to lay out the perils that are all too clear to many – but perhaps not clear enough to many more. Of special interest is the author’s non-technical overview of his own groundbreaking work on fake news, leading to a depressing confirmation of the old adage about a lie circling the world thrice while truth is still putting on its shoes. 

Chapter 3 is the conceptual heart of the book where the author lays out a description of the Hype Machine that is both abstract and illuminating – a rare combination. In particular, he describes the Hype Machine in terms of three main components: Digital social networks as the substrate; machine learning and data analytics algorithms as the process – rather too cutely termed the “Hype Loop”; and smartphones as the medium. This tripartite system is then posited to be governed by four factors: Money, code, norms, and laws. Interestingly, Prof. Aral uses the term “levers” to describe these factors, indicating that these actuators have a dual nature. On the one hand, they are what the Hype Machine uses to move us; on the other, they also provide a way for us to push back on the machine. This, indeed, is the central thesis of the book at its most abstract level. Having laid out this framework at the outset, the author uses the rest of Chapter 3 to describe the first two components – the substrate and the process – in considerable detail, with the third – smartphones – getting more cursory treatment. This is a very appealing part of the book, and, for many readers, will be a most illuminating introduction to the concepts, motivations, and mechanisms underlying social media from the perspective of the corporations that run it. 

The next seven chapters explicate the issues raised in the first three, ranging from the effect of social media on the brain and the human psyche to the effects of hypersocialization, mass persuasion, global trends, and the wisdom (and madness) of the crowds – all discussed through examples of real situations and illuminated by results from numerous research studies. This middle section of the book is both very useful and very readable, and will satisfy readers ranging from those looking for scientific analysis to those more comfortable with experience and anecdote. As is the case throughout the book, the author strikes a good balance between conceptual – though not overly technical – detail and accessibility, with something for readers at all levels. 

The last two chapters of the book represent the integrative and prescriptive part of the book. In Chapter 11 – fittingly entitled “Social Media’s Promise Is Also Its Peril” – Prof. Aral provides a detailed and insightful discussion of the delicate tradeoff between the benefits and costs of social media. This begins with specific cases such as the role of social media in the protests leading to the Paris terrorist attacks of January 2015 and the influence of the Telegram platform in challenging authoritarian governments, and then gets deeper into issues such as transparency, privacy, free speech, unequal opportunity, and collective action – all of which pose paradoxes that are almost impossible to resolve. The chapter closes with a clear statement of the central challenge that society faces with regard to social media: 

When technology exacerbates the spread of misinformation, terrorism, election manipulation, disruptions of public health, and the loss of privacy, and those harms are not priced into the market, sensible government regulation becomes necessary. 

But there’s a real danger that ill-conceived legislation will shackle innovation, free speech, productivity, growth, consumer surpluses, and the social and economic benefits of social technologies. When we understand that the sources of social media’s promise are also the sources of the ills we are trying to avoid, it becomes clear that blunt attempts at regulation are likely to fail. Social media regulation must be carefully thought through to preserve the promise while avoiding the peril. (p. 284) 

In the final chapter, Prof. Aral shares his prescription for addressing the perils of the Hype Machine without trading away its promise. One of his recommendations is to allow users portability of their data and social graph (a user’s network of social contacts) across platforms. This seems appealing in principle, but its real-world effectiveness is hard to estimate. To begin with, portability across different types of platforms would be absurd: A user’s social graph on Facebook may comprise friends and family, on LinkedIn mainly professional contacts, and on Twitter complete strangers. The content in these cases would also be very different. Where portability can make a difference is in allowing the emergence of multiple platforms of the same type through market competition, so users can migrate between them if they don’t like their current platform’s policies on data sharing, news filtering, ad targeting, etc. This is clearly the author’s intent, embodied in his hope that 

By encouraging competition, we have a shot at shifting the economic incentives that guide the design of the Hype Machine from a focus on the value that platforms extract from consumers to a focus on the value they should deliver to consumers. (p. 320) 

But for all its simplicity and elegance, this market-based approach appears to rely too much on the rationality of human choices and the objectivity of human values. Caution is in order on both counts. 

Experience, history, and research in behavioral economics all indicate that people derive value much more readily from the confirmation of their biases and the validation of their prejudices than from objective truth. Add to this the limited human capacity for ascertaining the truth or falsity of information in an increasingly complex world, and a policy relying on the probity of human judgment looks shaky at best. For example, those who are most deluded by fake news on Facebook are also the least likely to realize it or feel a need to do anything about it. As the author’s own research shows, sensational lies are more attractive to people than dry truths, which means that, given a choice of platforms, people will continue to flock to those that confirm their biases and satisfy their need for sensationalism, and the platforms will compete less to enhance the authenticity of the information they carry than on how to exploit the human fondness for conspiracy and drama. Competition is an excellent device when applied to concrete things like bread and toasters where value can be determined clearly and immediately. Information, ideas, and opinions are a different class of commodity where value itself is a matter of opinion – and often of conflict. It seems too optimistic to expect market forces to neutralize a tendency rooted so deep in human nature. 

Recognizing this and appreciating the delicacy of the issues involved, Prof. Aral also recommends the establishment of a bipartisan National Commission on Technology and Democracy – “a diverse set of experts … with scientists, industry representatives, and policy makers who understand the issues and how they interrelate” (p. 319). This too is a good idea in principle, but anyone who has observed the trajectory of American politics over the last two or three decades should be skeptical. Since 2000, America has experienced the 9/11 attacks, the Great Recession, two endless wars, foreign interference in national elections, and now a pandemic that has killed more than a quarter million Americans. National commissions, task forces, bipartisan select committees, think tank studies, and special counsels have come and gone without resolution on any of these issues. In a secular, democratic society, epistemic authority flows from institutions and experts rather than from divine sanction or government diktat. Ironically, one effect of the rise of social media has been to devalue the opinion of real experts in the eyes of the public and the representatives they elect – as illustrated vividly in the public and political response to the COVID-19 pandemic. A society that cannot heed scientists and physicians in the midst of mass death is unlikely to let a commission of experts settle issues like free speech, privacy, and political expression. The course proposed by Prof. Aral is a reasonable one – and that may be its biggest shortcoming. 

One final note of caution is in order with any prescription of policies for social media. Each social media platform is a highly distributed, self-organizing complex system. These systems link together rather haphazardly to form a larger distributed complex system that can be identified with the author’s Hype Machine. And finally, this supremely complex system is in continuous interaction with another far more complex system: human society. If there is one thing we have learned about such highly distributed and dynamic complex systems, it is the dominance of unintended consequences for well-intended actions. Equally important is the human propensity to exploit any unforeseen opportunities that may arise from these unintended consequences. Indeed, that is how we have ended up with fake news and conspiracy theories dominating social media. It is a little too optimistic to believe that more well-intentioned interventions by committees of experts will have predictable consequences. Indeed, the book itself describes several ways in which the European Union’s well-meaning GDPR policy on Internet privacy has had seriously negative unintended consequences. It is hard to see how the approach recommended in the book would avoid a similar outcome. 

Stylistically, the book is very readable, if slightly dry. Case studies and anecdotes are discussed in a matter-of-fact way, which increases understanding but perhaps at some cost to reader engagement. Most of the important phenomena discussed are approached through rigorous scientific studies – all cited in the very valuable and copious notes at the end of the book – though a side effect of these citations is the subtle implication that all useful work on the issues relating to social media is occurring at a few elite academic institutions in collaboration with a few very large corporations. This is exacerbated further by the author’s propensity to describe the investigators of almost all cited studies as “friends” and “colleagues”, though this is probably no more than an expression of collegiality. 

In summary, The Hype Machine is an excellent and very timely addition to the genre of writings on the Internet Age. Its formulation of the issues posed by the rise of social media, its description of the system, and its analysis of promise and peril are exceptionally informative and insightful. As such, this book should be essential reading for anyone interested in understanding the current moment in human socioeconomic development. Ultimately, The Hype Machine is a book more about describing a set of immense problems facing human society today than about prescribing short-term solutions, but it is surely a bold step on the path to a solution in the long-term. 

Saturday, November 28, 2020

Indiana Asks the Supreme Court to Let It Strip Rights From Same-Sex Parents


The justices have shown interest in a case that could begin the rollback of marriage equality.


On Monday, Indiana Attorney General Curtis Hill asked the Supreme Court to strip same-sex couples of their equal parenting rights. He did so at the request of the court, which is considering taking up his case. Hill implored the new conservative majority to rule that states may deny married same-sex couples the right to be recognized as parents of their own children. The case gives SCOTUS an opportunity to start chipping away at Obergefell v. Hodges by allowing states to withhold marital privileges from same-sex spouses. If the majority wants to begin eroding Obergefell, they will probably start here.

What’s strange about this case, Box v. Henderson, is that it poses a question the Supreme Court has already answered—twice. The plaintiffs are eight married lesbian couples in Indiana who used a sperm donor to conceive. When a married opposite-sex couple uses a sperm donor, Indiana recognizes the birth mother’s husband as the child’s parent. When a married same-sex couple does the same thing, however, the state refuses to list the birth mother’s wife as the child’s parent. In both instances, the second parent has no biological connection to the child; Indiana’s decision to extend parental rights to the nonbiological husbands of birth mothers, but not the wives of birth mothers, is sheer discrimination.

On two different occasions, the Supreme Court prohibited this kind of mistreatment. In Obergefell v. Hodges, the court held that the Constitution entitles same-sex couples to marriage “on the same terms and conditions as opposite-sex couples.” Most courts understood that this requirement compelled states to provide equal benefits to married same-sex parents. In Florida, for instance, a federal judge held that Obergefell “plainly requires” the state to list married lesbian couples as the parents of a child conceived with a sperm donor, since the state grants this right to married opposite-sex couples. (Florida’s Republican attorney general settled the case in apparent recognition that an appeal would be doomed.) When the Arkansas Supreme Court kept a birth mother’s wife off their child’s birth certificate, SCOTUS shot it down without even bothering to hear oral arguments. In 2017’s Pavan v. Smith, the court unequivocally ruled that states must issue birth certificates on equal terms to same-sex and opposite-sex couples. It announced a rule: If a state lists a birth mother’s husband as a parent despite his lack of biological connection, it must list a birth mother’s wife as a parent, too.

The justices have indicated that they’re interested in taking up the case.

Three and a half years after Pavan, Indiana is seeking to abolish this rule in Box v. Henderson. Its efforts have been aided by a mysterious delay at the 7th U.S. Circuit Court of Appeals. A three-judge panel for the 7th Circuit heard arguments in Box v. Henderson on May 22, 2017. For reasons that remain unclear, the panel waited to issue its decision until Jan. 17, 2020—a 32-month delay. (The average gap between arguments and a decision is about three months.) All three judges on the panel are conservative Republican appointees. Yet they unanimously agreed that Obergefell and Pavan compel Indiana to list same-sex parents on their child’s birth certificate when they conceive via artificial insemination.

Had the panel issued its decision within a typical time frame—that is, in 2017—Indiana might have just given up. In 2017, all five justices who joined both Obergefell and Pavan were still on the bench. Justice Anthony Kennedy had not yet retired, and Ruth Bader Ginsburg was alive. If Indiana decided to appeal anyway, SCOTUS surely would have affirmed the 7th Circuit or simply turned away the case.

Today, the Supreme Court looks very different. Kennedy, who authored Obergefell, has been replaced by the far more conservative Justice Brett Kavanaugh, who has not shown support for LGBTQ rights. Ginsburg has been replaced by Justice Amy Coney Barrett, who is likely even more conservative than Kavanaugh. And there are already indications that the new court has its knives out for Obergefell. The justices were set to consider Box v. Henderson at their private conference on Sept. 29. One day before that conference, though, the court asked the plaintiffs to respond to Indiana’s petition. This unusual step indicates that the justices are interested in taking up the case. The plaintiffs complied, urging the court to turn away Indiana’s appeal. Now the justices are scheduled to consider the case at their Dec. 11 conference.

Hill, the Indiana attorney general, has tried to distinguish Box v. Henderson from Pavan by misrepresenting state law. He claims that the case is about a state’s right to acknowledge “biological distinction between males and females.” According to Hill, Indiana law only presumes that a birth mother’s husband is the father of her child. A birth mother’s wife, by contrast, “is never the biological father,” so she does not deserve the presumption of parentage. But this argument uses biology as a smoke screen for discrimination. No husband is ever asked to prove his paternity before he is listed on his child’s birth certificate. Why must a wife undergo this indignity?

Moreover, it’s untrue that a birth mother’s wife always has “no biological connection” to her child, as Hill insists. One set of plaintiffs in this case, a lesbian couple, prove this point: One partner provided an egg, and her wife carried the child. Thus, the birth mother and her wife have a biological link to their offspring. Again: If husbands receive a presumption of parentage because they may be a biological parent, why shouldn’t wives? After all, contrary to Hill’s archaic view, a birth mother’s wife might be a biological parent, too.

Hill’s feeble efforts to distinguish Box v. Henderson from Pavan are probably a pretext to give SCOTUS a shot at eroding Obergefell itself. If the court’s new conservative supermajority sides with Indiana, it will allow states to resume discriminating against same-sex parents—and, by extension, revive second-class marriages for gay people. Opposite-sex couples would remain legal parents of their children, including those conceived with a donor. Same-sex couples, by contrast, would lose this presumption of parentage; their marriage would no longer entitle them to equal rights over their child. Indiana provides a chilling example: If Hill prevails, the wives of birth mothers will have to go through stepparent adoption, an arduous, invasive process that costs more than $4,000.

This term has already brought ominous signs for marriage equality. Justices Clarence Thomas and Samuel Alito have called for the court to either overturn Obergefell or let government employees discriminate against same-sex couples. In October, a majority of justices indicated that they will force Philadelphia to fund a foster care agency that refuses to work with same-sex parents. With Kennedy and Ginsburg gone, the walls are closing in on LGBTQ Americans’ right to equal marriage. And if the court takes up Box v. Henderson, same-sex couples will need to prepare for a ruling that could turn them into legal strangers to their own kids.

Friday, November 20, 2020

Hospitals Know What’s Coming


“We are on an absolutely catastrophic path,” said a COVID-19 doctor at America’s best-prepared hospital.

ED YONG The Atlantic

Perhaps no hospital in the United States was better prepared for a pandemic than the University of Nebraska Medical Center in Omaha.

After the SARS outbreak of 2003, its staff began specifically preparing for emerging infections. The center has the nation’s only federal quarantine facility and its largest biocontainment unit, which cared for airlifted Ebola patients in 2014. The people on staff had detailed pandemic plans. They ran drills. Ron Klain, who was President Barack Obama’s “Ebola czar” and will be Joe Biden’s chief of staff in the White House, once told me that UNMC is “arguably the best in the country” at handling dangerous and unusual diseases. There’s a reason many of the Americans who were airlifted from the Diamond Princess cruise ship in February were sent to UNMC.

In the past two weeks, the hospital had to convert an entire building into a COVID-19 tower, from the top down. It now has 10 COVID-19 units, each taking up an entire hospital floor. Three of the units provide intensive care to the very sickest people, several of whom die every day. One unit solely provides “comfort care” to COVID-19 patients who are certain to die. “We’ve never had to do anything like this,” Angela Hewlett, the infectious-disease specialist who directs the hospital’s COVID-19 team, told me. “We are on an absolutely catastrophic path.”

 

To hear such talk from someone at UNMC, the best-prepared of America’s hospitals, should shake the entire nation. In mid-March, when just 18 Nebraskans had tested positive for COVID-19, Shelly Schwedhelm, the head of the hospital’s emergency-preparedness program, sounded gently confident. Or, at least, she told me: “I’m confident in having a plan.” She hoped the hospital wouldn’t hit capacity, “because people will have done the right thing by staying home,” she said. And people did: For a while, the U.S. flattened the curve.

But now about 2,400 Nebraskans are testing positive for COVID-19 every day—a rate five times higher than in the spring. More than 20 percent of tests are coming back positive, and up to 70 percent in some rural counties—signs that many infections aren’t being detected. The number of people who’ve been hospitalized with the disease has tripled in just six weeks. UNMC is fuller with COVID-19 patients—and patients, full stop—than it has ever been. “We’re watching a system breaking in front of us and we’re helpless to stop it,” says Kelly Cawcutt, an infectious-disease and critical-care physician.

Cawcutt knows what’s coming. Throughout the pandemic, hospitalizations have lagged behind cases by about 12 days. Over the past 12 days, the total number of confirmed cases in Nebraska has risen from 82,400 to 109,280. That rise represents a wave of patients that will slam into already beleaguered hospitals between now and Thanksgiving. “I don’t see how we avoid becoming overwhelmed,” says Dan Johnson, a critical-care doctor. People need to know that “the assumption we will always have a hospital bed for them is a false one.”

What makes this “nightmare” worse, he adds, “is that it was preventable.” The coronavirus is not unstoppable, as some have suggested and as New Zealand, Iceland, Australia, and Hong Kong have resoundingly disproved—twice. Instead, the Trump administration never mounted a serious effort to stop it. Whether through gross incompetence or deliberate strategy, the president and his advisers left the virus to run amok, allowed Americans to get sick, and punted the consequences to the health-care system. And they did so repeatedly, even after the ordeal of the spring, after the playbook for controlling the virus became clear, and despite months of warnings about a fall surge.

Not even the best-prepared hospital can compensate for an unchecked pandemic. UNMC’s preparations didn’t fail so much as the U.S. created a situation in which hospitals could not possibly succeed. “We can prepare over and over for a wave of patients,” says Cawcutt, “but we can’t prepare for a tsunami.”


A full hospital means that everyone waits. COVID-19 patients who are going downhill must wait to enter a packed intensive-care unit. Patients who cannot breathe must wait for the many minutes it takes for a nurse elsewhere in the hospital to remove cumbersome protective gear, run over, and don the gear again. On Tuesday, one rapidly deteriorating patient needed to be intubated, but the assembled doctors had to wait, because the anesthesiologists were all busy intubating four other patients in an ICU and a few more in an emergency room.

None of the people I spoke with would predict when UNMC will finally hit its capacity ceiling, partly because they’re doing everything to avoid that scenario, and partly because it’s so grim as to be almost unthinkable. But “we’re rapidly approaching that point,” Hewlett said.

When it arrives, people with COVID-19 will die not just because of the virus, but because the hospital will have nowhere to put them and no one to help them. Doctors will have to decide who to put on a ventilator or a dialysis machine. They’ll have to choose whether to abandon entire groups of patients who can’t get help elsewhere. While cities like New York and Boston have many big hospitals that can care for advanced strokes, failing hearts that need mechanical support, and transplanted organs, “in this region, we’re it,” Johnson says. “We provide care that can’t be provided at any other hospital for a 200-mile radius. We’re going to need to decide if we continue to offer that care, or if we admit every single COVID-19 patient who comes through our door.”

Read: How many Americans are about to die?

During the spring, most of UNMC’s COVID-19 patients were either elderly people from nursing homes or workers in meatpacking plants and factories. But with the third national surge, “all the trends have gone out the window,” Sarah Swistak, a staff nurse, told me. “From the 90-year-old with every comorbidity listed to the 30-year-old who is the picture of perfect health, they’re all requiring oxygen because they’re so short of breath.”  

This lack of pattern is a pattern in itself, and suggests that there’s no single explanation for the current surge. Nebraska reopened too early, “when we didn’t have enough control, and in the absence of a mask mandate,” Cawcutt says. Pandemic fatigue set in. Weddings that were postponed from the spring took place in the fall. Customers packed into indoor spaces, like bars and restaurants, where the virus most easily finds new hosts. Colleges resumed in-person classes. UNMC is struggling not because of any one super-spreading event, but because of the cumulative toll of millions of bad decisions.

When the hospital first faced the pandemic in the spring, “I was buoyed by the realization that everyone in America was doing their part to slow down the spread,”  Johnson says. “Now I know friends of mine are going about their normal lives, having parties and dinners, and playing sports indoors. It’s very difficult to do this work when we know so many people are not doing their part.” The drive home from the packed hospital takes him past rows of packed restaurants, sporting venues, and parking lots.

To a degree, Johnson sympathizes. “I don’t think people in Omaha thought we could ever have something that resembles New York,” he told me. “To be honest, in the spring, I would have thought it extremely unlikely.” But he adds that the Midwest has taken entirely the wrong lesson from the Northeast’s ordeal. Instead of learning that the pandemic is controllable, and that physical distancing works, people instead internalized “a mistaken belief that every curve that goes up must come down,” he said. “What they don’t realize is that if we don’t change anything about how we’re conducting ourselves, the curve can go up and up.”

Speaking on Tuesday afternoon, Nebraska Governor Pete Ricketts once again refused to issue a statewide mask mandate. He promised to tighten restrictions once a quarter of the state’s beds are filled with COVID-19 patients, but even then, some restaurants will still offer indoor dining; gyms and churches will remain open; and groups of 10 people will still be able to gather in enclosed spaces. Ricketts urged Nebraskans to avoid close contact, confined areas, and crowds, but his policies nullify his pleas. “People have the mistaken belief that if the government allows them to do something, it is safe to do,” Johnson said.

There are signs that citizens and businesses are acting ahead of policy makers. Some restaurants are ceasing indoor dining even without a prohibition. Parents are pulling their children out of schools and sports leagues. “I have heard from more friends and family about COVID-19 in the last two weeks than I have in the previous six months, expressing support and a change in attitudes,” Johnson said.

But COVID-19 works slowly. It takes several days for infected people to show symptoms, a dozen more for newly diagnosed cases to wend their way to hospitals, and even more for the sickest of patients to die. These lags mean that the pandemic’s near-term future is always set, baked in by the choices of the past. It means that Ricketts is already too late to stop whatever UNMC will face in the coming weeks (but not too late to spare the hospital further grief next month). It means that some of the people who get infected over Thanksgiving will struggle to enter packed hospitals by the middle of December, and be in the ground by Christmas.


Officially, Nebraska has 4,223 hospital beds, of which 1,165—27 percent—are still available. But that figure is deceptive. It includes beds for labor and deliveries, as well as pediatric beds that cannot be repurposed. It also says nothing about how stretched hospitals have already become in their efforts to create capacity. UNMC has postponed elective surgeries—those that could be deferred for four to twelve weeks. Patients with strokes and other urgent traumas aren’t getting the normal level of attention, because the pandemic is so all-consuming. Clinical research has stopped because research nurses are now COVID-19 nurses. The hospital is forced to turn down many requests to take in patients from rural hospitals and neighboring states that are themselves almost out of beds.

Without doctors and nurses to staff them, empty hospital beds might as well be hotel beds. And though health-care workers are resilient, “many of us feel like we haven’t had a day off since this thing began,” Hewlett says. The current surge is pushing them to the limit because people with COVID-19 are far sicker than the average patient. In an ICU, they need twice as much attention for three times the usual stay. To care for them, UNMC’s nurses and respiratory therapists are now doing mandatory overtime. The hospital has tried to hire travel nurses, but with the entire country calling for help, the pool of reinforcements is dry. “Even before COVID-19 hit, we were short-staffed,” says Becky Long, a lead nurse on a COVID ICU floor. Of late, there have been days when the hospital had 45 to 60 fewer nurses than it needed. “Every time I’ve been at work, I’ve thought: This is going to be the final straw. But somehow we continue to make it work, and I truly have no idea how.”

Before COVID-19, Long worked in oncology. Death is no stranger to her, but she tells me she can barely comprehend the amount she has seen in recent weeks. “I used to be able to leave work at work, but with the pandemic, it follows me everywhere I go,” she said. “It’s all I see when I come home, when I look at my kids.”

Long and other nurses have told many families that they can’t see their dying loved ones, and then sat with those patients so they didn’t have to die alone. Lindsay Ivener, a staff nurse, told me that COVID-19 had recently killed an elderly woman whom she was caring for, the woman’s husband, and one of her grandchildren. A second grandchild had just been admitted to the hospital with COVID-19. “It just tore this whole family apart in a month,” Ivener said. “I couldn’t even cry. I didn’t have the energy.”

Until recently, Ivener worked in corporate America as a retail buyer and inventory manager. Wanting to help people, she retrained as a nurse and graduated this May. “I’ve only worked as a nurse during a pandemic,” she told me. “It’s got to get better, right?”

 

Monday, November 16, 2020

Wikipedia, “Jeopardy!,” and the Fate of the Fact

In the Internet age, it can seem as if there’s no reason to remember anything. But information doesn’t always amount to knowledge.

By Louis Menand The New Yorker

Is it still cool to memorize a lot of stuff? Is there even a reason to memorize anything? Having a lot of information in your head was maybe never cool in the sexy-cool sense, more in the geeky-cool or class-brainiac sense. But people respected the ability to rattle off the names of all the state capitals, or to recite the periodic table. It was like the ability to dunk, or to play the piano by ear—something the average person can’t do. It was a harmless show of superiority, and it gave people a kind of species pride.

There is still no artificial substitute for the ability to dunk. It remains a valued and nontransferable aptitude. But today who needs to know the capital of South Dakota or the atomic number of hafnium (Pierre and 72)? Siri, or whatever chatbot you use, can get you that information in nanoseconds. Remember when, back in the B.D.E. (Before the Digital Era), you’d be sitting around with friends over a bottle of Puligny-Montrachet, and the conversation would turn on the question of when Hegel published “The Phenomenology of Spirit”? Unless you had an encyclopedia for grownups around the house, you’d either have to trek to your local library, whose only copy of the “Phenomenology” was likely to be checked out, or use a primitive version of the “lifeline”—i.e., telephone a Hegel expert. Now you ask your smartphone, which is probably already in your hand. (I just did: 1807. Took less than a second.)

And names and dates are the least of it. Suppose, for example, that you suspected that one of your friends was misusing Hegel’s term “the cunning of reason.” So annoying. But you don’t even have to be sober to straighten that person out. As you contemplate another glass, Siri places in your hand a list of sites where that concept is explained, also in under a second. And, should the conversation ever get serious, Hegel’s entire corpus is searchable online. Interestingly, when I ask Siri, “Is Dick Van Dyke still alive?,” Siri says, “I won’t respond to that.” It’s not clear if that’s because of the Dick or the Dyke. (He is, and he’s ninety-four.)

There is also, of course, tons of instant information that is actually useful, like instructions for grilling corn on the cob, or unclogging a bathtub drain. And it’s free. You do not have to pay a plumber.

Leaving the irrefutably dire and dystopian effects of the Web aside for a moment, this is an amazing accomplishment. In less than twenty years, a huge percentage of the world’s knowledge has become accessible to anyone with a device that has Wi-Fi. Search engines work faster than the mind, and they are way more accurate. There is plenty of misinformation on the Web, but there is plenty of misinformation in your head, too. I just told you what the atomic number of hafnium is. Do you remember it correctly?

The most radical change that instant information has made is the levelling of content. There is no longer a distinction between things that everyone knows, or could readily know, and things that only experts know. “The cunning of reason” is as accessible as the date Hegel’s book was published and the best method for grilling corn. There is no such thing as esoterica anymore. We are all pedants now. Is this a cause for concern? Has it changed the economic and social value of knowledge? Has it put scholars and plumbers out of business and made expertise obsolete?

In the early years of the Web, the hub around which such questions circled was Wikipedia. The site will be twenty years old on January 15th, and a collection of articles by scholars, called “Wikipedia @ 20: Stories of an Incomplete Revolution” (M.I.T.), is being published as a kind of birthday tribute. The authors survey many aspects of the Wiki world, not uncritically, but the consensus is that Wikipedia is the major success story of the Internet era. A ridiculously simple principle—“Anyone can edit”—has produced a more or less responsibly curated, perpetually up-to-date, and infinitely expandable source of information, almost all of it hyperlinked to multiple additional sources. Andrew Lih’s history of the site, “The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia,” published in 2009, is similarly smitten.

Wikipedia took off like a shot. Within a month, it had a thousand articles, a number that would have been impossible using a traditional editorial chain of command. Within three years, it had two hundred thousand articles, and it soon left print encyclopedias in the dust. Today, Wikipedia (according to Wikipedia) has more than fifty-five million articles in three hundred and thirteen languages. In 2020, it is the second most visited site on the Web in the United States, after YouTube, with 1.03 billion visits a month—over four hundred million more visits than the No. 3 Web site, Twitter. The Encyclopædia Britannica, first published in 1768 and for centuries the gold standard of the genre, had sixty-five thousand articles in its last print edition. Since 2012, new editions have been available only online, where it currently ranks fortieth in visits per month, with about thirty-two million.

In the beginning, the notion that you could create a reliable encyclopedia article about Hegel that was not written by, or at least edited by, a credentialled Hegel expert was received, understandably, with skepticism. Teachers treated Wikipedia like the study guide SparkNotes—a shortcut for homework shirkers, and a hodgepodge compiled by autodidacts and trivia buffs. The turning point is customarily said to have been a study published in Nature, in 2005, in which academic scientists compared forty-two science articles in Wikipedia and the Encyclopædia Britannica. The experts determined that Wikipedia averaged four errors per article and Britannica averaged three. “Wikipedia comes close to Britannica in terms of the accuracy of its scientific entries” was the editors’ conclusion. By then, many teachers were consulting Wikipedia regularly themselves.

The reason most people today who work in and on digital media have such warm feelings about Wikipedia may be that it’s one of the few surviving sites that adhere to the spirit of the early Internet, to what was known affectionately as the “hacker ethos.” This is the ethos of open-source, free-access software development. Anyone can get in the game, and a person doesn’t need permission to make changes. The prototypical open-source case is the operating system Linux, released in 1991, and much early programming was done in this communal barn-raising spirit. The vision, which now seems distinctly prelapsarian, was of the Web as a bottom-up phenomenon, with no bosses, and no rewards other than the satisfaction of participating in successful innovation.

Even today, no one is paid by Wikipedia, and anyone can (at least in theory, since a kind of editorial pecking order has evolved) change anything, with very few restrictions. In programming shop talk, all work on Wikipedia is “copyleft,” meaning that it can be used, modified, and distributed without permission. No one can claim a proprietary interest. There are scarcely any hard-and-fast rules for writing or editing a Wikipedia article.

That seems to have been what got hacker types, people typically allergic to being told what to do, interested in developing the site. “If rules make you nervous and depressed,” Larry Sanger, the site’s co-founder, with Jimmy Wales, wrote in the early days, “then ignore them and go about your business.”

Wikipedia is also one of the few popular sites whose content is not monetized and whose pages are not personalized. Nothing is behind a paywall; you do not have to log in. There are occasional pop-ups soliciting contributions (in 2017-18, almost a hundred million dollars was donated to the nonprofit Wikimedia Foundation, headed by Wales), but no one is trying to sell you something. Everyone who looks up Pierre, South Dakota, sees the same page. There is no age-and-gender-appropriate clickbait, no ads for drain de-cloggers and books by German philosophers.

Wikipedia has some principles, of course. Contributors are supposed to maintain a “neutral point of view”; everything must be verifiable and, preferably, given a citation; and—this is probably the key to the site’s success with scholars—there should be no original research. What this means is that Wikipedia is, in essence, an aggregator site. Already existing information is collected, usually from linkable sources, but it is not judged, interpreted, or, for the most part, contextualized. Unlike in scholarly writing, all sources tend to be treated equally. A peer-reviewed journal and a blog are cited without distinction. There is also a semi-official indifference to the quality of the writing. You do not read a Wikipedia article for the pleasures of its prose.

There are consequently very few restrictions on creating a page. The bar is set almost as low as it can be. You can’t post an article on your grandmother’s recipe for duck à l’orange. But there is an article on duck à l’orange. There are four hundred and seventy-two subway stations in New York City; each station has its own Wikipedia page. Many articles are basically vast dumping grounds of links, factoids, and data. Still, all this keeps the teachers and scholars in business, since knowledge isn’t the data. It’s what you do with the data. A quickie summary of “the cunning of reason” does not get you very far into Hegel.

But what about the folks who can recite the periodic table, or who know hundreds of lines of poetry “by heart,” or can tell you the capital of South Dakota right off the bat? Is long-term human memory obsolete? One indication of the answer might be that the highest-rated syndicated program on television for the first ten weeks of 2020 was “Jeopardy!” The ability to recall enormous numbers of facts is still obviously compelling. Geek-cool lives.

“Jeopardy!” is thirty-seven years old under its host Alex Trebek, who died earlier this month, at the age of eighty. But the show is much older than that. It first went on the air in 1964, hosted by Art Fleming, and ran until 1975. And the “Jeopardy!” genre, the game show, is much older still. Like a lot of early television—such as soap operas, news broadcasts, and variety shows—game shows date from radio. The three national broadcast networks—CBS, NBC, and ABC—were originally radio networks, so those were genres that programmers already knew.

Shows like “Jeopardy!” were as popular in the early years of television as they are today. In the 1955-56 season, the highest-rated show was “The $64,000 Question,” in which contestants won money by answering questions in different categories. Soon afterward, however, a meteor struck the game-show planet when it was discovered that Charles Van Doren, a contestant on another quiz show, “Twenty-One,” who had built up a huge following and whose face had been on the cover of Time, had been given the answers in advance. It turned out that most television quiz shows were rigged. The news was received as a scandal; there were congressional hearings, and the Communications Act was amended to make “secret assistance” to game-show contestants a federal crime.

Whom did such “assistance” help? Mostly, the networks. When a player is on a streak, audience size increases, because more and more people tune in each week to see if the streak will last. In the nineteen-fifties, there were usually just three shows to choose from in a given time slot, so audiences were enormous. As many as fifty-five million people—a third of the population—tuned in to “The $64,000 Question.” It was the equivalent of broadcasting the Super Bowl every week. The financial upside of a Van Doren was huge.

 

But the scandal made it clear that game shows are popular because they are also reality television. “Jeopardy!” and “The Apprentice” belong to the same genre. So, for that matter, does TikTok. The premise of reality television is that the contestants are ordinary people, not performers. This approach allows viewers to feel that they are matching wits with the people on the screen, but there is also something awe-inspiring about watching Charles Van Doren, or Ken Jennings, the owner of a six-month winning streak on “Jeopardy!,” run up the score. Still, you have to be able to believe that these people are not professionals, and that they are doing it without help.

In retrospect, the Van Doren fan-demic seems odd. He held advanced degrees and taught at Columbia; he was distinctly not the man on the street. It helped that he was young and good-looking, and that he really seemed to be sweating out the answers. One of the most popular “Jeopardy!” winners, on the other hand, is Frank Spangenberg, who for a long time held the record for five-day winnings ($102,597). Spangenberg was a member of the New York City Transit Police. He was the ideal game-show type, someone viewers can relate to.

As Claire McNear explains in “Answers in the Form of Questions: A Definitive History and Insider’s Guide to ‘Jeopardy!’ ” (Twelve), a book mainly for fans, the Van Doren scandal helped define “Jeopardy!” in two respects. The first is the concept for the show, which is credited to Julann Griffin, Merv Griffin’s wife. She is supposed to have argued that, if it was a crime to give quiz-show contestants the answers in advance, then giving them the answers up front and having them come up with the questions would get the show around the Communications Act. This nonsensical reasoning is repeated in virtually every book on the show.

The other piece of long-term fallout from the quiz-show scandals is that when contestants on “Jeopardy!” return home, and everyone asks them, “So what is Alex Trebek really like?,” they have no answer. This is because, except when the game is in progress, the contestants never interact with him. The policy is intended to insure that no contestant is getting off-camera help (which is also nonsensical, since contestants could be getting help from someone besides the host). But the lack of face time with Trebek is considered a major disappointment.

For Trebek was something between a cult figure and an icon. “Our generation’s Cronkite,” Ken Jennings called him in a column published last year, and the comparison is apt. Walter Cronkite did not report the news. He read cue cards on the air every week night on CBS for nineteen years. Trebek did not write the clues on “Jeopardy!” He read them on the morning of the taping, to make sure he had the pronunciations right. His aura of knowing the answers (or the questions) was, like Cronkite’s air of gravitas, part of the onscreen persona. Cronkite was trained as a journalist. He knew what was going on in the world and he understood the events he reported on. But that is not why he became an icon. Trebek, too, was an educated man with genuine curiosity and many interests. But it would not have mattered if he wasn’t. By some combination of familiarity and longevity, he and Cronkite acquired an outsized cultural status.

Like another TV icon, Johnny Carson, who hosted the “Tonight Show” for thirty years, Trebek had a great talent for being supremely at ease in front of a camera. Whoever he was when he was at home, on the air he was himself. In thirty-seven years, he never missed a taping. When he was diagnosed with cancer, in March, 2019, he was seventy-eight years old. But he worked right up to the end. On days when he was undergoing treatment, he would be suffering terribly. Between games—“Jeopardy!” tapes five games a day, in Culver City, with fifteen-minute breaks—he sometimes writhed in agony on the floor of his dressing room. Fifteen minutes later, on the set and with the cameras rolling, he behaved as though he were perfectly healthy.

By his own account, offered in his brief and cheery memoir, “The Answer Is: Reflections on My Life” (Simon & Schuster), and confirmed by other reports, including McNear’s, when Trebek was off the air he was more laid-back and salty, less like your eighth-grade math teacher. But his tastes were conventional, and so was his career. He hosted numerous short-lived shows, in Canada, where he was born, and in the U.S., before getting the “Jeopardy!” gig. He did not think that the success of “Jeopardy!”—it ranked No. 1 or 2 among syndicated shows for many years—had anything to do with him. “You could replace me as the host of the show with anybody and it would likely be just as popular,” he says in the memoir. I guess we’ll see.

If there is a mystique about Trebek, one of the things we learn from McNear’s book is that there is also a mystique about the contestants. Today, many of them are not, in fact, ordinary people. They are trivia professionals, people who spend countless hours practicing and preparing. A major skill required on the show, for instance, is mastering the buzzer. Aspiring contestants now manufacture their own buzzers and practice to get reaction times, measured in milliseconds, as low as possible. (You cannot press your buzzer until the host has finished reading the answer; if you press it too early, there is a quarter-second wait before you can press it again, and by then the other contestants are likely to have pressed theirs.) Since contestants who make it onto the show typically know almost all the answers, the outcome tends to turn on who is the fastest buzzer-presser. “A reaction-time test tacked onto a trivia contest” is how one contestant described it.
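The buzzer race described above can be sketched as a toy simulation. Only the quarter-second lockout for buzzing early comes from the paragraph; everything else here — the reaction times, the normal-noise model, the player names — is an illustrative assumption, not anything reported about the show.

```python
import random

LOCKOUT = 0.25  # the quarter-second penalty for buzzing too early (from the text)

def buzz_time(reaction_ms, jitter_ms):
    """Seconds after the host finishes at which a player's buzz registers."""
    attempt = random.gauss(reaction_ms, jitter_ms) / 1000
    if attempt < 0:  # buzzed before the host finished: locked out, then retry
        return attempt + LOCKOUT + reaction_ms / 1000
    return attempt

def race(players, trials=10_000):
    """Fraction of buzzer races each (name, reaction, jitter) player wins."""
    wins = {name: 0 for name, _, _ in players}
    for _ in range(trials):
        times = {name: buzz_time(r, j) for name, r, j in players}
        wins[min(times, key=times.get)] += 1
    return {name: w / trials for name, w in wins.items()}

# A practiced contestant (fast, consistent) against two average ones
# (all three assumed to know every answer, as the paragraph suggests).
players = [("practiced", 120, 20), ("casual_a", 180, 60), ("casual_b", 180, 60)]
print(race(players))
```

Run it and the practiced player, a mere sixty milliseconds faster on average, wins the large majority of races — which is the contestant's point about “a reaction-time test tacked onto a trivia contest.”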

Competing on “Jeopardy!” brings fame, and for most contestants being able to say that they played a game on the show is all the reward they require. But winning on “Jeopardy!” does not bring riches. In fact, to cast a cold economic eye on the show, “Jeopardy!” contestants constitute an exploited class. Together with its sibling show, “Wheel of Fortune,” another Merv Griffin creation, “Jeopardy!” is said to bring in a hundred and twenty-five million dollars a year. (Griffin wrote the “Jeopardy!” theme tune, and he claimed, before he died, in 2007, to have made more than seventy million dollars in royalties from it.) Trebek, who worked only forty-six days a year, was paid in the neighborhood of ten million dollars.

But contestants’ travel and hotel expenses are not paid, and the second- and third-place finishers do not keep the money they’ve “won”; they are given consolation prizes—two thousand dollars for second place and a thousand dollars for third—plus a tote bag and a “Jeopardy!” cap. (This is to incentivize riskier play.) According to McNear, in the 2017-18 season, the average amount that winners took home was $20,022. In his six-month streak, Jennings won $2.5 million, but during those six months ratings increased by fifty per cent over the previous year’s, and “Jeopardy!” became the second-ranked show on all television, after “CSI.” Two and a half million dollars was a very small price to pay. The riches of “Jeopardy!” are not necessarily what they seem. Other pockets got much fuller than Ken Jennings’s.

Something of the same could be said about Wikipedia’s reputation as a “free encyclopedia.” Yochai Benkler has a peculiar essay in the “Wikipedia @ 20” collection. (Benkler is the lead author of a recent study, widely reported, showing that right-wing media, like Fox and Breitbart, not trolls or Russian hackers, are responsible for most of the misinformation about “voter fraud.”) In his essay on Wikipedia, Benkler argues that the site is “a critical anchor for working alternatives to neoliberalism. . . . People can work together, build a shared identity in a community of practice, and make things they need without resorting to enforced market exchange.”

But that is not quite how Wikipedia works. A major influence on Jimmy Wales’s conception of the site was an essay by Friedrich Hayek called “The Use of Knowledge in Society,” published in 1945, and Hayek is virtually the father of postwar neoliberalism. His tract against planning, “The Road to Serfdom,” published in 1944, has sold hundreds of thousands of copies, and is still in print. Hayek’s argument about knowledge is the same as the neoliberal argument: markets are self-optimizing mechanisms. No one can know the totality of a given situation, as he puts it in “The Use of Knowledge” (he is talking about economic decision-making), but the optimal solution can be reached “by the interactions of people each of whom possesses only partial knowledge.”

This theory of knowledge is not unrelated to the wisdom-of-crowds scenario in which a group of people are guessing the number of jelly beans in a jar. The greater the number of guesses, the closer the mean of all guesses will come to the true number of jelly beans. A crucial part of crowdsourcing knowledge is not to exclude any guesses. This is why Wales, in his role as Wikipedia’s grand arbiter, is notoriously permissive about allowing access to the site’s editing function, and why he doesn’t care whether some of the editors are discovered to be impostors, people pretending to expertise that they don’t really have. For, when you are calculating the mean, the outliers are as important as the numbers that cluster around the average. The only way for the articles to be self-correcting is not to correct, to let the invisible hand do its job. Wikipedia is neoliberalism applied to knowledge.

Still, the people who post and who edit the articles on Wikipedia are not guessing jelly beans. They are culling knowledge that has already been paid for—by universities, by publishers, by think tanks and research institutes, by taxpayers. The editors at Nature who, back in 2005, compared Wikipedia with the Encyclopædia Britannica seem not to have considered whether one reason Wikipedia’s science entries had fewer errors than they expected was that its contributors could consult the Encyclopædia Britannica, which pays its contributors. There is no such thing as a free fact. ♦

Published in the print edition of the November 23, 2020, issue, with the headline “What Do You Know?”

Louis Menand has been a staff writer at The New Yorker since 2001. He teaches at Harvard University.

 

Saturday, November 07, 2020

Trump Is Gone. Trumpism Just Arrived

 

The air has been cleared. And democracy is working.

Andrew Sullivan Weekly Dish

The “wisdom of the American people” is a horrifying cliché, routinely hauled out every four years as pious pabulum by those whose candidate just won. But the complicated and close election results of 2020, in so far as we can understand them at this point in time, really do seem to capture where America now is, for good and ill, defying the caricatures and wishful thinking of both Republicans and Democrats, revealing a sanity that has helped keep me rather serene in this chaotic week.

The key fact is that Donald J Trump has been decisively defeated. He will be a one-term president. This was by no means inevitable. But in a massive turnout, where both sides mobilized unprecedented hordes of voters, and when the GOP actually made gains in the House, and did much better than expected, Trump lost. A critical mass of swing voters and moderate Republicans picked Biden over him. Our nightmare of four years — an unstable, malignant, delusional maniac at the center of our national life — is over. 

Take a moment to feel that relief. Breathe. Rejoice. He’s done.

He will not concede. He cannot concede — because he would suffer a psychic break if he did. And what we witnessed Thursday night, in his drained yet still despicable rant, was a picture of a sad, lost, delusional person, a man utterly unfit to hold the office he holds, lying and lying and lying, spinning paranoid conspiracy theories like a drunk on Tumblr. He said without any basis in fact: “This is a case where they’re trying to steal an election, they’re trying to rig an election, and we can’t let that happen.”

In the early hours of this morning, we got this tweet: “I easily WIN the Presidency of the United States with LEGAL VOTES CAST. The OBSERVERS were not allowed, in any way, shape, or form, to do their job and therefore, votes accepted during this period must be determined to be ILLEGAL VOTES. U.S. Supreme Court should decide!” This, to put it plainly, is a form of mental illness. The idea that election observers have been completely barred from doing their jobs in counts around the country is a fantasy. The notion that vast numbers of votes can suddenly be deemed “illegal” is also bonkers. Ditto the idea that counting votes after election day is somehow “finding” votes. These are absurd, delusional, desperate fantasies, made by someone who has no understanding of the word “responsibility”.

Donald Trump, in other words, is now showing exactly why he had to be defeated. Policy is irrelevant in his singular case. No serious democracy can have a delusional, utterly incompetent, psychologically disturbed madman as president and survive. 

But Trumpism? It did far better than anyone expected. Down-ballot, many Republicans out-performed their nominal leader. The GOP made real gains in the House — during a health crisis and a recession — and will probably hold the Senate, effectively checkmating any truly progressive ambitions Biden might have had. The rural turnout was spectacular, responding perhaps to Trump’s incredibly boisterous series of big rallies as the campaign came to a close. This was far from the Biden landslide I had been dreaming about a few weeks back. It was rather the moment that the American people surgically removed an unhinged leader and re-endorsed the gist of his politics. It was the moment that Trump’s core message was seared into one of our major political parties for the foreseeable future, and realigned American politics. If Trump were sane, this is how he would describe his success — and leave office graciously to become the kingmaker in his own party. But he is not sane.

His impact, however, is undeniable. Neoconservatism is over; globalization as some kind of conservative principle is over; a conservatism that allows for or looks away from unrestrained mass immigration is over. What was cemented in place this week is a new GOP, not unlike the new Tories in the UK. They’re nationalist, culturally conservative, geared toward the losers of capitalism as well as its winners, and mildly protectionist and isolationist. It is a natural response to the unintended consequences of neoliberalism’s success under a conservative banner. And it speaks in a language that working class Americans understand, devoid of the woke neologisms of the educated elite. It seems to me that this formula is a far more settled and electorally potent coalition than what we now see among the deeply divided Democrats.

And this is where I think I have been wrong about Trump’s appeal, and where I think I’ve misunderstood why otherwise decent people could support such a foul disrupter of democratic norms. Many of them simply didn’t take Trump’s threat to our system seriously. They took all his assaults on democracy as so much bluster from the kind of car salesman he is. They deal with this kind of bullshit all the time, took liberal democracy for granted and saw little reason to fret about its future. The writer Jamie Kirchick says that everything Trump says makes sense if it is preceded by the following words: “And now, Donnie from Queens, you’re on the air.” Many people heard Trump exactly that way, and couldn’t see what all the fuss was about. They weren’t endorsing his madness. They were looking past it. They were, in my opinion, wrong to be so cavalier. But I don’t think most were malignant extremists of any kind, or unaware of the hideous personal qualities of Trump.

And they enjoyed economic rewards that, absent the Covid19 recession, might well have swept Trump to victory. One of the more revealing results from the polls this year came in the answers to the core question made famous by Reagan: "Are you better off now than you were four years ago?" In previous re-election campaigns, Reagan won in a landslide with only 44 percent saying they were better off, George W. Bush won with 47 percent, and Obama succeeded with 45 percent. For Trump, a mighty 56 percent said they were better off now than when he took office — a fundamental that, along with incumbency, should have led to a landslide re-election — and yet he still lost. That tells you something about Americans' understanding of how unfit a president Trump turned out to be, even as they felt very good about their own wellbeing.

And this was also clearly and unequivocally a rejection of the woke left. The riots of the summer turned many people off. In exit polls, 88 percent of Trump voters said it was a factor in their choice. On the question of policing and criminal justice, Trump led Biden 46 to 43 percent. For the past five years, Democrats have been telling us that Trump and his supporters were white supremacists, that he was indeed the "First White President" in Ta-Nehisi Coates' words, that all minorities were under assault by the modern-day equivalent of the KKK. And yet, the GOP got the highest proportion of the minority vote since 1960! No wonder Charles Blow's head exploded.

We may find out more as exit polling is pored over, but in the current stats, Trump measurably increased his black, Latino, gay and Asian support. 12 percent of blacks — and 18 percent of black men — backed someone whom the left has identified as a “white supremacist”, and 32 percent of Latinos voted for the man who put immigrant children in cages, giving Trump Florida and Texas. 31 percent of Asians and 28 percent of the gay, lesbian and transgender population also went for Trump. The gay vote for Trump may have doubled! We’ll see if this pans out. But it’s an astonishing rebuke of identity politics and its crude assumptions about how unique individuals vote.

Why did minorities shift slightly rightward after enduring four years of Trump? First off, many obviously rejected the narrative being pushed out by every elite media source: that the core of Trump's appeal was racism. They saw a more complicated picture. I suspect that many African-Americans, for example, were terrified of "defunding the police" and pleased to be economically better off, with record low unemployment before Covid19 hit. Many legal Latino citizens, to the perplexity of leftists, do not want continued mass immigration, and are socially conservative. Asians increasingly see the woke as denying their children fair access to education, and many gays just vote on various different issues, now that the civil rights question has been largely resolved by the Supreme Court.

Obviously a big majority of non-white and non-straight voters still backed Democrats. But the emergence of this coalition of minority conservatives is fascinating — and, of course, a complete refutation of what critical race theory tells us about how minorities must feel. Ditto the gender gap. It's there, but not quite the gulf we were led to believe. We have again been told insistently that being female in America today is a constant nightmare of oppression, harassment, violence and misogyny; and that no one represents this more potently than Donald "grab 'em by the pussy" Trump. And yet white women still voted for Trump 55 to 43 percent. Among white women with no college education, arguably those most vulnerable to the predations of men, Trump got 60 percent support. This is not a wave of rage; and it suggests that the left's notion of patriarchy is, in 2020, something many, many women just don't buy, or do not believe should outweigh other, more important issues.

And look at California, one of the most leftist states in the country, and minority-majority. The initiative to allow public institutions to discriminate openly on the basis of race — in order to favor some groups over others on the Ibram X. Kendi model — decisively failed, after months of unceasing propaganda about “white supremacy” and the need to counter it. So did an attempt to regulate the gig economy and to expand rent control. The appeal of assimilation and economic success among Latinos is not, pace the critical race theorists, an attempt to gain the advantages of “white-adjacency”. It’s simply the American way, paved by generations of immigrant groups before them. 

And it's important to acknowledge that almost everyone in the elite and in the polling industry was misled by the polling. I was surprised at the resilience of Trump's coalition in what was a huge turnout (breaking another paradigm of leftists, that somehow if more people voted, they would always gain). I was less surprised by the politics of Covid19. The idea that Trump's manifest failures would translate into a major rebuke missed an underlying dynamic. Many Americans want to move on; they're sick of the shutdowns and restrictions; and they're more receptive to fake reassurance than we might imagine. Indeed, the counties with the biggest recent surges in Covid cases voted overwhelmingly for Trump. Democrats need to spend less time looking for exogenous factors — like demographic change, or the Covid epidemic — and more time making arguments that capture the country's move toward more leftist economic policy, while jettisoning woke madness.

Pollsters, despite attempts to fix what they got so wrong last time, also missed a huge swathe of "shy Trump" voters. But why? Why did people not tell the truth to pollsters? Eric Kaufmann, one of the most astute political scientists writing today, notes that the segment of the Trump vote the polling missed was educated white voters. He suspects they were afraid to say out loud to pollsters how they were really going to vote. After all, "45% of Republicans with degrees, compared to 23% of Democrats with degrees, said they feared that their careers could be at risk if their views became known."

So the polling got the less inhibited white non-college-educated Trump voters right, but the graduates very wrong: "The exit polls show that Trump ran even among white college graduates 49-49, and even had an edge among white female graduates of 50-49! This puts pre-election surveys out by a whopping 26-31 points among white graduates." The threat of wokeness both alienated educated white voters and caused more of them to vote for Trump than anyone expected. The problem with woke media is that they mislead Democrats, who then misread the country.

And that’s what I mean about the clearer air we breathe after surveying what Americans actually believe — in cold hard data. This mass secret vote revealed that the New York Times’ woke narrative of America — the centuries-long suffocating oppression of minorities and women by cis white straight men — is simply a niche elite belief, invented in a bubble academy, and imposed by bullying, shaming and if possible, firing dissenters. Some of us who refused to cower can gain real satisfaction from knowing we were not mad, not evil, not bigots, and that a huge swathe of our fellow citizens agree.

Recall also the huge money advantage the Democrats had in many Senate races they still lost. In California's Prop 16 vote, for good measure, supporters of bringing back race discrimination by public authorities outspent opponents 14 to 1! What this election shows is that leftists cannot bully voters into abandoning core principles of liberal democracy, and they can't buy their submission either.

The clarifying truth is that we're a very closely divided country, growing further apart culturally and socially, exploited by extremists on right and left, and yet still, fundamentally, sane. The American people do not want a revolution, but they also decided they do not want Trump as head of state. They removed the nutcase, defanged the woke, showed up to vote in vast numbers, and gave us a constellation of forces in Washington that pleases no one. And that's OK.

America is a vast and complicated place — and our representatives mirror that rather accurately, it seems. That’s democracy working, not failing. And in such a country, there is a place in the center for compromise if we can unwind the hysteria and polarization that the far left has fueled and Trump has exacerbated. We have now a president-elect with little personal ambition ahead of him, deep relationships with the Senate he will desperately need, elected by the sane center, with a check on his left flank. If we can get pragmatic and less inflamed, we can move forward. Biden won the primary thanks to moderate and realistic black voters; and he has won the election with a broad coalition. There are deals to be done. There is politics to engage. And we’ll get there if we can use this moment to listen to each other, especially those whose opinions we have spurned, and whose identities we have feared.  

But the maniac is gone. It will take some serious effort to keep him from inflicting terrible damage on his way out. But he is gone. The republic will survive, battered and bruised, but it will survive. And with a little grace from all of us, it can also begin to heal. 