Monday, March 23, 2015

Base Appeals
By David Remnick, The New Yorker

For twenty years, many people in Israel and in the West have expressed the hope that Benjamin Netanyahu would prove to be the Richard Nixon of the State of Israel. Not the paranoid Nixon of the Watergate scandals or the embittered Nixon raving drunkenly at the White House portraits at four in the morning but the Nixon who yearned to enter the pantheon of statesmen, and who defied his Red-baiting past and initiated diplomatic relations with the People’s Republic of China. Wasn’t it possible that Netanyahu, whose political biography was steeped in the intransigent nationalism of the Revisionist movement, was just the right politician to make a lasting peace with the Palestinians?

It is amazing to recall how long this fantasy persisted. Even President Obama, whose relationship with Netanyahu is now poisoned by mistrust, once suspended disbelief. "There’s the famous example of Richard Nixon going to China," he said, in 2009. "A Democrat couldn’t have gone to China. A liberal couldn’t have gone to China. But a big anti-Communist like Richard Nixon could open that door. Now, it’s conceivable that Prime Minister Netanyahu can play that same role." Netanyahu, as he went on building settlements, deftly kept this illusion alive. In a speech six years ago, at Bar-Ilan University, and in comments as recently as last year, he spoke of his conditional support for "two states for two peoples."

In last week’s Israeli elections, Netanyahu did play the role of Nixon—except that he did not go to China. Nor did he go to Ramallah. He went racist. In 1968, Nixon spoke the coded language of states’ rights and law-and-order politics in order to heighten the fears of white voters in the South, who felt diminished and disempowered by the civil-rights movement and by the Democrat in the White House, Lyndon B. Johnson. Nixon’s swampy maneuvers helped defeat the Democrat Hubert Humphrey and secure the South as an electoral safe haven for more than forty years.

Netanyahu, a student—practically a member—of the G.O.P., is no beginner at this demagogic game. In 1995, as the leader of the opposition, he spoke at rallies where he questioned the Jewishness of Yitzhak Rabin’s attempt to make peace with the Palestinians through the Oslo Accords. This bit of code was not lost on the ultra-Orthodox or on the settlers. Netanyahu refused to rein in fanatics among his supporters who carried signs portraying Rabin as a Nazi or wearing, à la Arafat, a kaffiyeh.

Last week, Netanyahu, sensing an electoral threat from a center-left coalition led by Isaac Herzog and Tzipi Livni, unleashed a campaign finale steeped in nativist fear and hatred of the Other. This time, there was not a trace of subtlety. "Right-wing rule is in danger," he warned his supporters. "Arab voters are coming out in droves to the polls." On Israeli TV, he said, "If we don’t close the gap in the next few days, Herzog and Livni, supported by Arabs and leftist N.G.O.s, will form the next government." (Twenty per cent of the Israeli citizenry is Arab.) He warned darkly of "left-wing people from outside," including perfidious "Scandinavians," and "tens of millions of dollars" being used to "mobilize the Arab vote." Pro-Likud phone banks reminded voters that Netanyahu’s opponents had the support of "Hussein Obama."

The day before the election, Netanyahu made it clear that, after so many years of periodically flashing the Nixon-goes-to-China card to keep the center-left and the meddling "foreigners" at bay, he would play a new hand. "Whoever moves to establish a Palestinian state or intends to withdraw from territory is simply yielding territory for radical Islamic terrorist attacks against Israel," he said in an interview with NRG, a right-leaning Israeli news site. Pressed to say if this meant that he would never agree to a Palestinian state, he answered, "Indeed."

Netanyahu’s survival instincts are impressive. While he was arousing fear of Arabs and Scandinavians, he was relying on the support of an actual foreign patron, Sheldon Adelson, the American billionaire and casino operator. Adelson owns and publishes Israel Hayom (Israel Today), the country’s highest-circulation daily—a propaganda sheet whose sole purpose is to support the Prime Minister and Likud. Adelson is Netanyahu’s piggy bank and reflects a cruder version of his ideological impulses. Adelson has dismissed the Palestinians as "an invented people," and he doesn’t mind if Israel strays from democratic principles and norms: "I don’t think the Bible says anything about democracy." He was in a seat of honor and beaming with satisfaction when, three weeks ago, Netanyahu defied Obama and delivered his speech to Congress opposing a nuclear deal with Iran.

Now that he has been reëlected, Netanyahu has started to walk back his remarks, telling interviewers that he didn’t mean what he said about "droves" of Arabs, that he is all for a secure two-state solution. Nixon goes to China—again! But why should anyone believe it? Netanyahu’s victory—the way he achieved it and what it says about the politics of the Israeli majority—is clarifying. Josh Earnest, President Obama’s spokesman, said that the Administration was unmoved by Netanyahu’s post-election attempts to make nice, and declared that "the United States is in a position to reëvaluate our thinking." The Republicans’ position is clear—you get the sense that their congressional leadership would like to see Netanyahu go big-time and get on the ballot in Iowa and New Hampshire—but what about Hillary Clinton? Will she have the political courage to speak frankly and risk alienating some of her more conservative donors? The Palestinians, for their part, have every reason to believe that Netanyahu has shown both his hand and his heart; they will likely drop any thought of negotiations and take their campaign for statehood to the United Nations. For the first time, they may not face a reflexive veto from the United States.

Netanyahu, of course, does not view himself as Richard Nixon. In his imagination, he is Winston Churchill, the valorous protector of his nation, the singular leader of clear, unerring vision. Nearly two hundred former Israeli military and security chiefs, none of them naïve about the multiple dangers of the Middle East, have declared that further brinkmanship threatens the long-term stability of the nation. But Netanyahu is sure that he knows better. The tragedy is that the likely price of his vainglory is the increasing isolation of a country founded as a democratic refuge for a despised and decimated people. He will soon surpass David Ben-Gurion as Israel’s longest consecutively serving Prime Minister. Unfortunately, this has given Netanyahu plenty of time to erode the tone of his country’s political discourse. And so now, as he forms an unabashedly right-wing and religious government, he stands in opposition not only to the founding aspirations of his nation but also to those Israelis—Jews and Arabs—who stand for tolerance, equality, democratic ideals, and a just, secure peace. ♦

Sunday, March 22, 2015

The 100-year story of Miami Beach

By Andres Viglucci, Miami Herald
In the beginning there was a slender sandspit of mangroves and swamp, mosquitoes and crocodiles, palmetto scrub and sea-stroked beach.
And Carl Fisher said, "Let dry ground appear." So he spent a good part of his fortune to put fearsome machines to work for 15 years pumping up muck from Biscayne Bay, and it was so. Carl Fisher called the dry ground "Miami Beach" and saw it was good, and so did millions of people after him.
So thoroughly did founding father Fisher and his crews erase most traces of nature from what writer Polly Redford dubbed the Billion-Dollar Sandbar that it’s easy to forget today, as Miami Beach marks its centennial as an incorporated city in characteristically hyped-up fashion, just how completely a manufactured place it is.
Even the famed wide sandy beach is artificial, barged in from offshore in a latter-day echo of Fisher’s land-making. The one nature put there washed away years ago, its erosion accelerated by construction of the endless parade of hotels that made Miami Beach Miami Beach.
Yet as human inventions go, the Beach has been an outlandishly successful one, having turned itself in a century of compressed but eventful history from millionaire’s caprice to global darling after rising Lazarus-like from near death.
In its heyday, a period from the late 1940s to the early ’60s in which it devised and perfected the modern resort hotel and mass tourism, Miami Beach was America’s Playground. It is now the world’s — a dynamic magnet for people and their money, a sparkling showcase of architecture old and new, an international shopping bazaar, a gourmandizer’s paradise and, much to the amazement of its natives, a cultural standard-bearer.
In short, Miami Beach on its 100th birthday has completed its most miraculous transformation: "It’s a real place," said Beach native Mitchell "Micky" Wolfson Jr., founder of its Wolfsonian-FIU museum, a gem in a crown that includes the New World Symphony and its Frank Gehry-designed home, Miami City Ballet’s headquarters, a reinvigorated and soon-to-expand Bass Museum, and the annual Art Basel/Miami Beach extravaganza.
"It’s not just the beach anymore. We have cultural stimulation, and the arts and gastronomy," Wolfson, 75, said. "No one could have believed or imagined the town would have become a full, year-round destination and a place where people are making their lives. It has gone from a small-town resort to a great city."


Sea rise threat

And, without question, one facing some very real challenges, the greatest being the rising seas that threaten to overwhelm the low-lying city within the lifetime of today’s young clubgoers. Nature’s revenge is forcing Miami Beach to reinvent itself once more as the city again finds itself at the forefront of something entirely new.
Mayor Philip Levine has convened a special committee to draw up a response plan to sea-level rise that looks at everything from raising streets and creating protective berms to building more pumps to draw water out, while raising buildings and their plumbing, electrical and mechanical equipment in the air to make them less vulnerable to flooding.
It might even require, some experts say, building artificial beaches as a barrier against flooding along the bayside that Fisher had built up, which lies lower than the natural beachfront. Fire up the dredges!
Then there are the consequences of untrammeled popularity on a narrow, built-out peninsula: the automobile traffic that chokes the city’s streets and causeways at all hours; the speculators demolishing the trove of unprotected 1920s Mediterranean manses that defined its neighborhoods, to be replaced by mega-mansions for mega-millionaires; and the increasing development pressures in historic commercial districts like Lincoln Road and Washington Avenue that preservationists fear could doom the human scale and unique sense of place they fought so hard to protect.
"It was the preservation movement that made this all happen, the realization of what a beautiful place it was and that the buildings were outstanding," said preservationist and former Beach commissioner Nancy Liebman. "But some people say we created a monster. Now we have people jumping in to take advantage of the success and the worldwide attention, and the sustainability of the city is questionable, between the flooding, the congestion and the overdevelopment."
Just look at the numbers compiled by the city: After losing population for years, the Beach has rebounded some of the way back to just over 90,000 full-time residents. Once you add in seasonal residents, tourists, day trippers and workers, the number of people in the city on any given day more than doubles to around 203,000 — nearly all of them hunting for a place to park.
That’s a problem many cities wish they had, Beach boosters note. But, to be sure, the noise and vitality of the Beach today are nothing like what Fisher, his contemporary J.N. Lummus or even Barbara Capitman, the preservationist who led the campaign in the ’70s and ’80s that saved South Beach and thus the city itself, ever envisioned, one historian says.
"Over 100 years, it’s gone from a swamp to a madhouse," Howard Kleinberg, author of Miami Beach: A History, said — only half in jest. "Now it’s going in another direction. It gets glitzier and glitzier every day.
"Any history written about Miami Beach should be placed in a loose-leaf binder, because it changes all the time. You would have to tear out the pages and start over again. Miami Beach is so temporary. Yet it’s still there. I don’t know how you go about explaining that."


Quakers hit the beach

And to think it all started with sober Quakers.
Around the turn of the last century, John Collins, a prosperous Quaker farmer from New Jersey, and his son-in-law Thomas Pancoast took over a failing coconut plantation that Henry Lum and his Quaker partners, Elnathan Field and Ezra Osborne, had established on the sandbar in the 1880s. It was the first attempt at building something permanent on the overlooked wilderness across Biscayne Bay from the fledgling city of Miami.
Collins and Pancoast converted the plantation to an avocado grove. When that venture stalled, they pivoted anew. By then entrepreneurs had built two bathing "casinos" on what is now South Beach, bringing in Miamians by ferry. Collins devised the idea of building a wooden bridge to the mainland, the longest wooden bridge in the world at the time, as a way to begin developing their land.
By 1912, with the bridge halfway done, they ran short of money and turned to Fisher, a manically energetic automobile entrepreneur and founder of the Indianapolis 500 race, who was wintering on Miami’s Brickell Avenue. Fisher put up $50,000 to finish the bridge, located where the Venetian Causeway stands today, and got 200 acres of their beach land. From that base, Fisher began building an empire of grand hotels and oceanfront estates, fueled by his millions, boundless ambition and a knack for promotional stunts that would grab America’s attention.
Fisher also built the Dixie Highway to bring people from Michigan to Miami Beach’s doorstep — but only the right people. Adhering to the legal and social discrimination of the day, none of the early Beach developers sold to blacks, and big "Gentiles Only" signs on the hotels and apartment houses kept Jews out. Only the Lummus brothers, who developed an area south and just north of Fifth Street, welcomed Jews.
Fisher made exceptions for wealthy Jews, some of whom were close friends.
"Fisher saw two kinds of Jews," Kleinberg said. "With Fisher it was money."
Development of the Beach exploded after the end of World War I, brought on by completion of the County Causeway (now the MacArthur) in 1920, the dawn of the Roaring ’20s and Miami and Miami Beach’s infamous boom. As America discovered Miami Beach, hotels would be jammed full as soon as they opened.
It wouldn’t last. Fisher lost his fortune on an ill-fated attempt to create another Miami Beach on Long Island, and then the 1926 hurricane put an abrupt stop to the boom. The subsequent stock market crash of 1929 had one beneficial effect: Restrictions on Jews began easing as developers became increasingly desperate for sales, though they would not be legally lifted until after World War II, and some persisted until a landmark 1959 Florida Supreme Court decision.
For blacks it was another matter. For years the city required all Beach hotel workers to carry I.D. cards, but black workers were subject to a curfew that required them to be off streets in white areas after dark. Locals still recall black workers walking across the Venetian Causeway at dusk on their way home to Overtown well into the 1960s. Even famed entertainers like Nat King Cole left the Beach after performing to sleep in Overtown and later Liberty City.
But Beach authorities were happy to overlook other rules. By the time Chicago mobster Al Capone came to town in 1928, in the middle of Prohibition, the Beach was a wide-open town for gambling and illegal booze. Fisher himself loaded his yachts with liquor in Bimini. Gangsters and bookies operated mostly unmolested until hearings by the Senate’s Kefauver Committee in 1950 prompted a crackdown, scattering figures such as Meyer Lansky south to Havana.
The Beach did recover early from the Great Crash, however. Americans’ appetite for Beach getaways survived the Depression only somewhat diminished, leading to the construction in the 1930s of the hundreds of small yet distinctive Streamline Moderne apartment houses and hotels that today make up the city’s signature Art Deco District, the heart of South Beach.
Only the advent of World War II, and the merchant ships that burned and sank in full view of the beach after being torpedoed by German U-Boats in the Gulf Stream, stalled the city’s tourist trade. Blackouts were imposed on the Beach, and nearly every hotel room was turned over to the government to house thousands of military recruits brought to the Beach for training. Those recruits helped fuel a new boom when they came back for good to the seductive subtropics they had discovered in Miami and the Beach.
What was likely the Beach’s Golden Age arrived with the opening in 1948 of the Modernist Saxony Hotel by architect Roy France, followed a year later by the Sans Souci, started by France and finished by the architect who would indelibly stamp the Beach with his brash brand of extravagant Modernism, Morris Lapidus.
Air-conditioning, cheap airline packages and the age of mass travel again transformed the city. Now came ever-bigger, ever more lavish hotels like Lapidus’ Fontainebleau and Eden Roc, the nightclubs, Sinatra and the Rat Pack, and the Jackie Gleason Show. The Beach now sat squarely in America’s frontal lobe.
Away from the glitz and glamour, on the west side of Indian Creek and north of Dade Boulevard, and south of Fifth, was a different Beach — a close-knit residential community made up of people who worked in tourism, or lawyers and doctors, and centered in small-town fashion around schools, churches and synagogues. Those who grew up in it recall it with nostalgia and affection.
"Growing up on the Beach was wonderful, particularly when the tourists left and we had it all to ourselves," said JoAnn Bass, granddaughter of Joe Weiss, founder of Joe’s Stone Crab restaurant, which is the lone surviving link to the Beach’s pioneering days. "We were free to ride our bicycles up and down Lincoln Road. It was joy."
That the Beach by then had become a Jewish paradise might have shocked Fisher. Jews vacationed on the Beach, built its new hotels and stayed for good after retiring.
Then it all collapsed. Tourism dried up amid competition from new Caribbean resorts, the dawn of the ’60s and the coming of age of the Baby Boomers, who wouldn’t be caught dead in their parents’ kitschy Miami Beach. South Beach hotels turned into retirement homes for elderly Jews surviving on Social Security.
By 1970 the average age on the Beach was 62. Miami Beach became a cliché, a punchline to jokes about early-bird specials and "God’s Waiting Room." The Fontainebleau went into bankruptcy. In 1980, refugees from the Mariel boatlift, some of them hardened criminals, flooded South Beach. Everyone else avoided it. Crime soared. Ocean Drive and Lincoln Road were desolate. Many wrote the neighborhood off. City leaders wanted to tear it all down.


Deco movement to the rescue

Amid fierce resistance, Capitman led an improbably successful movement to preserve South Beach’s Deco district that slowly began attracting investors, artists, gays and a wildly popular TV show called Miami Vice. It made the city hip again, across the nation and around the world.
The relentless wave of renovations and ever-more deluxe additions the South Beach revival engendered has now spread south of Fifth Street, which the Lummus brothers would hardly recognize. An expanded Joe’s thrives amid the towers of the 1990s and new low-rise, glass-enclosed condos that sell for millions. The Beach’s very first hotel, the wood-framed Brown’s, has come back as Prime 112, where NBA stars sup on Kobe beef.
It’s also spreading north to sleepy mid-Collins. The long-shuttered Saxony, which launched the Golden Age, is coming back as part of Argentinian Alan Faena’s ultra-luxurious multi-block, mixed-use, arts-centered redevelopment. North of it the Fontainebleau, renovated to the tune of $1 billion, is jam-packed again.
The elegant Mediterranean mansion that Micky Wolfson grew up in and lived in most of his adult life, and which he sold a decade ago, just fetched $22 million. He got less than half that. "That amused me," Wolfson said from Paris, where he now lives.
But as the Beach increasingly becomes a landing spot for billionaires and the merely very wealthy, some fear there will be no place on it for anyone else. The risk that success will spoil Miami Beach is real, they say.
"Was it ever realistic to think you could freeze it in amber?" said Neisen Kasdin, a Beach native, preservationist and former mayor who admires the sophisticated place his hometown has become. "You can’t stop the evolution. People are going to come. But my concern is there is no place for the middle class and the working professionals."
But still they keep on coming, the New Yorkers and the South Americans and the Europeans, the multimillionaires with the multimillion-dollar pieds-à-terre and the college kids looking for some action. And no one thinks it’s going to stop anytime soon.
"I don’t think Miami Beach will disappear into the dust," Kleinberg said. "It’s going to continue to change. I can’t envision what it will be 100 years from now. All these predictions have it under water by then, of course.
"But just not tomorrow."



Wednesday, March 11, 2015


Nicholas Carr, Rough Type (blog)

 

An earlier version of this essay appeared last year, under the headline "The Manipulators," in the Los Angeles Review of Books.

Since the launch of Netscape and Yahoo twenty years ago, the story of the internet has been one of new companies and new products, a story shaped largely by the interests of entrepreneurs and venture capitalists. The plot has been linear; the pace, relentless. In 1995 came Amazon and Craigslist; in 1997, Google and Netflix; in 1999, Napster and Blogger; in 2001, iTunes; in 2003, MySpace; in 2004, Facebook; in 2005, YouTube; in 2006, Twitter; in 2007, the iPhone and the Kindle; in 2008, Airbnb; in 2010, Instagram and Uber; in 2011, Snapchat; in 2012, Coursera; in 2013, Tinder. It has been a carnival ride, and we, the public, have been the giddy passengers.

The story may be changing now. Though the current remains swift, eddies are appearing in the stream. Last year, the big news about the net came not in the form of buzzy startups or cool gadgets, but in the shape of two dry, arcane documents. One was a scientific paper describing an experiment in which researchers attempted to alter the moods of Facebook users by secretly manipulating the messages they saw. The other was a ruling by the European Union’s highest court granting citizens the right to have outdated or inaccurate information about them erased from Google and other search engines. Both documents provoked consternation, anger, and argument. Both raised important, complicated issues without resolving them. Arriving in the wake of Edward Snowden’s revelations about the NSA’s online spying operation, both seemed to herald, in very different ways, a new stage in the net’s history — one in which the public will be called upon to guide the technology, rather than the other way around. We may look back on 2014 as the year the internet began to grow up.

* * *

The Facebook study seemed fated to stir up controversy. Its title read like a bulletin from a dystopian future: Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks. But when, on June 2, 2014, the article first appeared on the website of the Proceedings of the National Academy of Sciences (PNAS), it drew little notice or comment. It sank quietly into the vast swamp of academic publishing. That changed abruptly three weeks later, on June 26, when technology reporter Aviva Rutkin posted a brief account of the study on the website of New Scientist magazine. She noted that the research had been run by a Facebook employee, a social psychologist named Adam Kramer who worked in the firm’s large Data Science unit, and that it had involved more than half a million members of the social network. Smelling a scandal, other journalists rushed to the PNAS site to give the paper a read. They discovered that Facebook had not bothered to inform its members about their participation in the experiment, much less ask their consent.

Outrage ensued, as the story pinballed through the media. "If you were still unsure how much contempt Facebook has for its users," declared the technology news site PandoDaily, "this will make everything hideously clear." A New York Times writer accused Facebook of treating people like "lab rats," while The Washington Post, in an editorial, criticized the study for "cross[ing] an ethical line." US Senator Mark Warner called on the Federal Trade Commission to investigate the matter, and at least two European governments opened probes. The response from social media was furious. "Get off Facebook," tweeted Erin Kissane, an editor at a software site. "If you work there, quit. They’re fucking awful." Writing on Google Plus, the privacy activist Lauren Weinstein wondered whether Facebook "KILLED anyone with their emotion manipulation stunt."

The ethical concerns were justified. Although Facebook, as a private company, is not bound by the informed-consent guidelines of universities and government agencies, its decision to carry out psychological research on people without telling them was at best rash and at worst reprehensible. It violated the US Department of Health & Human Services’ policy for the protection of human research subjects (known as the "Common Rule") as well as the ethics code of the American Psychological Association. Making the transgression all the more inexcusable was the company’s failure to exclude minors from the test group. The fact that the manipulation of information was carried out by the researchers’ computers rather than by the researchers themselves — a detail that Facebook offered in its defense — was beside the point. As University of Maryland law professor James Grimmelmann observed, psychological manipulation remains psychological manipulation "even when it’s carried out automatically."

Still, the intensity of the reaction seemed incommensurate with its object. Once you got past the dubious ethics and the alarming title, the study turned out to be a meager piece of work. Earlier psychological research had suggested that moods, like sneezes, could be contagious. If you hang out with sad people, you’ll probably end up feeling a little blue yourself. Kramer and his collaborators (the paper was coauthored by two Cornell scientists) wanted to see if such emotional contagion might also be spread through online social networks. During a week in January 2012, they programmed Facebook’s News Feed algorithm — the program that selects which messages to funnel onto a member’s home page and which to omit — to make slight adjustments in the "emotional content" of the feeds delivered to a random sample of members. One group of test subjects saw a slightly higher number of "positive" messages than normal, while another group saw slightly more "negative" messages. To categorize messages as positive or negative, the researchers used a standard text-analysis program, called Linguistic Inquiry and Word Count, that spots words expressing emotions in written works. They then evaluated each subject’s subsequent Facebook posts to see whether the emotional content of the messages had been influenced by the alterations in the News Feed.
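
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of word-count scoring and feed filtering described above. It is an illustration under stated assumptions, not Facebook’s actual code: the word lists, sample posts, and omission rule are invented, LIWC’s real dictionaries are proprietary and far larger, and the News Feed algorithm is not public.

# Illustrative sketch only: toy emotion-word dictionaries and a toy omission rule.
# Neither LIWC's real dictionaries nor Facebook's News Feed algorithm is public.

POSITIVE = {"happy", "love", "great", "wonderful", "joy"}
NEGATIVE = {"sad", "angry", "terrible", "awful", "hate"}

def emotional_label(post):
    """Label a post 'positive', 'negative', or 'neutral' by counting emotion words."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def filtered_feed(posts, suppress):
    """Return a feed with every other post of the suppressed emotional label omitted."""
    feed, seen = [], 0
    for post in posts:
        if emotional_label(post) == suppress:
            seen += 1
            if seen % 2 == 1:  # toy rule: drop every other matching post
                continue
        feed.append(post)
    return feed

if __name__ == "__main__":
    sample = [
        "What a wonderful, happy day!",
        "Feeling sad and angry about the news.",
        "Lunch was fine.",
        "I hate this terrible traffic.",
    ]
    print(filtered_feed(sample, suppress="negative"))

Run on the four sample posts, the sketch drops one of the two negative posts from the feed; that is the basic shape of the intervention whose downstream effects the researchers then measured.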

The researchers did discover an influence. People exposed to more negative words went on to use more negative words than would have been expected, while people exposed to more positive words used more of the same — but the effect was vanishingly small, measurable only in a tiny fraction of a percentage point. If the effect had been any more trifling, it would have been undetectable. As Kramer later explained, in a contrite Facebook post, "the actual impact on people in the experiment was the minimal amount to statistically detect it — the result was that people produced an average of one fewer emotional word, per thousand words, over the following week." As contagions go, that’s a pretty feeble one. It seems unlikely that any participant in the study suffered the slightest bit of harm. As Kramer admitted, "the research benefits of the paper may not have justified all of this anxiety."

* * *

What was most worrisome about the study lay not in its design or its findings, but in its ordinariness. As Facebook made clear in its official responses to the controversy, Kramer’s experiment was just the visible tip of an enormous and otherwise well-concealed iceberg. In an email to the press, a company spokesperson said the PNAS study was part of the continuing research Facebook does to understand "how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow." Sheryl Sandberg, the company’s chief operating officer, reinforced that message in a press conference: "This was part of ongoing research companies do to test different products, and that was what it was." The only problem with the study, she went on, was that it "was poorly communicated." A former member of Facebook’s Data Science unit, Andrew Ledvina, told The Wall Street Journal that the in-house lab operates with few restrictions. "Anyone on that team could run a test," he said. "They’re always trying to alter people’s behavior."

Businesses have been trying to alter people’s behavior for as long as businesses have been around. Marketing departments and advertising agencies are experts at formulating, testing, and disseminating images and words that provoke emotional responses, shape attitudes, and trigger purchases. From the apple-cheeked Ivory Snow baby to the chiseled Marlboro man to the moon-eyed Cialis couple, we have for decades been bombarded by messages intended to influence our feelings. The Facebook study is part of that venerable tradition, a fact that the few intrepid folks who came forward to defend the experiment often emphasized. "We are being manipulated without our knowledge or consent all the time — by advertisers, marketers, politicians — and we all just accept that as a part of life," argued Duncan Watts, a researcher who studies online behavior for Microsoft. "Marketing as a whole is designed to manipulate emotions," said Nicholas Christakis, a Yale sociologist who has used Facebook data in his own research.

The "everybody does it" excuse is rarely convincing, and in this case it’s specious. Thanks to the reach of the internet, the kind of psychological and behavioral testing that Facebook does is different in both scale and kind from the market research of the past. Never before have companies been able to gather such intimate data on people’s thoughts and lives, and never before have they been able to so broadly and minutely shape the information that people see. If the Post Office had ever disclosed that it was reading everyone’s mail and choosing which letters to deliver and which not to, people would have been apoplectic, yet that is essentially what Facebook has been doing. In formulating the algorithms that run its News Feed and other media services, it molds what its billion-plus members see and then tracks their responses. It uses the resulting data to further adjust its algorithms, and the cycle of experiments begins anew. Because the algorithms are secret, people have no idea which of their buttons are being pushed — or when, or why.

Facebook is hardly unique. Pretty much every internet company performs extensive experiments on its users, trying to figure out, among other things, how to increase the time they spend using an app or a site, or how to increase the likelihood they will click on an advertisement or a link. Much of this research is innocuous. Google once tested 41 different shades of blue on a web-page toolbar to determine which color would produce the most clicks. But not all of it is innocuous. You don’t have to be paranoid to conclude that the PNAS test was far from the most manipulative of the experiments going on behind the scenes at internet companies. You only have to be sensible.

That became clear, in the midst of the Facebook controversy, when another popular web operation, the matchmaking site OKCupid, disclosed that it routinely conducts psychological research in which it doctors the information it provides to its love-seeking clientele. It has, for instance, done experiments in which it altered people’s profile pictures and descriptions. It has even circulated false "compatibility ratings" to see what happens when ill-matched strangers believe they’ll be well-matched couples. OKCupid was not exactly contrite about abusing its customers’ trust. "Guess what, everybody," blogged the company’s cofounder, Christian Rudder: "if you use the internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work."

The problem with manipulation is that it hems us in. It weakens our volition and circumscribes our will, substituting the intentions of others for our own. When efforts to manipulate us are hidden from us, the likelihood that we’ll fall victim to them grows. Other than the dim or gullible, most people in the past understood that corporate marketing tactics, from advertisements to celebrity endorsements to package designs, were intended to be manipulative. As long as those tactics were visible, we could evaluate them and resist them — maybe even make jokes about them. That’s no longer the case, at least not when it comes to online services. When companies wield moment-by-moment control over the flow of personal correspondence and other intimate or sensitive information, tweaking it in ways that are concealed from us, we’re unable to discern, much less evaluate, the manipulative acts. We find ourselves inside a black box.

* * *

Put yourself in the shoes of Mario Costeja González. In 1998, the Spaniard ran into a little financial difficulty. He had defaulted on a debt, and to pay it off he was forced to put some real estate up for auction. The sale was duly noted in the venerable Barcelona newspaper La Vanguardia. The matter settled, Costeja González went on with his life as a graphologist, an interpreter of handwriting. The debt and the auction, as well as the 36-word press notice about them, faded from public memory. The bruise healed.

But then, in 2009, nearly a dozen years later, the episode sprang back to life. La Vanguardia put its archives online, Google’s web-crawling "bot" sniffed out the old article about the auction, the article was automatically added to the search engine’s database, and a link to it began popping into prominent view whenever someone in Spain did a search on Costeja’s name. Costeja was dismayed. It seemed unfair to have his reputation sullied by an out-of-context report on an old personal problem that had long ago been resolved. Presented without explanation in search results, the article made him look like a deadbeat. He felt, as he would later explain, that his dignity was at stake.

Costeja lodged a formal complaint with the Spanish government’s data-protection agency. He asked the regulators to order La Vanguardia to remove the article from its website and to order Google to stop linking to the notice in its search results. The agency refused to act on the request concerning the newspaper, citing the legality of the article’s original publication, but it agreed with Costeja about the unfairness of the Google listing. It told the company to remove the auction story from its results. Appalled, Google appealed the decision, arguing that in listing the story it was merely highlighting information published elsewhere. The dispute quickly made its way to the Court of Justice of the European Union in Luxembourg, where it became known as the "right to be forgotten" case. On May 13, 2014, the high court issued its decision. Siding with Costeja and the Spanish data-protection agency, the justices ruled that Google was obligated to obey the order and remove the La Vanguardia piece from its search results. The upshot: European citizens suddenly had the right to get certain unflattering information about them deleted from search engines.

Most Americans, and quite a few Europeans, were flabbergasted by the decision. They saw it not only as unworkable (how can a global search engine processing some six billion searches a day be expected to evaluate the personal grouses of individuals?), but also as a threat to the free flow of information online. Many accused the court of licensing censorship or even of creating "memory holes" in history.

But the heated reactions, however understandable, were off the mark. They reflected a misinterpretation of the decision. The court had not established a "right to be forgotten." That essentially metaphorical phrase is mentioned only in passing in the ruling, and its attachment to the case has proven a distraction. In an open society, where freedom of thought and speech are protected, where people’s thoughts and words are their own, a right to be forgotten is as untenable as a right to be remembered. What the case was really about was an individual’s right not to be systematically misrepresented. But even putting the decision into those more modest terms is misleading. It implies that the court’s ruling was broader than it actually was.

The essential issue the justices were called upon to address was how, if at all, a 1995 European Union policy on the processing of personal data, the so-called Data Protection Directive, applied to companies that, like Google, engage in the large-scale aggregation of information online. The directive had been enacted to ease the cross-border exchange of data, while also establishing privacy and other protections for citizens. "Whereas data-processing systems are designed to serve man," the policy reads, "they must, whatever the nationality or residence of natural persons, respect their fundamental rights and freedoms, notably the right to privacy, and contribute to economic and social progress, trade expansion and the well-being of individuals." To shield people from abusive or unjust treatment, the directive imposed strict regulations on businesses and other organizations that act as "controllers" of the processing of personal information. It required, among other things, that any data disseminated by such controllers be not only accurate and up-to-date, but fair, relevant, and "not excessive in relation to the purposes for which they are collected and/or further processed." What the directive left unclear was whether companies that aggregated information produced by others — companies like Google and Facebook — fell into the category of controllers. That was what the court had to decide.

Search engines, social networks, and other online aggregators have always presented themselves as playing a neutral and essentially passive role when it comes to the processing of information. They’re not creating the content they distribute — that’s done by publishers in the case of search engines, or by individual members in the case of social networks. Rather, they’re simply gathering the information and arranging it in a useful form. This view, tirelessly promoted by Google — and used by the company as a defense in the Costeja case — has been embraced by much of the public. It has become the default view. When Wikipedia cofounder Jimmy Wales, in criticizing the European court’s decision, said, "Google just helps us to find the things that are online," he was not only mouthing the company line; he was expressing the popular conception of information aggregators.

The court took a different view. Online aggregation is not a neutral act, it ruled, but a transformative one. In collecting, organizing, and ranking information, a search engine is creating something new: a distinctive and influential product that reflects the company’s own editorial intentions and judgments, as expressed through its information-processing algorithms. "The processing of personal data carried out in the context of the activity of a search engine can be distinguished from and is additional to that carried out by publishers of websites," the justices wrote. "Inasmuch as the activity of a search engine is therefore liable to affect significantly […] the fundamental rights to privacy and to the protection of personal data, the operator of the search engine as the person determining the purposes and means of that activity must ensure, within the framework of its responsibilities, powers and capabilities, that the activity meets the requirements of [the Data Protection Directive] in order that the guarantees laid down by the directive may have full effect."

The European court did not pass judgment on the guarantees established by the Data Protection Directive, nor on any other existing or prospective laws or policies pertaining to the processing of personal information. It did not tell society how to assess or regulate the activities of aggregators like Google or Facebook. It did not even offer an opinion as to the process companies or lawmakers should use in deciding which personal information warranted exclusion from search results — an undertaking every bit as thorny as it’s been made out to be. What the justices did, with perspicuity and prudence, was provide us with a way to think rationally about the algorithmic manipulation of digital information and the social responsibilities it entails. The interests of a powerful international company like Google, a company that provides an indispensable service to many people, do not automatically trump the interests of a lone individual. When it comes to the operation of search engines and other information aggregators, fairness is at least as important as expedience.

Ten months have passed since the court’s ruling, and we now know that the judgment is not going to "break the internet," as was widely predicted when it was issued. The web still works. Google has a process in place for adjudicating requests for the removal of personal information — it accepts about forty percent of them — just as it has a process in place for adjudicating requests to remove copyrighted information. Last month, Google’s Advisory Council on the Right to Be Forgotten issued a report that put the ruling and the company’s response into context. "In fact," the council wrote, "the Ruling does not establish a general Right to Be Forgotten. Implementation of the Ruling does not have the effect of ‘forgetting’ information about a data subject. Instead, it requires Google to remove links returned in search results based on an individual’s name when those results are ‘inadequate, irrelevant or no longer relevant, or excessive.’ Google is not required to remove those results if there is an overriding public interest in them ‘for particular reasons, such as the role played by the data subject in public life.’" It is possible, in other words, to strike a reasonable balance between an individual’s interests, the interests of the public in finding information quickly, and the commercial interests of internet companies.

* * *

We have had a hard time thinking clearly about companies like Google and Facebook because we have never before had to deal with companies like Google and Facebook. They are something new in the world, and they don’t fit neatly into our existing legal and cultural templates. Because they operate at such unimaginable magnitude, carrying out millions of informational transactions every second, we’ve tended to think of them as vast, faceless, dispassionate computers — as information-processing machines that exist outside the realm of human intention and control. That’s a misperception, and a dangerous one.

Modern computers and computer networks enable human judgment to be automated, to be exercised on a vast scale and at a breathtaking pace. But it’s still human judgment. Algorithms are constructed by people, and they reflect the interests, biases, and flaws of their makers. As Google’s founders themselves pointed out many years ago, an information aggregator operated for commercial gain will inevitably be compromised and should always be treated with suspicion. That is certainly true of a search engine that mediates our intellectual explorations; it is even more true of a social network that mediates our personal associations and conversations.

Because algorithms impose on us the interests and biases of others, we have a right and an obligation to carefully examine and, when appropriate, judiciously regulate those algorithms. We have a right and an obligation to understand how we, and our information, are being manipulated. To ignore that responsibility, or to shirk it because it raises hard problems, is to grant a small group of people — the kind of people who carried out the Facebook and OKCupid experiments — the power to play with us at their whim.

Tuesday, March 10, 2015


Our Times: Looking at the United States in 2015


 

David A Fairbanks

From the first day of the first arrival in the New World in the 15th century, the central theme has been a battle between those who profess a ‘faith-inspired’ rural vision of society and those seeking a ‘cosmopolitan’ or ‘urbane’ vision of society.

History is littered with examples of this clash of ideals. 

Roman civilization finally foundered on this issue. Traditionally urbane and secular, Rome tolerated religion or faith as a ‘comfort’ issue, and as long as religion respected the ‘temporal’ authority of the Roman Emperor, everyone was safe.

Christianity proved to be the bane of Roman civilization, and it ultimately, if unintentionally, contributed to the demise of secular rule and of Rome’s ability to engage in realpolitik and thus maintain a rationalist understanding of the world.

The United States suffers the same tragic conflict and in time may well falter on this issue. 

While the North and eventually the West embraced secularism in politics, much of the South has defined itself in terms of Christian faith, especially Calvinism (the elect and the rest of us) and the latter-day teachings of John Nelson Darby and his dispensationalist theology (salvation through deeds and the supremacy of moral values), which gained almost universal acceptance in rural America.

Country vs. City is an ancient issue and it thrives today. 

After twenty years of humiliation at the polls (1932-1952), the Republican Party resorted to a time-honored tactic of appealing to everyone resentful of urbane Democrats and the secularist style of Franklin Roosevelt and particularly Jack Kennedy and, later, Lyndon Johnson.

Richard Nixon’s ‘Southern Strategy,’ a willful commitment to ‘slowing’ the eventual integration of the races, achieved spectacular success in the South and rural America.

Everyone involved understood that raw segregation and violent enforcement of ‘Jim Crow’ were not acceptable anymore; softer, more nuanced discrimination was. Bigotry in a velvet glove was the future.

Ronald Reagan, no bigot and known for steering Hollywood away from social prejudice during his time with the Screen Actors Guild, was the perfect politician for the velvet-glove implementation of change.

The coming of Barack Obama, the ultimate ‘urbane secularist’ president, was certain to arouse resentment and create a very real ‘race panic,’ and it did.

The Republicans rightly fear Mr. Obama because he is proof that traditional race politics is doomed as the country moves further away from ‘faith culture’ and into a still fuzzy, undefined ‘internet’ culture and further globalization.

Republicans have yet to define themselves in a recognizable 21st-century persona. They have only the last vestiges of Nixon and Reagan politics; George H.W. Bush and his son George W. Bush have been discredited over unrelated issues.

Barack Obama’s genuine intelligence, urban personality and rejection of the ‘guns and religion’ culture make him a marked man. To accept him on any level, however beneficial to the country, is anathema to the Republicans because their base is not yet ready. The South and rural Midwest are in drastic economic transition and are still coping with modern media culture. Resentment and dread are the fuel of GOP politics in much of the South, and any politician who ignores the intensity of these sentiments risks electoral ruin.

Very few Southerners have any working understanding of the ‘Old South’ aristocracy or any specific interest in the ‘Plant agent’ theology of the 19th century.

But they are proud of who they think they are and resentful of outsiders who snicker at them. The South will never let go of its mythologies, and the North and West must not forget this.

Few Americans actually believe in racial superiority anymore. But evidence shows, as it has for a century, that economic instability and high unemployment exacerbate racial animus and force politicians to acknowledge these fears.

Anxiety over race has flared up in recent years in direct proportion to economic stress and a general malaise in the national political culture. 

Whites in America are immediately challenged by a president who by nature is a ‘passive academic,’ eschewing the aggressive, stoic ‘John Wayne’ mentality that prevailed for decades.

The Republicans know they must change their public posture and accept 21st-century culture as it defines itself, but at present they must sustain their base and not lose elections. This is a huge challenge, fraught with danger.

Hillary Clinton has the unhappy task of offering the country a ‘post-Obama’ presidency, and thus far she has not done very well. She does not yet have a credible strategy, and her public appearances regrettably show an older woman, a refugee from the 1990s.

Only Governor Scott Walker of Wisconsin appears to have a consistent and believable agenda and a public face recognizable to traditionalists and the conservative faith community. Visible scorn for him from the New York-based media and the secular North and West gives him credibility in the South and Midwest. He is seen as a sincere person of faith willing to defeat unionists and ‘liberal’ urbanites who stand too close to Blacks and ‘Black Urban Culture.’

Racism has become a hot topic in recent years and will remain one until the employment picture among blue-collar workers stabilizes.

Fear of Islamic terrorism has also infected politics and both Republicans and Democrats have yet to develop a credible response. 


Regrettably, it is unlikely Hillary Clinton will reach the presidency. Political culture moves forward, almost never backwards. The 1990s are over, and it may well be that 2008 was Mrs. Clinton’s one real chance. Without her the Democrats face an electoral defeat in 2016. The Republicans may win, but they will face a loss in 2020 unless they can garner support from independent moderates and so-called Reagan Democrats. The GOP president must immediately broaden his base and not arouse urban secularist fears. The Democrats can look to 2020 as a classic ‘progress’ election, branding the incumbent Republican as a needless, reactionary ‘throwback’ and offering a positive vision of the 2020s.

The battle between the moralist and the secularist persists and will for a long time. Americans have never embraced either viewpoint in full. Secularism can never be an absolute, because it threatens heritage beliefs and accepted ‘faith’ wisdom. Faith culture or moralist society should never be absolute, because it tends toward orthodoxy and violent responses to unexpected change. Absolutism is self-defeating; every generation must have the freedom to define itself and respond to contemporary circumstance. Economic conditions always determine the emotional character of a nation.

The United States has suffered a number of hysterias when the political culture became rigid and unable to change fast enough. 

2016 will be a bitter and contentious election as both Urbanites and Moralists seek control over the future. 

I personally stand with the secularist urbanites. 

David A Fairbanks

Reno Nevada
3/10/2015
J.P. O’Malley, The Daily Beast

 



The Civil War is Not Just For Americans

Most Americans think of the Civil War as a series of battles that mattered most to the U.S. A new book shows that it also mattered a great deal worldwide.


Of all the conflicts that have raged in American history, the Civil War remains the bloodiest.

On battlefields such as Shiloh, Antietam, and Gettysburg, the death toll averaged out to 425 men per day. This continued for 1,458 consecutive days, leaving an estimated 620,000 dead when the last shots were fired. Scaled to today’s population, which is roughly ten times larger than that of 1860, that toll would come to around 6 million, a body count proportionately far greater than the number of fatalities the United States experienced during World Wars I and II combined.

The Civil War left an enormous imprint on the American consciousness in much the same way as World War I did on the European mindset. For both wars, the notion of remembrance is sacrosanct.

But if the Great War is spoken about in terms of regret, failure, and unnecessary loss of human life—where soldiers died for nothing more than violence for the sake of violence—the Civil War, in American culture at least, is seen as a necessary struggle, one that finally solidified the ideals that the Founding Fathers had laid down more than 80 years earlier when they launched a republic. Put simply, the Civil War is seen as the American Revolution, part two.

In recent years the enormous scale of destruction has been the focus of fascinating texts.
This Republic of Suffering: Death and the American Civil War by Drew Gilpin Faust is one such example. Many scholars had previously believed that a new phase of violence—in which technological advances made it increasingly possible to slaughter large numbers of people at a time—only became possible during World War I.


But Faust argues that, relative to scale, the Civil War was as violent as anything that followed in the 20th century.

During the Civil War 3½ million men bore arms. This made up almost the entire population of those who were of military age in both the South and North.

The scale of the armies was enormous, too: in a single battle there might be 100,000 men on each side, and casualty rates ran as high as 20 to 25 percent. Cities were razed. Thousands of prisoners of war starved to death. And many were simply shot and left to die on the roadside.

In The American Civil War, John Keegan pays close attention to what can only be described as the sensual elements of horror: delineating how hundreds of thousands of men living in the Gilded Age—despite trying to put the memories of the war to bed—could never forget the horrors of dismembered bodies, decapitations, and the files of corpses ranged so close together in roadways or trenches that stepping on them was often unavoidable.

As a result, the Civil War is largely viewed in the minds of Americans in terms of the American experience. It was a war fought on U.S. soil, by U.S. citizens, for the future of the U.S. Yet the Civil War remains the only large-scale conflict ever fought between citizens of the same democratic state. What about its impact on the rest of the world? According to John Keegan, "In Europe, the military significance of the war, [though] it was the costliest of the nineteenth century, was largely ignored."

This is a point I suspect Don H. Doyle, a professor of history at the University of South Carolina, would profoundly disagree with. In fact, the underlying argument of his new book, The Cause of All Nations: An International History of the American Civil War, takes the opposite view entirely, arguing that, contrary to conventional historical wisdom, the conflict mattered a great deal to Europe and the world at large. Doyle’s well-researched, evenly balanced, if slightly over-optimistic narrative takes us through the trajectory of the intellectual and diplomatic international debate that continually evolved as each stage of the Civil War progressed.


Until now, this area of Civil War scholarship has largely been neglected. Doyle’s re-evaluation of the subject is an enormously important contribution to a story that cannot be forgotten, one that asks: Where does the American Civil War fit into a grander narrative about universal human freedom—and progressive Enlightenment values—in a global context during the 19th century?

Doyle turns his attention predominantly to the public debate taking place in Europe among prominent intellectuals of the day. These thinkers, writers, and journalists saw the Civil War as far more than just internal strife between the Confederacy and the Union. They viewed it instead as an epic showdown between democracy and aristocracy. It was a matter of free versus slave labor, where the winners would decide how the capitalist world would progress in tandem with modernity.

Before 1860 the United States had offered aspiring republicans around the globe a template for how a free, self-governing nation might live in peace and prosperity. America, according to Doyle, though far from perfect and with many flaws—not least because slavery was still legal in many states—thus became, in many European minds, a model country to aspire to when thinking about progressive ideas such as liberty, equality, and self-rule. And with the Civil War, the U.S. seemed to offer the rest of the world a literal battle over those values and rights.

The phrase "public diplomacy" may not have become an official term in the popular press until World War I. But it was during the Civil War that deliberate, state-sponsored programs first began attempting to influence the public mind abroad about American foreign policy.

Just one week after Abraham Lincoln assumed office on March 4, 1861, he sent his Secretary of State, William Seward, a memo suggesting how he might fill what they anticipated to be four key diplomatic posts. Britain and France were crucial: as the world’s two leading naval powers, both were heavily dependent on cotton from the South for their textile industries. Spain, despite being a feeble power, had colonies in Cuba and Puerto Rico, and thus remained a dangerous potential ally of the South. Mexico, meanwhile, remained crucial because of its seaports on the Gulf.

Seward’s initial foreign policy message to the rest of the world contained a firm warning: Any gesture of support to the South that could potentially weaken the Union’s position would be considered an act of war.

The Union won the Civil War, Doyle explains, by executing both soft and hard elements of diplomacy with finesse and brilliant strategic thinking.

Doyle’s greatest asset, as both a historian and a writer, is his ability to tell this story patiently, with color, verve, and flair, while weighing in with his own expertise and commentary at crucial points in the narrative. He explains how the American Civil War is often viewed as a military contest decided by key battles; but propaganda and diplomacy, he argues, were just as important as bombs and bullets in determining which side emerged victorious.

For the first crucial months of the conflict, the Confederacy was able to set the terms of the debate by emphasizing its desire for national self-determination. Thus free trade, and not slavery, became the narrative with which the South would attempt to legitimize its cause, desperately hoping to win the hearts and minds of the chattering classes back in Europe.

The conflict, they told the wider world, was about the industrial North pushing protective tariffs while the agrarian South wanted free trade with the Old World. To begin with, this seemed like a convincing argument, and for a time it looked to most observers as though the South would, in due course, win legal legitimacy as a respected nation of the world.

But both sides, Doyle reminds us, began the conflict denying that slavery was the fundamental issue at stake.

While Abraham Lincoln, the U.S. president and commander in chief of the Union army, may always have held a deep antipathy to slavery, abhorring it on a personal level, it was not convenient for him to express those opinions in public when the war broke out.

And so in his first inaugural address, on March 4, 1861, Lincoln stated that he had no intention of interfering with slavery in those states where it already existed. Such a confusing moral position from the American president left many foreign intellectuals and thinkers—who feature prominently in this book—with a number of key questions. They began to ask: Was this simply a civil war over a small domestic dispute about tariffs and territory? Or was there, behind the diplomatic quarrelling and posturing, a noble issue at stake that really did concern the whole of humanity?

However, the South’s fundamental position on slavery had already been set in stone when Alexander Stephens, the vice president of the new nation, addressed the issue in his famous Cornerstone Speech in Savannah, Georgia, on March 21, 1861. There he openly declared that slavery was fundamental to the Confederacy’s ideology and economic position.

During the early stages of the war, Lincoln was careful to eschew any passionate pleas about human freedom. Instead, he concentrated on ideas such as universal law, the Constitution, and the power of the Union.

But if thousands of soldiers marched to Washington in the spring of 1861 to save the Union, what exactly did the concept they were fighting for mean to each of them, whether collectively or on an individual basis?

Hugh Brogan, in The Penguin History of the USA, claims that "this crucial question is seldom asked by American historians, and never answered satisfactorily."

Attempting to develop a feasible answer to this complex but extremely necessary question is one of the stronger characteristics of Doyle’s narrative.

And, as his thesis continually points out, the more intriguing answers actually came from foreigners, many of whom had never set foot in America themselves.

Karl Marx, who was living as an exile in London at the time, wrote that "the struggle between the South and North is one concerned with the system of slavery and the system of free labor. It can only be ended by the victory of one system over the other."

The French intellectual Agénor de Gasparin, meanwhile, was the first notable European to publicly declare that, whatever Americans proclaimed about the Civil War, at its heart lay the greatest moral issue of the 19th century: slavery.

Other prominent pro-Union voices from abroad included John Bright, a British Quaker reformer, and Édouard de Laboulaye, an outspoken French republican.

For these writers and thinkers—whose wide-ranging political opinions ran from far-left radical utopian thinking to a more centrist worldview that respected constitutional monarchies—America, and indeed the Civil War, embodied something greater than just a geographical landscape or a territorial squabble. It was an opportunity, Doyle argues, to declare prominently, in an age of revolution, that democracy and the rule of law were concepts worth fighting and dying for.

Eventually, in the summer of 1862, partly due to pressure from a diverse range of liberal foreigners who expected America to fight a war of liberty, Lincoln concluded that he must act against slavery to legitimize the Union cause. But it would be as commander in chief of the Union, and not as chief executive, Doyle points out, that Lincoln would proclaim emancipation.

Even Marx referred to Lincoln’s September emancipation decree as "the most important document in American history since the establishment of the Union."

Doyle concludes his thesis with great flair and vigor in his penultimate chapter, giving the reader a number of excellent examples of how Lincoln cleverly used the written word to his advantage by speaking about the Civil War in universal terms.

This enabled the American president to frame the war not just as a showdown between the Union and the Confederacy, but as a trial of democracy that had immense consequence for the world’s future.

In his Gettysburg Address, delivered in November 1863, Lincoln used the simple but extremely effective phrase "any nation so conceived," thereby framing America’s war as a matter not just for his fellow citizens but for the whole of mankind.

Lincoln, like Winston Churchill in Britain during the Second World War, had an exceptional ability to craft speeches that fused his political rhetoric with a style of literary prose that was at once charming, inspiring, heroic, and noble. But we also need to be careful to distinguish between the emotive connotations a politician’s words carry and their actual significance in the world of Realpolitik. The old cliché that actions speak louder than words seems the appropriate phrase to bear in mind here.

Indeed, it is true that the Union’s triumph in the war sent an optimistic message to all radical reformers and keen democrats on both sides of the Atlantic at the time. Had the Confederacy triumphed, it might have meant, as Doyle suggests, a new birth of slavery, possibly throughout the Americas.

But despite Doyle’s book ending on an optimistic high, readers should be warned to treat his narrative with just a slight dose of skepticism.

Anyone looking for a more conclusive analysis of how Lincoln’s emancipation act actually played out for blacks in the United States after the war ended will really need to look elsewhere for answers.

Doyle does briefly mention, at the start of the book, that the Civil Rights movement of the 1960s follows a direct trajectory from Civil War politics. But not much else follows the single sentence he dedicates to the subject. His failure to explore this in any detail left me feeling slightly short-changed and disappointed, particularly considering how firmly the author cements his argument up to this point in the book.

Moreover, it’s common knowledge that blacks obtained far less economic power after emancipation, in the Reconstruction period, than Lincoln had initially foreseen.

As John Keegan—who has a slightly less optimistic view on the Reconstruction period than Doyle—correctly points out in his book The American Civil War:

The South had been beaten but had not been fundamentally changed. Anti-black feeling was a universal emotion and state localism was more powerful than loyalty to the Union. Almost none of the former Confederate States were under the government of men who accepted Congress’s desire for equality and the untrammelled rule of law.

Most readers won’t need reminding just how horrifically race relations played out for blacks in the United States in the 90-odd years from the Reconstruction era to the Civil Rights movement; Jim Crow laws were simply a way of life in Southern states. And a subtle, government-legislated system of race-based residential zoning ensured that American cities remained firmly drawn along lines of color, especially in places like Chicago, New York, Miami, and Los Angeles.

If Doyle’s book suffers from one flaw, it is this: his thesis is restricted by his own myopic and overly optimistic view of American history.

For example, he claims that:

In the mid-nineteenth century, it appeared to many that the world was moving away from democracy and equality toward repressive government and the expansion of slavery. Far from being pushed off the world’s stage by human progress, slavery, aristocratic rule and imperialism seemed to be finding a new life and aggressive new defenders.

Doyle seems to be suggesting here that in the aftermath of the American Civil War a political culture emerged in which all this changed, with democratic values spreading across the Atlantic to the imperial powers of the Old World.

Clearly, though, this was not the case. Imperialism was alive and kicking throughout the Western world for the duration of the 19th century, and it got progressively worse. Any suggestion that the outcome of the American Civil War abated this trend seems to me slightly naïve, to say the least.

At the Berlin Conference of 1884–85, the African continent was carved up piecemeal amongst the imperial powers of Europe. If this isn’t a sign that imperialism was on the rise in the Western world, I really don’t know what is.

It’s also worth paying close attention to the common mythology of American democracy, versus the kind of government that was actually envisioned when the republic was founded.

America’s Civil War, Doyle tells us in the introductory chapter, "lies at the heart of the story Americans tell themselves about themselves." And he concludes his book with a rather simplistic narrative that claims the conflict "shook the Atlantic world and decided the fate of slavery and democracy for the vast future that lay ahead."

But if we want to comprehend American history in all of its complexity, we really do need to steer clear of this feel-good narrative and take a more critical approach.

In his book The Democracy Project, the American anthropologist and radical thinker David Graeber attempts to dissect and analyze how the myth of democracy has maintained political hegemony in the United States for over two centuries.

Graeber claims that neither the Declaration of Independence nor the American Constitution embodies the democratic values that popular political discourse still leads us to believe they do. In fact, the model for the Constitution, says Graeber, was an aristocratic form of government dating back to antiquity: the Roman Republic.

The Founding Fathers of the United States were very clear about what they were trying to achieve when they founded a republic: a democratic element of government combined with aristocratic and monarchical principles, in which the president functions as a monarch and the Senate as the aristocracy.

Without understanding these fundamental principles on which the United States was founded, there is a danger of getting swept up into a tornado of American history that blinds and distorts. Scholars such as Doyle tend to get caught up in this without even consciously realizing it.

It would be churlish, incorrect, and naïve to deny the importance of the main argument Doyle presents here, for the most part with great clarity, precision, and skill: just what a Union victory in the Civil War meant for the future of democracy, in an international context, in the middle of the 19th century.

But unless we attempt to figure out exactly what democracy entails—does it simply mean freedom for a privileged, white, property-owning elite?—then continually celebrating its cause may be a futile and self-defeating task.

In The Short American Century, an excellent collection of essays published three years ago—essays that are, for the most part, critical rather than celebratory of American democracy over the 20th century—Andrew J. Bacevich, the book’s editor, argues in the opening pages that the task of critically assessing a considerable swath of U.S. history should not be to prop up American self-esteem.

Bacevich reminds us that "before history can teach, it must challenge and even discomfit."

If Western historians aren’t able to face up to the grave prejudices contained in numerous epochs of American history, or to keep asking the question—just what has American democracy bequeathed to its citizens since the foundation of the United States?—then grand metaphoric and poetic descriptions of a shining city resting upon a hill of divine global exceptionalism will remain nothing more than empty, mawkish, sentimental, and jingoistic drivel.



