Friday, March 30, 2007

Anna Nicole Smith
by William Carl Ferleman
At this peculiar point in history there’s no shortage of faith in American culture. Currently, the United States is one of the most religious nations in the world. Sam Harris, in his Letter to a Christian Nation, writes, “44 percent of the American population is convinced that Jesus will return to judge the living and the dead sometime in the next 50 years”.
Of course, there are alternative representations of American culture to consider: hamburgers, apple pie, baseball, and blunt, blonde, buxom bombshells—like Anna Nicole Smith, who recently died, dispirited and isolated, in a South Florida hotel room. Her prettified face became ubiquitous in American TV news, which reported the death of a tragic beauty, the death of a tabloid star, the death of a devoted mother, and asked a series of pressing questions: Was she plastered when she died? How did she die? Who fathered her young daughter? Who will acquire the insuperable inheritance money? What role did her attorney and assumed lover Howard K. Stern play? How did her son, Danny, die? Does anyone really care? Did anyone know Smith in anything but the proverbially biblical sense?
In the days after Smith’s death, the media flashed a particular picture of her repeatedly: She wore dark, formal attire and had a somber, determined look on her face and a not-so-modest cross around her neck. In death, Smith had seemingly become dignified. Undoubtedly, Smith’s death was not the most consequential bit of news; the media had more vital topics to broach, most notably the ceaseless war in Iraq, which has now claimed more than 3,000 American lives and those of uncounted Iraqi civilians. Also, to provide a sense of perspective, the death around the same time of journalist Molly Ivins, another Texan, received far less media coverage, but then, Ivins teased verbally rather than physically, and that’s clearly not hot.
What did Smith accomplish to warrant such attention? The media treated Smith as if she were some prominent stateswoman or maverick intellectual—or even a celebrity in the same hemisphere as Marilyn Monroe, the star-crossed sex symbol whom she desperately emulated. But Smith was hardly Monroe—she was hardly even Jenna Jameson. So how could she garner so much postmortem attention?
While much about Smith’s celebrity value seems self-evident, an aura of ambiguity surrounds her true appeal to the public. Smith first gained notoriety in the US by being named Playboy Playmate of the Year in 1993. Thanks to Hugh Hefner, she was particularly celebrated for her keen, native ability to make love to the camera—one of the few traits she did share with Norma Jean. She subsequently became a model for Guess jeans and starred in a few incontestably irrelevant films—Naked Gun 33 1/3: The Final Insult, for instance.
Smith’s tempestuous, controversial personal life, however, quickly began to take the spotlight over her desultory career. While in her mid-20s she married octogenarian billionaire J. Howard Marshall, a Texas oil baron. The media, either reflecting or pandering to the public, would soon label her a rapacious gold digger. Moreover, the tabloids reported her so-termed indulgent, hedonistic lifestyle, her drug and alcohol use and overdoses, and the fluctuations in her already “voluptuous” figure. At this point, Smith’s career stagnated, if it wasn’t over completely: She remained famous simply because she already was famous.
In 2002 Smith starred in her own reality show, The Anna Nicole Show, presumably in a last-ditch attempt to revive her career. If she was hoping to reassert control over her public persona, a reality show may not have been the best decision: The show made her appear a vain, ditsy, childish, overweight, washed-up model, and the public found no reason to doubt that this was indeed her true character in life. Until her death, the media drummed up its own inflated, bizarre version of Smith, which the public seemed entirely content with. She became the entire world’s ecstatic in-joke. Cintra Wilson, writing in Salon, points out that the media perceived Smith as “their very own generational whipping blonde” (“Anna Nicole,” Salon.com). Smith became more notorious than famous.
While Smith lived, she came across as a type of Eve figure, an alluring female transgressor, a Mary Magdalene, a woman with “loose morals”. The public, always in need of real people to associate with mythical archetypes, was content to preserve her celebrity for that reason. After Smith passed away, the media seized the opportunity to dress her up as a dignified woman in order to regale the public once more with the lavish ritual of desacralizing and humiliating her. The media felt compelled to reflect the public’s harsh pity and concentrated abomination for its own disparaging idea of Smith: gold digger. Smith’s image was manipulated to allow for public expiation and amusement. This paradoxical blend of delight and spite had to be refashioned in the media; any reminder of the cruel treatment Smith had received first had to be eliminated.
Within a week of Smith’s death, several articles masquerading as tributary and celebratory opinion pieces were published, but these articles only ostensibly glamorized and idealized Smith: the true aim was to gorge like a vulture on the beautiful remains of Smith. Philip Kennicott’s Washington Post article refers to Smith unashamedly as “poor Anna” and “pathetic creature”. He lionizes Smith, casting her as our era’s last invaluable courtesan. Cintra Wilson’s Salon article repeats the perfunctory connection between Smith and Marilyn Monroe, though Smith never really attained that level of cultural importance. Nevertheless, Wilson suggests that Smith’s cultural resonance was possibly even greater than Monroe’s, calling Smith “not so much a candle in the wind as a bonfire in a hailstorm.” This faux glorification of Smith seems closely linked to America’s palpable religiosity. American religious culture is unique in the sense that it closely allies itself with celebrity and media culture. Americans bring a pronounced religious fervor to seemingly secular enterprises such as music and film, and the religiosity of American culture plays out as mass media: The TV becomes something comparable to a church or pulpit. The media fascination with Anna Nicole Smith can be understood in this context.
For example, the tasteless decision to play and replay a video of Smith—dolled up like a clown—while she was disoriented and pregnant functions as a moralistic, judgmental sermon: Do not take drugs while pregnant; this woman is contemptible. Celebrities like Smith spawn believers, and believers are constantly on the watch for new goddesses or gods. The people need someone to love and to loathe, to praise and to denounce. But above all, it appears that the people need someone to assess for precious moral reasons, all while being perversely stimulated at the same time. Any appeal Smith may have had to the religiously inclined American public largely stemmed from her commonness, which her rare beauty never quite extinguished. A small-town Texan at heart, she married young, stripped for an income, and worked at one time for Wal-Mart, that truly emblematic working-class fixture. Her sudden rise to popularity had overtones of the Horatio Alger rags-to-riches myth, and capitalism’s promises of upward mobility. But the faithful, charitable, and hopeful public finally denounced and condemned her. The public’s moral values and religiosity would permit a beautiful, humble, fallen woman to bestride the world, but not an incoherent gold digger.
Judged as immoral, Smith soon was regarded as inhuman. No longer a common woman who found success, she was a vain, greedy woman; no longer cute and teasing, she was a fat, bumbling personification of mortal absurdity. Smith became the unrepentant, unregenerate Mary Magdalene in the public’s psyche. Smith’s own morally fanatical mother denounced her in the early ‘90s, oddly forecasting the public’s later disdain.
Anna Nicole Smith’s death may have led to a sense of guilt en masse. Consequently, the media raised the final illusion of all: that Smith was not really repudiated by the public for her ludicrous, infamous waywardness, but rather that she was noble and tragic, a pathetic creature, a devoted mother, a tragic beauty, America’s Rose, Shakespeare’s Ophelia, Marilyn Monroe, a clown, the last courtesan heroine, or any other crude, false conception of Smith. Did not Anna become Maria?
But what does it matter who Smith was? Her birth name was Vickie Lynn Hogan. The beauty that made her famous and cherished eventually led to her final damnation in the fickle public’s mind. In death, she was less a human being mourned than a media myth still being manipulated, appearing humble, dignified, glamorous and contemptible yet again, as we exercised our emotions accordingly. It was more worthwhile to sustain our numerous fantasies about her than to extend to her the common courtesy we typically reserve for the dead. That’s why it might have mattered who Anna Nicole Smith really was—someone we’ll never know, except as a colossal, American sideshow.

Thursday, March 29, 2007

In 1984 it was Senator Gary Hart, in 1992 it was Ross Perot, in 2000 it was Ralph Nader: the "Dream Candidate," or "a new Daddy to solve all our vexing problems!"
Thompson is a rational adult, but the GOP will never accept him because he'd ruin their fantasies!
The GOP's Therapy Candidate
The trouble with Fred Thompson.
By John Dickerson
Posted Wednesday, March 28, 2007, at 6:58 PM ET
When Newt Gingrich wanted to dismiss Barack Obama, he did it in a phrase. Obama would make a great president, said the former House speaker, "if the country wants therapy." Like the claim that John Kerry "looks French" from the 2004 election, this quip works on many levels for GOP voters. It refuses to treat Obama seriously and paints his supporters as frail, emotional, and needy. It also reasserts a broader claim about the difference between the two parties: Republicans are adults focused on serious issues; Democrats engage in sentimental swooning that will get us all killed at night in our beds.
Given the self-image of conservatives, it's a little surprising, then, that so many are excited about Fred Thompson, a candidate whose chief qualification seems to be that he makes them feel good. The former Tennessee senator has less experience than all the other top GOP contenders and yet he is being talked about as the savior for a party that is unhappy with its current crop of candidates and its chances in 2008. Thompson has not entered the race, but in television appearances two weeks ago, he hinted that he might.
This flirtation has ignited talk, Web sites, and a draft movement led by former Tennessee Sens. Howard Baker and Bill Frist. In a recent Gallup poll, Thompson shot into third place ahead of Mitt Romney; he's done the same in polls in the early caucus state of Iowa. Romney, who looks even more like an actor playing a politician, must be depressed that Thompson has so quickly overtaken him, since the former Massachusetts governor spent a great deal of time gaining experiences and building a résumé that might actually be useful for a president.
Thompson's chief appeal is emotional. Until now, many conservative Republicans have had to wince when they thought of their plausible presidential choices. Giuliani is too liberal, McCain is too unpredictable and too well-liked by the media, and Romney seems like a flip-flopper on the issues they care about. The possibility of a Thompson candidacy excites the Republicans I talk to. He's an "outsider"—having left Washington for Law and Order before the Beltway rot set in. He's a good communicator, which means he can sell conservative policies and has the star power to battle Hillary or Obama. Though he hasn't been through the press-vetting process, his voting record and talk-radio performances suggest he holds conservative enough positions. Oh, and he can raise Hollywood cash.
Authenticity and star power conjure visions of Ronald Reagan. But Reagan had genuine experience running something—namely the state of California. Thompson's résumé is thin—an undistinguished eight years in the Senate, an acting career, and a youthful turn as co-counsel in the Watergate hearings. Supporters try to pump up his résumé by boasting that he shepherded John Roberts through his confirmation hearings—but that was the legal equivalent of walking Michael Jordan onto the court.
What's most puzzling is that Thompson is liked by Republicans who say the war on terror is the single most important issue facing the country. They claim they understand the reality of the threats we face and that Democrats don't. And yet Thompson's security résumé is puny compared to his potential rivals. He has no executive experience and the wars he's fought have all been in the movies. Sure, you can argue that experience is overrated—after all, Dick Cheney and Donald Rumsfeld had plenty of it. The problem is that Thompson's supporters like Cheney and Rummy.
The myth behind the Thompson quasi-candidacy is a dangerous one that bedevils both parties: If we just get a better communicator, people will love our policies. But once Thompson enters the race, he will have to either embrace or distance himself from GOP policies, which will either ruin his chances in the general election or hurt him with his conservative supporters. In short, he'll become just like any other candidate—something he might not like after such a big buildup. Thompson also has a reputation for not enjoying the grind of campaigning.
The blows are already coming. Conservative radio host James Dobson has announced he doesn't think Thompson is sufficiently Christian because he doesn't speak openly and loudly about his faith. Dobson prefers Newt Gingrich, who went on Dobson's show and confessed to a series of moral lapses in an exchange that sounded a lot like therapy.

John Dickerson is Slate's chief political correspondent and author of On Her Trail. He can be reached at slatepolitics@gmail.com.

Wednesday, March 28, 2007


David Hicks: A young man in search of a cause
By Raymond Bonner
LONDON: David Hicks's journey through the post-Sept. 11 military legal labyrinth has sometimes seemed as uncertain and tortuous as his life's journey, from high school dropout and kangaroo skinner to a young man in search of a war and a cause.
Before he was captured in Afghanistan in December 2001, Hicks, then 25, could not seem to figure out what to do with his life.
How Hicks ended up with the Taliban is a mystery to his father, Terry, a printer, and his stepmother, Beverley, a house cleaner. "He was always in search of something, but God knows what," Terry Hicks said in an interview four years ago.
After Hicks was sent to the military prison at Guantánamo Bay, Cuba, the Bush administration had trouble figuring out what to do with him.
It pleaded with Australia, his home country, to take him, but Australia said he had not violated any of its laws and would be set free; the United States did not want that.
Charges were filed, then thrown out after a Supreme Court decision in another case; new charges were filed, then reduced. Hicks and his lawyers rejected offers to plead guilty in exchange for a sentence of 10 years, to be served in Australia.
Eventually, domestic politics in Australia played a role in Hicks's being brought before the military tribunal Monday, where he pleaded guilty.
For nearly five years, the Australian government, under Prime Minister John Howard, was generally content that Hicks remain at Guantánamo. But last year, public opinion in Australia shifted sharply.
As part of the plea arrangement, Hicks is expected to be allowed to serve any additional prison sentence in Australia, said an Australian official familiar with the deal.
But several Australian lawyers said that on his return home he was likely to challenge his plea on the grounds that it was not voluntary, or that it was invalid because "material support for terrorism" was not a crime in 2001.
David Hicks was expelled from school when he was 14, heavy into drinking and drugs. He held various jobs, including skinning kangaroos at a meatpacking factory, and though he was only 5-foot-5, or 1.65 meters, and weighed less than 115 pounds, or 52 kilograms, he played Australian Rules Football.
He headed to the vast Australian Outback. For weeks at a time, he would be alone with his horse and a bedroll, which may have given him the mental stamina to survive isolation at Guantánamo, his father once said.
He met an aboriginal woman and settled down for a while; they had a boy and girl before David was 20.
After five years they separated, and David hit the road again.
First he went to Japan, where he trained horses. One day, in late 1998, he called his parents and said he was going to join the Kosovo Liberation Army. He said he had learned about the war in Bosnia and Kosovo from watching Japanese television. He found out how to join the guerrilla army through the Internet.
In joining the Kosovo Liberation Army, Hicks was on the side of the United States and NATO, against Slobodan Milosevic, the president of Serbia.
The war ended before he saw combat.
Back home, in Adelaide, he tried to join the Australian Army, but was rejected. He attended Bible study classes at an Assemblies of God church, but did not find whatever spiritual meaning he was searching for. He turned to Islam.
Now he has abandoned that faith, according to an affidavit he filed in legal proceedings here to obtain British citizenship.
In November 1999, Hicks left for Pakistan, wearing lime green baggy trousers, a long, loose-fitting shirt, a white knit cap and sandals. He had two goals - to learn more about the Koran and to travel the Silk Road on horseback - or so he told his parents.
In Pakistan, he linked up with Lashkar-e-Taiba, a guerrilla organization founded, financed and trained by the Pakistani intelligence service to fight in Kashmir. (It has since been outlawed by the Pakistani government).
The guerrilla fighters trained at Al Qaeda camps in Afghanistan, and Lashkar-e-Taiba arranged for Hicks to go there, according to the U.S. charges. The charges say Hicks attended various training camps and pledged fealty to Osama bin Laden.
Following Hicks's plea Monday, a senator from his home state, South Australia, Natasha Stott Despoja, said he would return "as a guilty man who has not had a fair trial."
"The Australian public will see through this process," Stott Despoja added.

Saturday, March 24, 2007

March 25, 2007
First Chapter ‘Rumsfeld’
By ANDREW COCKBURN
Just after 9:37 on the morning of September 11, 2001, Officer Aubrey Davis of the Pentagon police was standing outside Donald Rumsfeld's office on the third floor of the Pentagon's E Ring. Inside, Rumsfeld, though aware that the World Trade Center towers in New York had already been hit, was proceeding with his regularly scheduled CIA briefing. Davis, on the other hand, had concluded from watching the TV news that the country was under attack and the Pentagon might be a target. Assigned to the defense secretary's personal bodyguard, he had come on his own initiative, ready to move Rumsfeld to a better-protected location.
"There was an incredibly loud 'boom,'" says Davis, raising his voice slightly on the last word. Fifteen or twenty seconds later, just as his radio crackled with a message, the door opened and Rumsfeld walked out, looking composed and wearing the jacket he normally discarded while in his office. "Sir," said Davis, quoting what he had heard on his radio, "we're getting a report that an airplane has hit the Mall."
"The Mall?" replied Rumsfeld calmly. Without further word, the secretary of defense turned on his heel and set off at a sharp pace toward the so-called Mall section of the Pentagon. Down the hall, someone ran out of a VIP dining room screaming, "They're bombing the building, they're bombing the building." Davis frantically waved for colleagues to catch up as the stocky, 5' 8" defense secretary marched ahead of his lanky escort.
The group, which grew to include several more police officers as well as Rumsfeld's personal communications aide, turned into the wide passageway running along the Mall face of the building. Thick crowds of Pentagon staff, in and out of uniform, were hurrying past in the opposite direction. They could smell smoke, but there was no sign of any damage here. "I thought you said the Mall," said Rumsfeld.
"Sir," responded Davis, holding his radio, "now we're hearing it's by the heliport." This meant the next side of the building farther along from the Mall. Rumsfeld set off again without a word, ignoring Davis's protestations that they should turn back. "At the end of the Mall corridor, we dropped down a stairway to the second floor, and then a little farther we dropped down to the first. It was dark and there was a lot of smoke. Then we saw daylight through a door that was hanging open." Groping through the darkness to the door, the group emerged outside. In front of them, just thirty yards away, roared a "wall of flame."
"There were the flames, and bits of metal all around," Davis remembers, as well as injured people. He noticed the white legs of a woman lying on the ground, then realized with a shock that she was African-American, horribly burned. "The secretary picked up one of the pieces of metal. I was telling him he shouldn't be interfering with a crime scene when he looked at some inscription on it and said, 'American Airlines.' Then someone shouted, 'Help, over here,' and we ran over and helped push an injured person on a gurney over to the road."
While the secretary of defense was pushing a gurney, Davis's radio was crackling with frantic pleas from his control room regarding Rumsfeld's whereabouts. "It was 'Dr. Cambone [Rumsfeld's closest aide] is asking, Dr. Cambone wants to find the secretary.' I kept saying, 'We've got him,' but the system was overloaded, everyone on the frequency was talking, everything jumbled, so I couldn't get through and they went on asking."
An emergency worker approached, saying that equipment and medical supplies were needed. "Tell this man what you need," said Rumsfeld, gesturing to the communications aide, apparently oblivious of the fact that there were no communications.
Once they had pushed the wounded man on the gurney over to the road, the bodyguard was finally able to lead his charge back inside the building. "I'd say we were gone fifteen minutes, max," he told me in his account of what happened that morning. Given the time it took to make their way down those Pentagon corridors - each side of the enormous building is the length of three football fields - Rumsfeld was actually at the crash site for only a fraction of that period.
Yet those few minutes made Rumsfeld famous, changed him from a half-forgotten twentieth-century political figure to America's twenty-first-century warlord. On a day when the president was intermittently visible, only Rumsfeld, along with New York mayor Rudy Giuliani, gave the country an image of decisive, courageous leadership. According to his spokesman, the sixty-nine-year-old defense secretary's "first instinct was to go out through the building to the crash site and help." Over time, the legend grew. One of the staffers in the office later assured me that Rumsfeld had "torn his shirt into strips" to make bandages for the wounded.
As we shall see, Rumsfeld was first and foremost a politician, though not always a successful one. The weeks before the attacks had been one of the unsuccessful phases, with rumors spreading in Washington that he would shortly be removed from his post. Only the day before he had lashed out at the Pentagon workforce, denouncing the assembled soldiers and civilians as "a threat, a serious threat, to the security of the United States of America." Now, his instinctive dash to the crash site could inspire loyalty and support among those he had derided. An official in the Office of Plans, Analysis and Evaluation, whose office was close to Rumsfeld's, saw him walking swiftly down the hall in the first minutes after the crash. Later, when he heard where Rumsfeld had been, he thought, "very astute, politically."
Hatred and resentment among those in his wake had been a regular feature of Rumsfeld's career, and 9/11 proved no exception. I first realized this while discussing that day with a senior White House official who had been in the Situation Room, desperately trying to coordinate a response to the bewildering disaster of the attacks. As he reminisced, I mentioned that despite the legend, it didn't seem as if Rumsfeld could have had much time for rescue work that morning.
"What was Rumsfeld doing on 9/11?" said the former official with sudden anger. "He deserted his post. He disappeared. The country was under attack. Where was the guy who controls America's defense? Out of touch!"
"He wasn't gone for very long," I observed mildly.
My friend waved his coffee mug in emphatic rebuttal. "How long does it take for something bad to happen? No one knew what was happening. What if this had been the opening shot of a coordinated attack by a hostile power? Outrageous, to abandon your responsibilities and go off and do what you don't need to be doing, grandstanding."
This conversation took place in March 2006, just before it became commonplace in Washington to speak disrespectfully of Rumsfeld, at least in anything louder than a whisper, so I was taken aback by the vehemence of his response. A minute later, this sober bureaucrat burst forth with renewed passion. "He's a megalomaniac who has to be in control at all times," he fumed. "He is the worst secretary of defense there has ever been, worse than [Robert] McNamara. He is playing a major part in destroying this presidency."
Clearly, Rumsfeld was reviled in certain parts of the Bush administration. Yet such antagonisms occur in every presidency. But what did it mean, I wondered, that Rumsfeld had "deserted his post"? Though most people assume that the chain of command runs from the president to the vice president, the cold war bequeathed a significant constitutional readjustment. In an age when an enemy attack might allow only a few minutes for detection and reaction, control of American military power became vested in the National Command Authority, which consists of the president and the secretary of defense. Collectively, the NCA is the ultimate source of military orders, uniquely empowered, among other things, to order the use of nuclear weapons. In time of war, therefore, Rumsfeld was effectively the president's partner, the direct link to the fighting forces, and all orders had to go through him.
Such orders were supposed to be transmitted from a two-story complex at the end of a narrow passageway across the corridor from Rumsfeld's office. This was the National Military Command Center, staffed twenty-four hours a day with as many as two hundred military officers and civilian staff and equipped with arrays of communications systems, including multiple screens for video conferences. "All very Star Trek," recalls an official who formerly served there.
This was the operational center for any and every crisis, from nuclear war to hijacked airliners. The command center organized conference calls enabling key officials around the government to communicate and coordinate. At 9:39 A.M. that morning, just over a minute after the Pentagon was hit, the navy captain in charge of the command center announced on the "air threat conference call" that had just begun that "an air attack on North America may be in progress," and asked that the secretary of defense come to the center. A few minutes later, the secretary's office reported back that he was nowhere to be found. The chain of command was broken.
In fact, Rumsfeld was at the crash site, though eventually it occurred to him that he might perhaps be in the wrong place: "... at some moment I decided I should be in here," he told Parade magazine in his office a month later, "figuring out what to do, because your brain begins to connect things."
Rumsfeld was back in the building by ten o'clock, but despite the anxious pleas from the military, he did not go to the command center. Instead, he headed for his office, where he spoke to President Bush, though afterward neither man could recall what they discussed. Next, in his words, he moved to "a room about 30 yards away here in this building ... that's sealable." That would have been the Executive Support Center, conference rooms "secure" against electronic eavesdropping right next door to the military command center.
Waiting here was a small group, distinguished above all else by their personal loyalty to Rumsfeld. One was Stephen Cambone, the aide who had been inquiring so anxiously for his whereabouts minutes before. Of all in Rumsfeld's court, Cambone cast the longest shadow, energetically accumulating power thanks to the protective embrace of his mentor and his acknowledged intelligence. Also there was Rumsfeld's personal chief of staff, Larry Di Rita, a former naval officer who had moved into Rumsfeld's orbit from the right-wing staff of Senator Kay Bailey Hutchison. Di Rita's defining characteristic was his devotion to the boss. (An Olympic-standard squash player, he would still dutifully lose to Rumsfeld.) The third person in the room was his spokesperson, Victoria (Torie) Clarke, a consummate public relations professional, artful enough to promote Rumsfeld - who was so secretive that he would refuse to tell his own deputy what had happened in White House meetings - as a paragon of openness and transparency.
After a brief discussion with this select group, Rumsfeld finally made his way to the military command center. It was almost 10:30. Only then, as he later explained to the 9/11 Commission, did he begin to gain "situational awareness" of what was going on. After a brief interval he spoke with Vice President Dick Cheney, who was in a bunker under the White House and for the previous forty minutes had been issuing orders to shoot down suspicious airliners.
"There's been at least three instances here where we've had reports of aircraft approaching Washington - a couple were confirmed hijack," Cheney told Rumsfeld in his favored clipped, macho style. "And pursuant to the President's instructions I gave authorization for them to be taken out."
Actually, the presidential authorization cited by Cheney consisted, at best, of the words "You bet" from Bush as Air Force One streaked out of Orlando, Florida. In any event, it was Rumsfeld, not Cheney, who was legally in the chain of command and authorized to give such an order.
"So we've got a couple of [military] aircraft up there that have those instructions at this present time?" asked Rumsfeld, still catching up.
"That is correct," replied Cheney. "And it's my understanding they've already taken a couple of aircraft out."
Together, these two men dominated the U.S. government for six years. They must have had thousands of conversations, but this snatch of dialogue, as released by the 9/11 Commission, is the only known publicly available sample of a private conversation between them. Though brief, it is instructive. Not for the last time, they were reacting to information that was wholly inaccurate - there were no more hijacked airliners in the sky. One of the planes Cheney had ordered "taken out" was United Flight 93, which crashed in Pennsylvania ten minutes before he issued the command. The other was a low-flying medevac helicopter on its way to the Pentagon. Neither man seemed concerned that the president was not involved. Cheney was usurping his authority, since he was not in the chain of command. Lacking any experience in the military, the vice president may not have realized that military commanders like precise orders, and will not proceed without them, which was why the fighter commanders chose not to pass on his aggressive instructions to the pilots.
Rumsfeld, once he had finally settled into his place at the command center, got to work on the "rules of engagement" for the fighter pilots. This was an irrelevant exercise, for he did not complete and issue them until 1:00 P.M., hours after the last hijacker had died.
Later, when asked why he had taken no part in military operations that morning, Rumsfeld blithely insisted that it was not his job. "The Department of Defense," he told the 9/11 Commission in 2004, "did not have responsibility for the borders. It did not have responsibility for the airports ... a civilian aircraft was a law enforcement matter to be handled by law enforcement authorities and aviation authorities." Expanding on this theme, he explained that the Defense Department's only responsibility when a civilian plane was hijacked was to "send up an aircraft and monitor the flight, but certainly in a hijack situation [the military] did not have authority to shoot down a plane that was being hijacked." This statement was flat out untrue, but none of the commissioners dared call him to account.
Having absented himself from military involvement while the al Qaeda attacks were actually in progress on the morning of 9/11, Rumsfeld began the afternoon with the first fateful steps toward the war that would secure his historical reputation. At 12:05 P.M., CIA director George Tenet called to report that just fifteen minutes after the Pentagon had been hit, the National Security Agency (NSA) had intercepted a phone call between a known associate of Osama bin Laden in Afghanistan and someone in the former Soviet Republic of Georgia. The bin Laden associate announced that he had heard "good news," and that another target was still to be hit (presumably the intended target of Flight 93). Tenet also reported that one of the hijackers on the Pentagon plane had been linked to someone involved in the suicide attack on the USS Cole in 2000. Here was clear confirmation that the millionaire Saudi leader of al Qaeda was behind that day's attacks.
Rumsfeld was having none of it. According to Cambone's cryptic notes, the secretary felt this intelligence was "'vague,' that it might not mean something, and that there was 'no good basis for hanging hat.'" So whatever the terrorists might be saying on the phone, the secretary of defense was reserving judgment. The moment was a textbook example of Rumsfeld's standard reaction to information that did not suit his preconceptions. It would recur in the years to come.
In a brief televised press conference at 6:40 that evening, in which Rumsfeld's calm demeanor much impressed viewers, veteran Reuters Pentagon correspondent Charlie Aldinger asked, "Mr. Secretary, did you have any inkling at all, in any way, that something of this nature and something of this scope might be planned?"
"Charlie," responded Rumsfeld quickly, "we don't discuss intelligence matters." The response appeared to reflect his tough-minded prudence in times of crisis. Yet in retrospect, it is easy to understand his reluctance to pursue the subject. Two months before, an intelligence report prepared for the National Security Council (NSC) had concluded "we believe that UBL [Usama bin Laden (sic)] will launch a significant terrorist attack against U.S. and/or Israeli interests in the coming weeks. The attack will be spectacular and designed to inflict mass casualties against U.S. facilities or interests. Attack preparations have been made. Attack will occur with little or no warning." . . .

Wednesday, March 21, 2007

The Beginning of a truly historic shift!
US EPISCOPALIANS Reject Reactionary African Church!
EPISCOPAL NEWS SERVICE
Bishops' 'Mind of the House' resolutions
Episcopal News Service — Posted: Tuesday, March 20, 2007
The following resolutions were passed by the House of Bishops March 20 during its annual Spring retreat meeting in Navasota, Texas.
Mind of the House of Bishops Resolution Addressed to the Executive Council of the Episcopal Church
Resolved, the House of Bishops affirms its desire that The Episcopal Church remain a part of the councils of the Anglican Communion; and
Resolved, the meaning of the Preamble to the Constitution of The Episcopal Church is determined solely by the General Convention of The Episcopal Church; and
Resolved, the House of Bishops believes the proposed Pastoral Scheme of the Dar es Salaam Communiqué of February 19, 2007 would be injurious to The Episcopal Church and urges that the Executive Council decline to participate in it; and
Resolved, the House of Bishops pledges itself to continue to work to find ways of meeting the pastoral concerns of the Primates that are compatible with our own polity and canons.
Adopted March 20, 2007
The House of Bishops, The Episcopal Church
Spring Meeting 2007, Camp Allen Conference Center, Navasota, Texas
To the Archbishop of Canterbury and the members of the Primates' Standing Committee:
We, the Bishops of The Episcopal Church, meeting in Camp Allen, Navasota, Texas, March 16-21, 2007, have considered the requests directed to us by the Primates of the Anglican Communion in the Communiqué dated February 19, 2007.
Although we are unable to accept the proposed Pastoral Scheme, we declare our passionate desire to remain in full constituent membership in both the Anglican Communion and the Episcopal Church.
We believe that there is an urgent need for us to meet face to face with the Archbishop of Canterbury and members of the Primates' Standing Committee, and we hereby request and urge that such a meeting be negotiated by the Presiding Bishop of The Episcopal Church and the Archbishop of Canterbury at the earliest possible opportunity.
We invite the Archbishop and members of the Primates' Standing Committee to join us at our expense for three days of prayer and conversation regarding these important matters.
Adopted March 20, 2007
The House of Bishops, The Episcopal Church
Spring Meeting 2007, Camp Allen Conference Center, Navasota, Texas
A Communication to The Episcopal Church from the March 2007 Meeting of the House of Bishops
We, the Bishops of The Episcopal Church, meeting at Camp Allen, Navasota, Texas, for our regular Spring Meeting, March 16-21, 2007, have received the Communiqué of February 19, 2007 from the Primates of the Anglican Communion meeting at Dar es Salaam, Tanzania. We have met together for prayer, reflection, conversation, and listening during these days and have had the Communiqué much on our minds and hearts, just as we know many in our Church and in other parts of the world have had us on their minds and hearts as we have taken counsel together. We are grateful for the prayers that have surrounded us.
We affirm once again the deep longing of our hearts for The Episcopal Church to continue as a part of the Anglican Communion. We have gone so far as to articulate our self-understanding and unceasing desire for relationships with other Anglicans by memorializing the principle in the Preamble of our Constitution. What is important to us is that The Episcopal Church is a constituent member of a family of Churches, all of whom share a common mother in the Church of England. That membership gives us the great privilege and unique opportunity of sharing in the family's work of alleviating human suffering in all parts of the world. For those of us who are members of The Episcopal Church, we are aware as never before that our Anglican Communion partners are vital to our very integrity as Christians and our wholeness. The witness of their faith, their generosity, their bravery, and their devotion teach us essential elements of gospel-based living that contribute to our conversion.
We would therefore meet any decision to exclude us from gatherings of all Anglican Churches with great sorrow, but our commitment to our membership in the Anglican Communion as a way to participate in the alleviation of suffering and restoration of God's creation would remain constant. We have no intention of choosing to withdraw from our commitments, our relationships, or our own recognition of our full communion with the See of Canterbury or any of the other constituent members of the Anglican Communion. Indeed, we will seek to live fully into, and deepen, our relationships with our brothers and sisters in the Communion through companion relationships, the networks of Anglican women, the Anglican Indigenous Network, the Francophone Network, our support for the Anglican Diocese of Cuba, our existing covenant commitments with other provinces and dioceses, including Liberia, Mexico, Central America, Brazil, and the Philippines, our work as The Episcopal Church in many countries around the world, especially in the Caribbean, Latin America, Europe, and Taiwan, and countless informal relationships for mission around the world.
Since our General Convention of 2003, we have responded in good faith to the requests we have received from our Anglican partners. We accepted the invitation of the Lambeth Commission to send individuals characteristic of the theological breadth of our Church to meet with it. We happily did so. Our Executive Council voluntarily acceded to the request of the Primates for our delegates not to attend the 2005 meeting of the Anglican Consultative Council in Nottingham. We took our place as listeners rather than participants as an expression of our love and respect for the sensibilities of our brothers and sisters in the Communion even when we believed we had been misunderstood. We accepted the invitation of the Primates to explain ourselves in a presentation to the same meeting of the Anglican Consultative Council. We did so with joy.
At the meeting of our House of Bishops at Camp Allen, Texas in March, 2004 we adopted a proposal called Delegated Episcopal Pastoral Oversight as a means for meeting the pastoral needs of those within our Church who disagreed with actions of the General Convention. Our plan received a favorable response in the Windsor Report. It was not accepted by the Primates. At our meeting in March 2005, we adopted a Covenant Statement as an interim response to the Windsor Report in an attempt to assure the rest of the Communion that we were taking them seriously and, at some significant cost, refused to consecrate any additional bishops whatsoever as a way that we could be true to our own convictions without running the risk of consecrating some that would offend our brothers and sisters. Our response was not accepted by the Primates. Our General Convention in 2006 struggled mightily and at great cost to many, not the least of whom are our gay and lesbian members, to respond favorably to the requests made of us in the Windsor Report and the Primates' Dromantine Communiqué of 2005. We received a favorable response from the Joint Standing Committee of the Anglican Consultative Council and the Primates, which found that our effort had substantially met the concerns of the Windsor Report with the need to clarify our position on the blessing of same sex relationships. Still, our efforts were not accepted by the Primates in the Dar es Salaam Communiqué.
Other Anglican bishops, indeed including some Primates, have violated our provincial boundaries and caused great suffering and contributed immeasurably to our difficulties in solving our problems and in attempting to communicate for ourselves with our Anglican brothers and sisters. We have been repeatedly assured that boundary violations are inappropriate under the most ancient authorities and should cease. The Lambeth Conferences of 1988 and 1998 did so. The Windsor Report did so. The Dromantine Communiqué did so. None of these assurances has been heeded. The Dar es Salaam Communiqué affirms the principle that boundary violations are impermissible, but then sets conditions for ending those violations, conditions that are simply impossible for us to meet without calling a special meeting of our General Convention.
It is incumbent upon us as disciples to do our best to follow Jesus in the increasing experience of the leading of the Holy Spirit. We fully understand that others in the Communion believe the same, but we do not believe that Jesus leads us to break our relationships. We proclaim the Gospel of what God has done and is doing in Christ, of the dignity of every human being, and of justice, compassion, and peace. We proclaim the Gospel that in Christ there is no Jew or Greek, no male or female, no slave or free. We proclaim the Gospel that in Christ all God's children, including women, are full and equal participants in the life of Christ's Church. We proclaim the Gospel that in Christ all God's children, including gay and lesbian persons, are full and equal participants in the life of Christ's Church. We proclaim the Gospel that stands against any violence, including violence done to women and children as well as those who are persecuted because of their differences, often in the name of God. The Dar es Salaam Communiqué is distressingly silent on this subject. And, contrary to the way the Anglican Communion Network and the American Anglican Council have represented us, we proclaim a Gospel that welcomes diversity of thought and encourages free and open theological debate as a way of seeking God's truth. If that means that others reject us and communion with us, as some have already done, we must with great regret and sorrow accept their decision.
With great hope that we will continue to be welcome in the councils of the family of Churches we know as the Anglican Communion, we believe that to participate in the Primates' Pastoral scheme would be injurious to The Episcopal Church for many reasons.
First, it violates our church law in that it would call for a delegation of primatial authority not permissible under our Canons and a compromise of our autonomy as a Church not permissible under our Constitution.
Second, it fundamentally changes the character of the Windsor process and the covenant design process in which we thought all the Anglican Churches were participating together.
Third, it violates our founding principles as The Episcopal Church following our own liberation from colonialism and the beginning of a life independent of the Church of England.
Fourth, it is a very serious departure from our English Reformation heritage. It abandons the generous orthodoxy of our Prayer Book tradition. It sacrifices the emancipation of the laity for the exclusive leadership of high-ranking Bishops. And, for the first time since our separation from the papacy in the 16th century, it replaces the local governance of the Church by its own people with the decisions of a distant and unaccountable group of prelates.
Most important of all it is spiritually unsound. The pastoral scheme encourages one of the worst tendencies of our Western culture, which is to break relationships when we find them difficult instead of doing the hard work necessary to repair them and be instruments of reconciliation. The real cultural phenomenon that threatens the spiritual life of our people, including marriage and family life, is the ease with which we choose to break our relationships and the vows that established them rather than seek the transformative power of the Gospel in them. We cannot accept what would be injurious to this Church and could well lead to its permanent division.
At the same time, we understand that the present situation requires intentional care for those within our Church who find themselves in conscientious disagreement with the actions of our General Convention. We pledge ourselves to continue to work with them toward a workable arrangement. In truth, the number of those who seek to divide our Church is small, and our Church is marked by encouraging signs of life and hope. The fact that we have among ourselves, and indeed encourage, a diversity of opinion on issues of sexuality should in no way be misunderstood to mean that we are divided, except among a very few, in our love for The Episcopal Church, the integrity of its identity, and the continuance of its life and ministry.
In anticipation of the traditional renewal of ordination vows in Holy Week we solemnly declare that "we do believe the Holy Scriptures of the Old and New Testaments to be the Word of God, and to contain all things necessary to salvation; and we do solemnly engage to conform to the doctrine, discipline, and worship of The Episcopal Church." (Book of Common Prayer, page 513)
With this affirmation both of our identity as a Church and our affection and commitment to the Anglican Communion, we find new hope that we can turn our attention to the essence of Christ's own mission in the world, to bring good news to the poor, to proclaim release to the captives and recovery of sight to the blind, to liberate the oppressed, and to proclaim the year of the Lord's favor (Luke 4:18-19). It is to that mission that we now determinedly turn.
Adopted March 20, 2007
The House of Bishops, The Episcopal Church
Spring Meeting 2007, Camp Allen Conference Center, Navasota, Texas

Sunday, March 11, 2007

Andrew Sullivan/Times of London
There’s a new nasty party out there
Republicans are reeling from Scooter Libby’s conviction, Iraq and the taint of corruption. Is it game over for the American right, asks Andrew Sullivan
Scandals come and go in politics. In second terms for presidents, they invariably happen. The last two two-term presidents had crippling scandals knock them sideways in their final years. Reagan had Iran-contra. Clinton had Lewinsky. They still managed to achieve things — especially Reagan. But what’s happening now in Washington feels both superficially less riveting and more damaging.
Think of it as a perfect storm of many, many scandals, meeting over the increasingly warm political water of the Iraq war and becoming something potentially more lethal. In many ways the entire future of American conservatism is at stake.
Last week Dick Cheney’s right-hand man, “Scooter” Libby, was convicted on four counts of lying to a jury about events related to prewar Iraq intelligence. For many in Washington it came as a shock. Libby is well liked, has an impeccable record, is usually scrupulous in his dealings and was close to Cheney, the real power in the Bush administration.
He lied to conceal that he had helped to disseminate a government leak that Joe Wilson, a critic of the administration’s case for prewar weapons of mass destruction, was unqualified to make such a claim and had been given an investigatory role only because his wife at the CIA had pulled some strings. It was a hit job in a town where attacks and counter-attacks are par for the course.
Why didn’t the vice-president or Libby just come out in public and say that Wilson was wrong, they stood by their intelligence and Wilson’s wife was the real reason why Wilson was sent to investigate claims that Saddam Hussein had sought to buy uranium in Niger? Because Wilson’s wife had been a covert CIA agent and revealing her identity might be a crime. Hence the campaign to disseminate the information as background through various journalists.
The campaign to smear Wilson was orchestrated by Cheney and Karl Rove, “Bush’s brain”. The first man to leak was Richard Armitage, the then deputy secretary of state, not Libby. But Libby was a team player. If he had confessed when asked, the news would have hit Washington before the last election. So he perjured himself. And the perjury will now land him in jail.
The Republicans have tried to argue that this is a minor issue. But since they impeached the last president for perjury in a civil suit, that’s a hard argument to make. Libby’s conviction, moreover, fits into the exact pattern the prosecutor, Patrick Fitzgerald, has used throughout his career. He targets big figures by trapping minor figures in lies and perjury and then squeezes them to tell more about the malfeasance of their superiors.
The question now is: will Libby cop a plea with Fitzgerald to reduce his jail time in return for some information on who really committed the crime of exposing a CIA agent’s identity? Most think that’s unlikely. Bush will pardon Libby — probably as soon as he leaves office in 2009.
“There’s a cloud over the White House as to what happened,” Fitzgerald said in his closing statement in the trial. Last week Fitzgerald declared the matter closed. But when he was asked if he could possibly reopen the inquiry at some point, he replied: “If information comes to light . . . we’ll, of course, do that.”
The salience of all this is simple. It gets to the question of whether Bush and Cheney deliberately misled the public on the reasons for going to war in Iraq. Why was Cheney so desperate to smear a minor critic of the intelligence? Why would he risk so much — including sending his key aide to jail — to stop any further inquiries? It looks awfully defensive — and far more compatible with the notion that the intelligence was fixed to fit the war than that the war was based on good-faith intelligence.
If that weren’t enough, last week eight US attorneys testified before the Senate judiciary committee. They were all Republican appointees and their job is to prosecute public corruption or malfeasance if it comes to light in their jurisdiction, regardless of who it is and what party they belong to. All eight testified that they had been pressured to bring cases against Democrats before last year’s elections. When they refused to do so, they found themselves fired after the election. All had exemplary performance records. The calls for them to go came from Republican officeholders in Washington.
The pattern may be even more widespread. Since Bush came to power, the number of local politicians investigated or indicted by US attorneys shows a disturbing pattern. A new study reported last Friday in The New York Times showed that out of 375 such cases, 10 were against independents, 67 were against Republicans and 298 were against Democrats. Under Bush and Rove, the whole concept of a fair justice system has taken a Nixonian turn.
Last week Alberto Gonzales, the attorney-general, agreed to relinquish all power to appoint US attorneys. He all but surrendered a critical part of his job, because the Senate does not trust him to enforce justice equally — his fundamental duty. The money in Washington is that he will be forced to resign soon.
On top of this is the treatment of injured Iraq war veterans. In the past few weeks, stories have been pouring in of squalid conditions at the Walter Reed veterans’ hospital — with rats, mould and insanitary conditions widespread — and, indeed, in veterans’ hospitals across the country. Iraq is not Vietnam. Americans are firmly behind the troops, in many cases deeply concerned about the way they have been treated in this war. They were sent in too few numbers, with insufficient armour and, because of medical technology, many are now surviving with injuries that would have killed them in previous wars.
That these men and women would return home and be treated with worse than bog-standard medical care is a source of rage especially in red (Republican) America, where so many service personnel come from.
The whole scandal speaks of the incompetence, negligence and callousness that has marked the management of the war from the word go. It has echoes of Hurricane Katrina in New Orleans — where poor people were left to fend for themselves in the face of government incompetence. Except that these people are soldiers, troops — heroes in many cases — trying to walk on one leg or learn how to speak again or see again, in conditions that are a disgrace for an advanced country.
Moreover, the image of the Republicans has also taken a turn for the worse. The Tories didn’t realise until much too late that they had become the “nasty party”, a party associated with callousness, corruption and prejudice.
But last weekend, at the biggest conservative activist conference of the year, on a podium shared by Mitt Romney and Rudy Giuliani, something sickening happened. I was there. Ann Coulter, the right’s answer to Michael Moore, gave a speech. She has written five bestsellers and after 9/11 opined that the West should “invade their [Islamic] countries, kill their leaders and convert them to Christianity”.
You would think this might have rendered her off limits to a Republican convention. You’d be wrong. They love her. This is what she said last week: “I was going to have a few comments on the other Democratic presidential candidate John Edwards, but it turns out you have to go into rehab if you use the word ‘faggot’, so I can’t really talk about Edwards.” The crowd laughed and cheered.
She insisted calling someone a “faggot” has nothing to do with homosexuality. If you want to appeal to all Americans, including anyone who thinks bigotry is ugly, the Republicans have a funny way of going about it.
Now recall the pickle the Republicans are in with their presidential candidates. The leader of the pack, Giuliani, is in favour of gay civil unions, backs abortion rights, supports gun control and had a publicly acrimonious break-up with his second wife. Giuliani’s current big lead could be vulnerable to collapse.
A Wall Street Journal poll found: “Fully three of four Republicans — including a majority of those backing the former New York city mayor — say they would have reservations if they learnt Mr Giuliani supports abortion rights and supports civil unions for gay couples.”
At the conference, Mitt Romney, the current darling of the religious right, got his biggest applause not for attacking the Democrats but for landing a blow on John McCain, his Republican rival for the nomination. McCain is hated by many Republicans almost as much as Hillary Clinton and once termed the religious right “agents of intolerance”. That gives some indication of the mood of the average Republican.
Is this the Republican crack-up that has been predicted confidently for much of the past 20 years? I don’t know. The thing about conservatives is that, in general, they don’t like to commit political suicide. They are more comfortable wielding power than some on the left. There are many Republicans who are dismayed by the corruption in Washington, appalled by the bigotry that seems to have taken over their party’s core, angry at the mismanagement of the Iraq war, furious over Katrina and even more worried about the massive debt this administration will leave the next generation. And it gets worse.
Usually, Republicans have national security to fall back on in electoral troubles. It rescued them after Watergate; it has sustained Bush for more than five years. But the war in Iraq has robbed them of this critical electoral tool. Dinesh D’Souza, a key figure in America’s conservative intelligentsia, understands what makes all this so lethal.
“If the left can convert national security — usually a source of strength for the right — into a liability, then it has vastly improved its chances for winning future elections . . . the entire conservative agenda, from tax cuts to school choice to restricting abortion, would be stalled,” he says.
“Moreover, the right’s political loss would be followed by a cultural assault seeking to demonise Bush as another Nixon and conservatives as dangerous fanatics who cannot again be trusted with power . . . the right risks losing everything.”
Losing everything? I don’t know. But for the Republicans, for the first time, it’s a real possibility.

Saturday, March 10, 2007

From a Reader to Andrew Sullivan
You wrote that your blog "can sometimes be an embarrassing series of recognitions of my own naivete". I often think of your blog as "The Education of Andrew". You are incredibly naive when it comes to the real, down-home nature of American conservatism. All this crap you complain about now - this is the real conservatism. This is the conservative id, broken through the conservative super-ego and run rampant. It's true there's some attempt by the super-ego to reassert some kind of control - you detailed several attempts last week, by Gates, Fitzgerald, etc. - but this is the real energy that underlies and animates much of American grassroots conservatism, and always has: a blend of intolerance, machismo, a cultural resentment stemming directly back to the Civil War, anti-intellectual know-nothingness, Christianism - with all its attendant arrogance, anti-democratic self-righteousness and hidden nihilism - and a just plain old blind pig-headedness, which GWB exemplifies in spades. Nary an Edmund Burke or Michael Oakeshott to be seen.
It's true the Bush years have been a perfect storm, the intersection of a particular modern conservative coalition with new media technologies, an almost equally corrupt liberalism, and world events, but if you cannot trace the strains we see in this current mess back into our long history, you just don't understand this country yet. Bush and company are not the exception, they're the proof; they're not something new, some aberration, they're just the same old same old come bubbling up from underneath and finally to power. And that's why Bush's (s)election in 2000 was the best thing that could have happened to this country, a real blessing in disguise.
Now we can see what's really down there in the dark. We can reject it, and go on - and we'd better go on, because with what we have to face nationally, internationally and globally, we don't have much more time to waste on this stupidity. "Conservatism", as we have known it, is over. Too bad the cost has been so high. But as Jung said, those who will not learn will be made to feel.

Thursday, March 08, 2007

Slowly but surely, the dark truth of a secret program of capture, torment, and possible murder of terror suspects is being revealed. In due time the grim details of this activity will destroy George Bush and his Administration's place in history.

Soviet-era compound in northern Poland was site of secret CIA interrogation, detentions
03/07/2007 @ 9:53 am
Filed by Larisa Alexandrovna and David Dastych
US, Britain asked Poland to join clandestine program


POLAND -- The CIA operated an interrogation and short-term detention facility for suspected terrorists within a Polish intelligence training school with the explicit approval of British and US authorities, according to British and Polish intelligence officials familiar with the arrangements.
Intelligence officials identify the site as a component of a Polish intelligence training school outside the northern Polish village of Stare Kiejkuty. While previously suspected, the facility has never been conclusively identified as being part of the CIA's secret rendition and detention program.
Only the Polish prime minister and top Polish intelligence brass were told of the plan, in which agents of the United States quietly shuttled detainees from other holding facilities around the globe for stopovers and short-term interrogation in Poland between late 2002 and 2004.
According to a confidential British intelligence memo shown to RAW STORY, Prime Minister Tony Blair told Poland's then-Prime Minister Leszek Miller to keep the information secret, even from his own government.
“Miller was asked to keep it as tight as possible,” the memo said.
The complex at Stare Kiejkuty, a Soviet-era compound once used by German intelligence in World War II, is best known as having been the only Russian intelligence training school to operate outside the Soviet Union. Its prominence in the Soviet era suggests that it may have been the facility first identified – but never named – when the Washington Post’s Dana Priest revealed the existence of the CIA’s secret prison network in November 2005.
Reached by telephone Monday, Priest would not discuss the allegations in her article beyond her original report.
CIA spokesman Paul Gimigliano would not confirm or deny any allegations about the Polish facility. He maintained the rendition program was legal and conducted “with great care.”
“The agency’s terrorist interrogation program has been conducted lawfully, with great care and close review, producing vital information that has helped disrupt plots and save lives,” Gimigliano said Monday. “That is also true of renditions, another key, lawful tool in the fight against terror.”
“The United States does not conduct or condone torture, nor does it transfer anyone to other countries for the purpose of torture,” he added.
US intelligence officials confirmed that the CIA had used the compound at Stare Kiejkuty in the past. Speaking generally about the agency’s program, a former senior official said the CIA had never conducted unlawful interrogations.
“We never tortured anyone,” one former senior intelligence official said on condition of anonymity. “We sent them to countries that did torture, but not on this scale.”
The official added that many agency staff had strong feelings about the rendition program. “Career people were really opposed to this.”
All intelligence sources interviewed said the CIA is no longer operating a rendition or secret detainment program.
Polish intelligence officials declined to comment. Zbigniew Siemiatkowski, the former head of Polish intelligence, told a Polish news agency in 2005, however, that the CIA had access to two internal zones at the Stare Kiejkuty training school. Current and former Polish authorities have adamantly denied that Poland played any role in the clandestine program.
US, United Kingdom invited Poland to join program in 2002
In April 2002, according to British foreign intelligence (MI6) sources, senior officials in the Bush and Blair administrations decided that the Bagram base near Kabul in Afghanistan could not operate successfully under the Bush administration’s “no holds barred” policy towards suspected terrorists.
MI6 officials say the two administrations then decided to fly high-value suspected terrorists to secret gulags in Eastern Europe. The CIA-operated flights would pass through the air space of a number of countries – among them Britain, Germany, Spain and Poland. European Union officials and human rights groups would later say these interrogations may have violated the Geneva Conventions and the United Nations Convention against Torture, to which the United States and Poland are both signatories.
After a series of secret meetings chaired by MI6 chief Sir John Scarlett in London and then-CIA Director George Tenet in Washington, Polish intelligence was invited to join the project, British and Polish intelligence sources say.
Authorities singled out a remote and infrequently used airfield in the northern Polish town of Szymany for transit flights; a nearby Polish intelligence training school at Stare Kiejkuty would serve as a detention-interrogation center for temporary detention and short-term interrogations.
The White House did not return two calls seeking comment. Tenet could not be reached.
Rendition programs were first employed by the Clinton administration in order to target suspected elements of al Qaeda. These covert operations, run out of the CIA, were used intermittently and on a limited basis. It was not until the Bush Administration that the use of extraordinary rendition became a matter of policy and was employed on a large scale.
The Szczytno-Szymany Airport
Szczytno-Szymany used to be a military airfield in northeastern Poland, one of many such airstrips that could accept the large Soviet-made military planes of the Warsaw Pact; before that, it had served as an airstrip for German Luftwaffe bombers targeting Warsaw in the Second World War. In 1996, seven years after Poland’s communist government fell, the military airfield was turned into a private company: Airports “Mazury-Szczytno.”
However, traffic wasn’t heavy enough to provide decent income to the state and private owners of the airfield, so motorcycle and car races were organized on the tarmac; small-scale production and repairs also buttressed the company’s budget.
But after the start of Operation Enduring Freedom – the US military campaign against Afghanistan in response to the Sept. 11, 2001 attacks – everything changed. In the years that followed, American planes began arriving from Afghanistan, continuing on to Morocco, Uzbekistan and Guantanamo Bay, according to Szymany locals and airport staff.
Then-Szymany airport manager Mariola Przewlocka told European Union investigators the flights were likely linked with the intelligence complex at Stare Kiejkuty, about 12 miles away from the airport.
Przewlocka said that whenever one of the suspected flights was scheduled to land, “orders were given directly by the regional border guards… emphasizing that the airport authorities should not approach the aircraft and that military staff and services alone” would handle landings.
“Money for the services was paid in cash, sometimes as much as four times the normal charge,” the former airport manager added. “Handling of the passengers aboard was carried out in a remote corner of the Szymany airstrip. People came in and out from four-wheel drive cars with shaded windows.”
The cars were seen traveling to and from the Stare Kiejkuty intelligence facility, where British and Polish intelligence officials say US agents conducted short-term interrogations before shuffling prisoners to other locations.
Przewlocka also spoke in detail with the Chicago Tribune, whose correspondent traveled to Szymany last month.
“Secret prisons” were likely temporary “black sites”
Former European and US intelligence officials indicate that the secret prisons across the European Union, first identified by the Washington Post, are likely not permanent locations, making them difficult to identify.
What some believe was a network of secret prisons was most probably a series of facilities used temporarily by the United States when needed, officials say. Interim “black sites” – secret facilities used for covert activities – can be as small as a room in a government building, which only becomes a black site when a prisoner is brought in for short-term detainment and interrogation.
For example, detainees could be shuffled from a temporary black site in one country to a temporary black site in another country, never staying long enough at either to attract notice. Such an arrangement, sources say, would allow plausible deniability by the host country as well as the US. Investigators looking for a permanent facility would never find one. Such a site, sources say, would have to be near an airport.
Washington-based security expert and president of Global Security John Pike says short-term detention in already existing facilities would be “sensible tradecraft” and a more likely scenario than a network of specific, long term prisons.
“A short-term operation does not develop a big signature and you don’t have a continual parade of people,” said Pike. “When it becomes noticeable, they move it all.”
“It’s a shell game,” he added.
Pressure from US and Britain to keep quiet
In the wake of the Washington Post expose, member countries of the European Union began to demand answers.
According to British and Polish intelligence officials, foreign journalists, and EU sources interviewed for this article, the governments of the countries participating in the US rendition and detention program were kept largely out of the loop. Officials say Bush and Blair administration contacts selectively chose politicians in the EU and other countries, keeping their respective governments in the dark.
Because only a select few within European Union member states were aware of the program, and because the prison network was transient, European Union investigators found it difficult to verify allegations of secret detention sites.
A ten-member EU delegation traveled to Poland in November 2006 to investigate Szymany airport and the facility at Stare Kiejkuty.
The team’s report indicates that key government officials first agreed to meet with the delegates, but declined to do so after their arrival.
The delegates requested interviews of 20 Polish government officials, journalists and others, but were allowed to speak with only nine. Of those interviewed, only a handful could offer any substantive information.
One of the more interesting interviews came from former Szczytno-Szymany Airport chairman Jerzy Kos. According to the report, Kos stated that at the time the airport was under his authority, it belonged to the Military Property Agency and was leased by his company.
Kos stated that after a Boeing 737 landed on Sept. 22, 2003, a standard military procedure came into force under which Polish Border Guards determined the character of incoming flights and expedited certain arrivals.
“The military procedure was a simplified one, including provision for no customs clearance,” Kos told investigators. He said he had “no information about the passengers as procedure was undertaken by soldiers and not the civilian airport staff.”
Kos asserted that during his tenure from 2003 to 2004, Gulfstream planes transferring through the airport were treated as military flights in the same fashion as the Border Guards had handled the Boeing 737 in September 2003.
Air traffic controllers “had been informed by the Warsaw-based Air Traffic Agency that Gulfstream planes would land at the airport by fax,” Kos told investigators.
Polish public television journalist Adam Krzykowski added more detail.
Krzykowski alleged that the September 2003 Boeing 737 carried a crew of seven and was joined at the Szymany airfield by five passengers who declared themselves businessmen. According to the EU report, Krzykowski maintained that all twelve “were American citizens.”
“The Boeing flight was not subject to standard border control procedure, but to a … simplified procedure [which] meant that no customs officers were present during the control and passengers were checked only on basis of a list delivered to the Border Guards,” he said. “According to the Border Guards, such a procedure is used when a person has already been checked up on previously.”
The final report of the European Union’s investigation into Poland, as well as the other countries alleged to be part of the rendition program, can be read here. Most of those the EU sought to question did not cooperate with investigators, including suspected governments, journalists and key officials in the United States.
Dana Priest, the Washington Post reporter who received a Pulitzer Prize for her article exposing the CIA’s secret detention centers, declined to speak with EU investigators.
“The Post never allows its reporters to testify to government inquiries no matter what government it is, so there was nothing unusual in that regard,” Priest said Monday.
The only member of the Bush Administration given leave to discuss the program with the EU was Secretary of State Condoleezza Rice, who said she expected American allies to cooperate and keep quiet about sensitive anti-terrorism operations.
The Reopening of Szymany Airport
The “prime-time” for Szymany International Airport seems to have ended in 2006, when the investigation by the European Parliament was finished without a clear result or definitive proof of “CIA secret prisons” existing in Poland.
Polish officials refused to cooperate and vehemently denied any role in the CIA program. The airport company had to suspend its activities due to a dispute over the ownership of the Szczytno-Szymany airfield.
In November 2006, the company signed a lease agreement with the Military Property Agency, which still owns the land and the facilities. This agreement opened the way for financing of the airport by the regional administration and the Polish government.
The Szymany airfield, now in civilian hands and allegedly free of “rendition flights,” will soon become a regional airport. Its beautiful location in the Masurian Lakes Region will likely spur its development, and the notoriety of its secret CIA flights could well become a draw for tourists.

Muriel Kane contributed research for this article and John Byrne contributed reporting.
Larisa Alexandrovna is Managing Investigative News Editor and intelligence and national security correspondent for Raw Story. She can be reached at:
larisa@rawstory.com
David Dastych is a former Polish intelligence operative, who served in the 1960s-1980s and was a double agent for the CIA from 1973 until his arrest in 1987 by then-communist Poland on charges of espionage. Dastych was released from prison in 1990 after the fall of communism and in the years since has voluntarily helped Western intelligence services with tracking the nuclear proliferation black market in Eastern Europe and the Middle East. After a serious injury in 1994 confined him to a wheelchair, Dastych began a second career as an investigative journalist covering terrorism, intelligence and organized crime.
RAW STORY INVESTIGATES

Wednesday, March 07, 2007

ANALYSIS
In public's mind, White House is guilty
Its campaign to discredit its detractors will be remembered after Libby is forgotten

Marc Sandalow, Washington Bureau Chief SF Chronicle Wednesday, March 7, 2007

Washington -- People will soon forget the details of the Lewis "Scooter" Libby case, if they knew them in the first place.
Whether the vice president's former chief of staff gets 25 years in prison or even a presidential pardon after his conviction Tuesday for lying and obstructing justice is of little consequence to most Americans.
What will endure is damning testimony that confirms the public's worst fears about the Bush administration's behavior during the lead-up to the war in Iraq and its truthfulness since then.
The monthlong trial established beyond a reasonable doubt that White House officials at the highest level conducted a campaign to discredit those who questioned their declarations about Iraq's weapon capabilities -- declarations that turned out to be wrong.
And the testimony showed that President Bush either was lying about the White House's role in outing a CIA officer at the center of the scandal or was kept in the dark by top aides who defied his orders to come forward.
"I don't know of anybody in my administration who leaked classified information,'' Bush declared in September 2003. "If somebody did leak classified information, I'd like to know about it, and we'll take the appropriate action. ... I want to know who the leakers are.''
Testimony at the trial made plain that when Bush spoke those words, Karl Rove, his top political aide; Ari Fleischer, his press secretary; Richard Armitage, the No. 2 man at the State Department; and Libby had each discussed the matter, on background, with reporters.
Prosecutors apparently did not feel they had enough evidence to prove that the leaks were criminal, and Libby's conviction was limited to his dishonesty. Yet the trial raised questions about who directed the leaks and why it took a federal investigation to uncover information that apparently was widely known inside the administration.
"The trial has been very embarrassing to the White House,'' said Jonathan Turley, a professor of law at George Washington University who has been closely monitoring the case.
"This is one of the most closed administrations in modern history, overtly hostile to any transparency over how it operates,'' Turley said. "The trial riveted the city because it gave a rare insight into the Bush White House. And what we saw was hardly flattering to either the president or the vice president.''
Bush watched news of the verdict on television in the Oval Office and was "saddened for Scooter Libby and his family,'' according to White House spokeswoman Dana Perino. Vice President Dick Cheney issued a two-sentence statement that said he was "disappointed with the verdict.''
Libby showed no emotion as the verdict was read and declined to talk to reporters as he left the courthouse.
"We have every confidence Mr. Libby ultimately will be vindicated,'' Libby's attorney Theodore Wells told reporters, pledging to push for a new trial or to appeal the verdict if that effort fails. "We believe Mr. Libby is totally innocent and that he didn't do anything wrong.''
Jurors spent 10 days dissecting the testimony of 19 witnesses and hundreds of pages of documents to determine whether Libby had lied to federal investigators and a grand jury about his role in the leaks or had simply suffered -- as he asserted -- from a poor memory.
The complexities of the case made it difficult to follow even for many Washingtonians with a stake in the outcome.
Libby was charged with impeding an investigation into whether anyone violated a little-known law that makes it a crime to disclose information about a covert agent with the intention of exposing the agent's identity.
The circumstances involved former Ambassador Joseph Wilson's trip to Niger in 2002 to gather evidence regarding allegations that Saddam Hussein was purchasing enriched uranium and other materials there that were needed to build a nuclear bomb in Iraq. Wilson, dispatched by the CIA, quickly determined that the charges were not credible and informed the administration. He was surprised to hear Bush repeat the allegations during his 2003 State of the Union address and wrote an op-ed piece in the New York Times in the summer of 2003 titled "What I didn't find in Africa.''
"Based on my experience with the administration in the months leading up to the war, I have little choice but to conclude that some of the intelligence related to Iraq's nuclear weapons program was twisted to exaggerate the Iraqi threat,'' Wilson wrote.
Within days of the article's appearance, administration officials, including Libby, began telling reporters that Wilson's wife, Valerie, worked for the CIA, which was classified information. The leaks appear to have been motivated by Cheney's concern about reports that his office had dispatched Joseph Wilson to Niger. Apparently they were intended to undermine Wilson's credibility by making it appear that he had been sent on a junket by his wife, who was known by her maiden name, Valerie Plame. Some Bush critics have suggested that Valerie Wilson's identity was leaked to ruin her career in retribution for her husband's outspoken criticism of the administration.
Libby told a grand jury that he simply repeated information told to him by Tim Russert, NBC Washington bureau chief. Russert testified that such a conversation never took place. Libby did not testify at his own trial, but his lawyers insisted his misstatements were the result of a bad memory, not any intention to deceive.
In the end, the jury found Libby guilty of one count of obstruction of justice, two counts of perjury and one count of lying to the FBI. Sentencing is scheduled for June 5. Libby faces up to 25 years in prison, but experts said it is unlikely he will receive a sentence longer than several years.
The felony convictions are likely to further erode the White House's credibility on Iraq, which is already at an all-time low.
Bush has never reconciled his public statements demanding to know the identity of the leakers with evidence that showed they included some of his top aides. If Bush didn't already know their identity, his most trusted advisers were willfully defying his instructions.
"We could clarify this thing very quickly if people who have got solid evidence would come forward and speak out. And I hope they would,'' Bush declared to no avail in 2003.
Among those who likely had a good knowledge of the leakers was Cheney, who discussed the identity of Wilson's wife with Libby a few weeks before it became public.
The trial showed the extent to which senior Bush aides, including Cheney, were alarmed at Wilson's public rebuke of their assertions about Iraq's nuclear weapons. Among the evidence submitted was a copy of Wilson's op-ed piece with Cheney's notes scribbled in the margin demanding to know if Wilson's wife had sent him there on a "junket.''
"Clearly Cheney understood immediately that this article could produce a cascading political crisis for the White House,'' Turley said of the evidence. "The trial revealed a surprising level of both hysteria and hypocrisy in the White House.''
The White House was not the only institution tarnished in the trial. The witnesses included at least nine prominent Washington journalists, whose testimony made plain the cozy relationship between some top administration officials and the reporters who cover them.
"This is a quintessential Washington morality play where there are no redeeming characters,'' Turley said.
The Libby verdict
What happened: Vice President Dick Cheney's former chief of staff, Lewis "Scooter" Libby, was convicted of lying and obstructing the investigation into how the name of CIA official Valerie Wilson - wife of an Iraq war critic - was leaked to reporters in 2003.
Counts: Libby was found guilty of one count of obstruction of justice, two counts of perjury and one count of making false statements to the FBI. He was acquitted on one count of making false statements to the FBI.
Punishment: At his sentencing June 5, he could receive a total of 25 years in prison and could be fined as much as $1 million.
Case chronology
2003
Jan. 28: President Bush asserts in his State of the Union address: "The British government has learned that Saddam Hussein recently sought significant quantities of uranium from Africa."
March 19-20: The U.S.-led invasion of Iraq begins.
May 6: New York Times columnist Nicholas D. Kristof reports that a former ambassador, whom he does not name, had been sent to Niger in 2002 and reported to the CIA and State Department well before Bush's speech that the uranium story was based on obviously forged documents.
May 29: Lewis "Scooter" Libby, Vice President Dick Cheney's chief of staff, asks Marc Grossman, an undersecretary of state, for information about the ambassador's travel to Niger. Grossman later tells Libby that Joseph Wilson was the former ambassador.
June 11 or 12: Grossman tells Libby that Wilson's wife works at the CIA and that State Department personnel are saying Wilson's wife was involved in planning the trip. A senior CIA officer gives him similar information, as does Cheney's top press aide, Cathie Martin, who had learned it from CIA spokesman Bill Harlow.
June 12: Cheney advises Libby that Wilson's wife works at the CIA.
June 13: Deputy Secretary of State Richard Armitage tells Washington Post reporter Bob Woodward that Wilson's wife works for the CIA.
June 23: Libby tells New York Times reporter Judith Miller that Wilson's wife might work at a CIA bureau. Libby denies saying that.
July 7: Libby meets with then-White House press secretary Ari Fleischer. Fleischer says Libby tells him that Wilson's wife works at the CIA and that the information is "hush hush." Libby denies that.
July 8: Libby meets with Miller. She recalls Libby saying he believes Wilson's wife works for the CIA. Libby denies saying that.
July 8: Syndicated newspaper columnist Robert Novak interviews Armitage, who tells him that Wilson's wife works for the CIA. Novak says this was confirmed the next day by White House political adviser Karl Rove.
July 10: Libby calls Tim Russert, NBC Washington bureau chief, to complain about a colleague's news coverage. At the end of the conversation, Libby says, Russert tells him that "all the reporters know" that Wilson's wife works at the CIA. Libby says he was surprised to hear it. Russert denies saying it.
July 11: Fleischer tells two reporters that Wilson's wife works for the CIA. Rove tells Time magazine's Matthew Cooper that Wilson's wife works for the CIA.
July 12: Libby confirms to Cooper that he has heard that Wilson's wife was involved in sending Wilson on the trip. Libby also speaks to Miller and discusses Wilson's wife and says that she works at the CIA. Libby claims he told Cooper and Miller he only knew about Valerie Wilson from talking to other reporters.
July 14: Novak reports that Wilson's wife is a CIA operative on weapons of mass destruction and that two senior administration officials, whom Novak did not name, said she suggested sending her husband to Niger to investigate the uranium story.
Sept. 26: A criminal investigation is authorized to determine who leaked Valerie Wilson's identity to reporters. A short time later, Armitage tells investigators that he may have inadvertently leaked her identity to Woodward.
Dec. 30: U.S. Attorney Patrick Fitzgerald is named to head the leak investigation.
2004
January: A grand jury begins investigating the possibility of federal criminal law violations.
March 5 and March 24: Libby testifies before the grand jury, saying that he forgot the information about Valerie Wilson working for the CIA until he heard it from Russert.
2005
Oct. 28: Libby is indicted on five counts: one of obstruction of justice and two counts each of false statement and perjury.
2006
Sept. 7: Armitage admits he leaked Valerie Wilson's identity to Novak and Woodward. Armitage says he did not realize her job was covert.
2007
Jan. 16: Libby's trial begins with jury selection.
March 6: Jurors return guilty verdicts on four of five counts.
Associated Press

© 2007 Hearst Communications Inc.

Sunday, March 04, 2007

March 4, 2007
Darwin’s God
By ROBIN MARANTZ HENIG
God has always been a puzzle for Scott Atran. When he was 10 years old, he scrawled a plaintive message on the wall of his bedroom in Baltimore. “God exists,” he wrote in black and orange paint, “or if he doesn’t, we’re in trouble.” Atran has been struggling with questions about religion ever since — why he himself no longer believes in God and why so many other people, everywhere in the world, apparently do.
Call it God; call it superstition; call it, as Atran does, “belief in hope beyond reason” — whatever you call it, there seems an inherent human drive to believe in something transcendent, unfathomable and otherworldly, something beyond the reach or understanding of science. “Why do we cross our fingers during turbulence, even the most atheistic among us?” asked Atran when we spoke at his Upper West Side pied-à-terre in January. Atran, who is 55, is an anthropologist at the National Center for Scientific Research in Paris, with joint appointments at the University of Michigan and the John Jay College of Criminal Justice in New York. His research interests include cognitive science and evolutionary biology, and sometimes he presents students with a wooden box that he pretends is an African relic. “If you have negative sentiments toward religion,” he tells them, “the box will destroy whatever you put inside it.” Many of his students say they doubt the existence of God, but in this demonstration they act as if they believe in something. Put your pencil into the magic box, he tells them, and the nonbelievers do so blithely. Put in your driver’s license, he says, and most do, but only after significant hesitation. And when he tells them to put in their hands, few will.
If they don’t believe in God, what exactly are they afraid of?
Atran first conducted the magic-box demonstration in the 1980s, when he was at Cambridge University studying the nature of religious belief. He had received a doctorate in anthropology from Columbia University and, in the course of his fieldwork, saw evidence of religion everywhere he looked — at archaeological digs in Israel, among the Mayans in Guatemala, in artifact drawers at the American Museum of Natural History in New York. Atran is Darwinian in his approach, which means he tries to explain behavior by how it might once have solved problems of survival and reproduction for our early ancestors. But it was not clear to him what evolutionary problems might have been solved by religious belief. Religion seemed to use up physical and mental resources without an obvious benefit for survival. Why, he wondered, was religion so pervasive, when it was something that seemed so costly from an evolutionary point of view?
The magic-box demonstration helped set Atran on a career studying why humans might have evolved to be religious, something few people were doing back in the ’80s. Today, the effort has gained momentum, as scientists search for an evolutionary explanation for why belief in God exists — not whether God exists, which is a matter for philosophers and theologians, but why the belief does.
This is different from the scientific assault on religion that has been garnering attention recently, in the form of best-selling books from scientific atheists who see religion as a scourge. In “The God Delusion,” published last year and still on best-seller lists, the Oxford evolutionary biologist Richard Dawkins concludes that religion is nothing more than a useless, and sometimes dangerous, evolutionary accident. “Religious behavior may be a misfiring, an unfortunate byproduct of an underlying psychological propensity which in other circumstances is, or once was, useful,” Dawkins wrote. He is joined by two other best-selling authors — Sam Harris, who wrote “The End of Faith,” and Daniel Dennett, a philosopher at Tufts University who wrote “Breaking the Spell.” The three men differ in their personal styles and whether they are engaged in a battle against religiosity, but their names are often mentioned together. They have been portrayed as an unholy trinity of neo-atheists, promoting their secular world view with a fervor that seems almost evangelical.
Lost in the hullabaloo over the neo-atheists is a quieter and potentially more illuminating debate. It is taking place not between science and religion but within science itself, specifically among the scientists studying the evolution of religion. These scholars tend to agree on one point: that religious belief is an outgrowth of brain architecture that evolved during early human history. What they disagree about is why a tendency to believe evolved, whether it was because belief itself was adaptive or because it was just an evolutionary byproduct, a mere consequence of some other adaptation in the evolution of the human brain.
Which is the better biological explanation for a belief in God — evolutionary adaptation or neurological accident? Is there something about the cognitive functioning of humans that makes us receptive to belief in a supernatural deity? And if scientists are able to explain God, what then? Is explaining religion the same thing as explaining it away? Are the nonbelievers right, and is religion at its core an empty undertaking, a misdirection, a vestigial artifact of a primitive mind? Or are the believers right, and does the fact that we have the mental capacities for discerning God suggest that it was God who put them there?
In short, are we hard-wired to believe in God? And if we are, how and why did that happen?
“All of our raptures and our drynesses, our longings and pantings, our questions and beliefs . . . are equally organically founded,” William James wrote in “The Varieties of Religious Experience.” James, who taught philosophy and experimental psychology at Harvard for more than 30 years, based his book on a 1901 lecture series in which he took some early tentative steps at breaching the science-religion divide.
In the century that followed, a polite convention generally separated science and religion, at least in much of the Western world. Science, as the old trope had it, was assigned the territory that describes how the heavens go; religion, how to go to heaven.
Anthropologists like Atran and psychologists as far back as James had been looking at the roots of religion, but the mutual hands-off policy really began to shift in the 1990s. Religion made incursions into the traditional domain of science with attempts to bring intelligent design into the biology classroom and to choke off human embryonic stem-cell research on religious grounds. Scientists responded with counterincursions. Experts from the hard sciences, like evolutionary biology and cognitive neuroscience, joined anthropologists and psychologists in the study of religion, making God an object of scientific inquiry.
The debate over why belief evolved is between byproduct theorists and adaptationists. You might think that the byproduct theorists would tend to be nonbelievers, looking for a way to explain religion as a fluke, while the adaptationists would be more likely to be believers who can intuit the emotional, spiritual and community advantages that accompany faith. Or you might think they would all be atheists, because what believer would want to subject his own devotion to rationalism’s cold, hard scrutiny? But a scientist’s personal religious view does not always predict which side he will take. And this is just one sign of how complex and surprising this debate has become.
Angels, demons, spirits, wizards, gods and witches have peppered folk religions since mankind first started telling stories. Charles Darwin noted this in “The Descent of Man.” “A belief in all-pervading spiritual agencies,” he wrote, “seems to be universal.” According to anthropologists, religions that share certain supernatural features — belief in a noncorporeal God or gods, belief in the afterlife, belief in the ability of prayer or ritual to change the course of human events — are found in virtually every culture on earth.
This is certainly true in the United States. About 6 in 10 Americans, according to a 2005 Harris Poll, believe in the devil and hell, and about 7 in 10 believe in angels, heaven and the existence of miracles and of life after death. A 2006 survey at Baylor University found that 92 percent of respondents believe in a personal God — that is, a God with a distinct set of character traits ranging from “distant” to “benevolent.”
When a trait is universal, evolutionary biologists look for a genetic explanation and wonder how that gene or genes might enhance survival or reproductive success. In many ways, it’s an exercise in post-hoc hypothesizing: what would have been the advantage, when the human species first evolved, for an individual who happened to have a mutation that led to, say, a smaller jaw, a bigger forehead, a better thumb? How about certain behavioral traits, like a tendency for risk-taking or for kindness?
Atran saw such questions as a puzzle when applied to religion. So many aspects of religious belief involve misattribution and misunderstanding of the real world. Wouldn’t this be a liability in the survival-of-the-fittest competition? To Atran, religious belief requires taking “what is materially false to be true” and “what is materially true to be false.” One example of this is the belief that even after someone dies and the body demonstrably disintegrates, that person will still exist, will still be able to laugh and cry, to feel pain and joy. This confusion “does not appear to be a reasonable evolutionary strategy,” Atran wrote in “In Gods We Trust: The Evolutionary Landscape of Religion” in 2002. “Imagine another animal that took injury for health or big for small or fast for slow or dead for alive. It’s unlikely that such a species could survive.” He began to look for a sideways explanation: if religious belief was not adaptive, perhaps it was associated with something else that was.
Atran intended to study mathematics when he entered Columbia as a precocious 17-year-old. But he was distracted by the radical politics of the late ’60s. One day in his freshman year, he found himself at an antiwar rally listening to Margaret Mead, then perhaps the most famous anthropologist in America. Atran, dressed in a flamboyant Uncle Sam suit, stood up and called her a sellout for saying the protesters should be writing to their congressmen instead of staging demonstrations. “Young man,” the unflappable Mead said, “why don’t you come see me in my office?”
Atran, equally unflappable, did go to see her — and ended up working for Mead, spending much of his time exploring the cabinets of curiosities in her tower office at the American Museum of Natural History. Soon he switched his major to anthropology.
Many of the museum specimens were religious, Atran says. So were the artifacts he dug up on archaeological excursions in Israel in the early ’70s. Wherever he turned, he encountered the passion of religious belief. Why, he wondered, did people work so hard against their preference for logical explanations to maintain two views of the world, the real and the unreal, the intuitive and the counterintuitive?
Maybe cognitive effort was precisely the point. Maybe it took less mental work than Atran realized to hold belief in God in one’s mind. Maybe, in fact, belief was the default position for the human mind, something that took no cognitive effort at all.
While still an undergraduate, Atran decided to explore these questions by organizing a conference on universal aspects of culture and inviting all his intellectual heroes: the linguist Noam Chomsky, the psychologist Jean Piaget, the anthropologists Claude Levi-Strauss and Gregory Bateson (who was also Margaret Mead’s ex-husband), and the Nobel Prize-winning biologists Jacques Monod and Francois Jacob. It was 1974, and the only site he could find for the conference was just outside Paris. Atran was a scraggly 22-year-old with a guitar who had learned his French from comic books. To his astonishment, everyone he invited agreed to come.
Atran is a sociable man with sharp hazel eyes, who sparks provocative conversations the way other men pick bar fights. As he traveled in the ’70s and ’80s, he accumulated friends who were thinking about the issues he was: how culture is transmitted among human groups and what evolutionary function it might serve. “I started looking at history, and I wondered why no society ever survived more than three generations without a religious foundation as its raison d’être,” he says. Soon he turned to an emerging subset of evolutionary theory — the evolution of human cognition.
Some cognitive scientists think of brain functioning in terms of modules, a series of interconnected machines, each one responsible for a particular mental trick. They do not tend to talk about a God module per se; they usually consider belief in God a consequence of other mental modules.
Religion, in this view, is “a family of cognitive phenomena that involves the extraordinary use of everyday cognitive processes,” Atran wrote in “In Gods We Trust.” “Religions do not exist apart from the individual minds that constitute them and the environments that constrain them, any more than biological species and varieties exist independently of the individual organisms that compose them and the environments that conform them.”
At around the time “In Gods We Trust” appeared five years ago, a handful of other scientists — Pascal Boyer, now at Washington University; Justin Barrett, now at Oxford; Paul Bloom at Yale — were addressing these same questions. In synchrony they were moving toward the byproduct theory.
Darwinians who study physical evolution distinguish between traits that are themselves adaptive, like having blood cells that can transport oxygen, and traits that are byproducts of adaptations, like the redness of blood. There is no survival advantage to blood’s being red instead of turquoise; it is just a byproduct of the trait that is adaptive, having blood that contains hemoglobin.
Something similar explains aspects of brain evolution, too, say the byproduct theorists. Which brings us to the idea of the spandrel.
Stephen Jay Gould, the famed evolutionary biologist at Harvard who died in 2002, and his colleague Richard Lewontin proposed “spandrel” to describe a trait that has no adaptive value of its own. They borrowed the term from architecture, where it originally referred to the V-shaped structure formed between two rounded arches. The structure is not there for any purpose; it is there because that is what happens when arches align.
In architecture, a spandrel can be neutral or it can be made functional. Building a staircase, for instance, creates a space underneath that is innocuous, just a blank sort of triangle. But if you put a closet there, the under-stairs space takes on a function, unrelated to the staircase’s but useful nonetheless. Either way, functional or nonfunctional, the space under the stairs is a spandrel, an unintended byproduct.
“Natural selection made the human brain big,” Gould wrote, “but most of our mental properties and potentials may be spandrels — that is, nonadaptive side consequences of building a device with such structural complexity.”
The possibility that God could be a spandrel offered Atran a new way of understanding the evolution of religion. But a spandrel of what, exactly?
Hardships of early human life favored the evolution of certain cognitive tools, among them the ability to infer the presence of organisms that might do harm, to come up with causal narratives for natural events and to recognize that other people have minds of their own with their own beliefs, desires and intentions. Psychologists call these tools, respectively, agent detection, causal reasoning and theory of mind.
Agent detection evolved because assuming the presence of an agent — which is jargon for any creature with volitional, independent behavior — is more adaptive than assuming its absence. If you are a caveman on the savannah, you are better off presuming that the motion you detect out of the corner of your eye is an agent and something to run from, even if you are wrong. If it turns out to have been just the rustling of leaves, you are still alive; if what you took to be leaves rustling was really a hyena about to pounce, you are dead.
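The asymmetry in this argument can be made concrete with a toy expected-cost calculation. The numbers below are purely illustrative assumptions, not figures from the research described here; the point is only that when misses are catastrophic and false alarms are cheap, always assuming an agent is the cheaper policy even though it is usually wrong.

```python
# Toy expected-cost model of agent detection (illustrative numbers only).
# A rustle is a predator with small probability p; a miss is catastrophic,
# while a false alarm merely wastes a little energy.

def expected_cost(p_predator, cost_flee, cost_death, assume_agent):
    """Expected cost of a fixed policy toward an ambiguous rustle."""
    if assume_agent:
        # Always flee: pay the small fleeing cost every time.
        return cost_flee
    # Never flee: pay nothing unless it really was a predator.
    return p_predator * cost_death

p = 0.01                 # predators are rare
flee, death = 1.0, 1000.0  # fleeing is cheap, death is not

print(expected_cost(p, flee, death, assume_agent=True))   # 1.0
print(expected_cost(p, flee, death, assume_agent=False))  # 10.0
```

Under these hypothetical costs, the jumpy policy is ten times cheaper on average, which is the sense in which over-detecting agents can be adaptive.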
A classic experiment from the 1940s by the psychologists Fritz Heider and Marianne Simmel suggested that imputing agency is so automatic that people may do it even for geometric shapes. For the experiment, subjects watched a film of triangles and circles moving around. When asked what they had been watching, the subjects used words like “chase” and “capture.” They did not just see the random movement of shapes on a screen; they saw pursuit, planning, escape.
So if there is motion just out of our line of sight, we presume it is caused by an agent, an animal or person with the ability to move independently. This usually operates in one direction only; lots of people mistake a rock for a bear, but almost no one mistakes a bear for a rock.
What does this mean for belief in the supernatural? It means our brains are primed for it, ready to presume the presence of agents even when such presence confounds logic. “The most central concepts in religions are related to agents,” Justin Barrett, a psychologist, wrote in his 2004 summary of the byproduct theory, “Why Would Anyone Believe in God?” Religious agents are often supernatural, he wrote, “people with superpowers, statues that can answer requests or disembodied minds that can act on us and the world.”
A second mental module that primes us for religion is causal reasoning. The human brain has evolved the capacity to impose a narrative, complete with chronology and cause-and-effect logic, on whatever it encounters, no matter how apparently random. “We automatically, and often unconsciously, look for an explanation of why things happen to us,” Barrett wrote, “and ‘stuff just happens’ is no explanation. Gods, by virtue of their strange physical properties and their mysterious superpowers, make fine candidates for causes of many of these unusual events.” The ancient Greeks believed thunder was the sound of Zeus’s thunderbolt. Similarly, a contemporary woman whose cancer treatment works despite 10-to-1 odds might look for a story to explain her survival. It fits better with her causal-reasoning tool for her recovery to be a miracle, or a reward for prayer, than for it to be just a lucky roll of the dice.
A third cognitive trick is a kind of social intuition known as theory of mind. It’s an odd phrase for something so automatic, since the word “theory” suggests formality and self-consciousness. Other terms have been used for the same concept, like intentional stance and social cognition. One good alternative is the term Atran uses: folkpsychology.
Folkpsychology, as Atran and his colleagues see it, is essential to getting along in the contemporary world, just as it has been since prehistoric times. It allows us to anticipate the actions of others and to lead others to believe what we want them to believe; it is at the heart of everything from marriage to office politics to poker. People without this trait, like those with severe autism, are impaired, unable to imagine themselves in other people’s heads.
The process begins with positing the existence of minds, our own and others’, that we cannot see or feel. This leaves us open, almost instinctively, to belief in the separation of the body (the visible) and the mind (the invisible). If you can posit minds in other people that you cannot verify empirically, suggests Paul Bloom, a psychologist and the author of “Descartes’ Baby,” published in 2004, it is a short step to positing minds that do not have to be anchored to a body. And from there, he said, it is another short step to positing an immaterial soul and a transcendent God.
The traditional psychological view has been that until about age 4, children think that minds are permeable and that everyone knows whatever the child himself knows. To a young child, everyone is infallible. All other people, especially Mother and Father, are thought to have the same sort of insight as an all-knowing God.
But at a certain point in development, this changes. (Some new research suggests this might occur as early as 15 months.) The “false-belief test” is a classic experiment that highlights the boundary. Children watch a puppet show with a simple plot: John comes onstage holding a marble, puts it in Box A and walks off. Mary comes onstage, opens Box A, takes out the marble, puts it in Box B and walks off. John comes back onstage. The children are asked, Where will John look for the marble?
Very young children, or autistic children of any age, say John will look in Box B, since they know that’s where the marble is. But older children give a more sophisticated answer. They know that John never saw Mary move the marble and that as far as he is concerned it is still where he put it, in Box A. Older children have developed a theory of mind; they understand that other people sometimes have false beliefs. Even though they know that the marble is in Box B, they respond that John will look for it in Box A.
The adaptive advantage of folkpsychology is obvious. According to Atran, our ancestors needed it to survive their harsh environment, since folkpsychology allowed them to “rapidly and economically” distinguish good guys from bad guys. But how did folkpsychology — an understanding of ordinary people’s ordinary minds — allow for a belief in supernatural, omniscient minds? And if the byproduct theorists are right and these beliefs were of little use in finding food or leaving more offspring, why did they persist?
Atran ascribes the persistence to evolutionary misdirection, which, he says, happens all the time: “Evolution always produces something that works for what it works for, and then there’s no control for however else it’s used.” On a sunny weekday morning, over breakfast at a French cafe on upper Broadway, he tried to think of an analogy and grinned when he came up with an old standby: women’s breasts. Because they are associated with female hormones, he explained, full breasts indicate a woman is fertile, and the evolution of the male brain’s preference for them was a clever mating strategy. But breasts are now used for purposes unrelated to reproduction, to sell anything from deodorant to beer. “A Martian anthropologist might look at this and say, ‘Oh, yes, so these breasts must have somehow evolved to sell hygienic stuff or food to human beings,’ ” Atran said. But the Martian would, of course, be wrong. Equally wrong would be to make the same mistake about religion, thinking it must have evolved to make people behave a certain way or feel a certain allegiance.
That is what most fascinated Atran. “Why is God in there?” he wondered.
The idea of an infallible God is comfortable and familiar, something children readily accept. You can see this in the experiment Justin Barrett conducted recently — a version of the traditional false-belief test but with a religious twist. Barrett showed young children a box with a picture of crackers on the outside. What do you think is inside this box? he asked, and the children said, “Crackers.” Next he opened it and showed them that the box was filled with rocks. Then he asked two follow-up questions: What would your mother say is inside this box? And what would God say?
As earlier theory-of-mind experiments already showed, 3- and 4-year-olds tended to think Mother was infallible, and since the children knew the right answer, they assumed she would know it, too. They usually responded that Mother would say the box contained rocks. But 5- and 6-year-olds had learned that Mother, like any other person, could hold a false belief in her mind, and they tended to respond that she would be fooled by the packaging and would say, “Crackers.”
And what would God say? No matter what their age, the children, who were all Protestants, told Barrett that God would answer, “Rocks.” This was true even for the older children, who, as Barrett understood it, had developed folkpsychology and had used it when predicting a wrong response for Mother. They had learned that, in certain situations, people could be fooled — but they had also learned that there is no fooling God.
The bottom line, according to byproduct theorists, is that children are born with a tendency to believe in omniscience, invisible minds, immaterial souls — and then they grow up in cultures that fill their minds, hard-wired for belief, with specifics. It is a little like language acquisition, Paul Bloom says, with the essential difference that language is a biological adaptation and religion, in his view, is not. We are born with an innate facility for language but the specific language we learn depends on the environment in which we are raised. In much the same way, he says, we are born with an innate tendency for belief, but the specifics of what we grow up believing — whether there is one God or many, whether the soul goes to heaven or occupies another animal after death — are culturally shaped.
Whatever the specifics, certain beliefs can be found in all religions. Those that prevail, according to the byproduct theorists, are those that fit most comfortably with our mental architecture. Psychologists have shown, for instance, that people attend to, and remember, things that are unfamiliar and strange, but not so strange as to be impossible to assimilate. Ideas about God or other supernatural agents tend to fit these criteria. They are what Pascal Boyer, an anthropologist and psychologist, called “minimally counterintuitive”: weird enough to get your attention and lodge in your memory but not so weird that you reject them altogether. A tree that talks is minimally counterintuitive, and you might believe it as a supernatural agent. A tree that talks and flies and time-travels is maximally counterintuitive, and you are more likely to reject it.
Atran, along with Ara Norenzayan of the University of British Columbia, studied the idea of minimally counterintuitive agents earlier this decade. They presented college students with lists of fantastical creatures and asked them to choose the ones that seemed most “religious.” The convincingly religious agents, the students said, were not the most outlandish — not the turtle that chatters and climbs or the squealing, flowering marble — but those that were just outlandish enough: giggling seaweed, a sobbing oak, a talking horse. Giggling seaweed meets the requirement of being minimally counterintuitive, Atran wrote. So does a God who has a human personality except that he knows everything or a God who has a mind but has no body.
It is not enough for an agent to be minimally counterintuitive for it to earn a spot in people’s belief systems. An emotional component is often needed, too, if belief is to take hold. “If your emotions are involved, then that’s the time when you’re most likely to believe whatever the religion tells you to believe,” Atran says. Religions stir up emotions through their rituals — swaying, singing, bowing in unison during group prayer, sometimes working people up to a state of physical arousal that can border on frenzy. And religions gain strength during the natural heightening of emotions that occurs in times of personal crisis, when the faithful often turn to shamans or priests. The most intense personal crisis, for which religion can offer powerfully comforting answers, is when someone comes face to face with mortality.
In John Updike’s celebrated early short story “Pigeon Feathers,” 14-year-old David spends a lot of time thinking about death. He suspects that adults are lying when they say his spirit will live on after he dies. He keeps catching them in inconsistencies when he asks where exactly his soul will spend eternity. “Don’t you see,” he cries to his mother, “if when we die there’s nothing, all your sun and fields and what not are all, ah, horror? It’s just an ocean of horror.”
The story ends with David’s tiny revelation and his boundless relief. The boy gets a gun for his 15th birthday, which he uses to shoot down some pigeons that have been nesting in his grandmother’s barn. Before he buries them, he studies the dead birds’ feathers. He is amazed by their swirls of color, “designs executed, it seemed, in a controlled rapture.” And suddenly the fears that have plagued him are lifted, and with a “slipping sensation along his nerves that seemed to give the air hands, he was robed in this certainty: that the God who had lavished such craft upon these worthless birds would not destroy His whole Creation by refusing to let David live forever.”
Fear of death is an undercurrent of belief. The spirits of dead ancestors, ghosts, immortal deities, heaven and hell, the everlasting soul: the notion of spiritual existence after death is at the heart of almost every religion. According to some adaptationists, this is part of religion’s role, to help humans deal with the grim certainty of death. Believing in God and the afterlife, they say, is how we make sense of the brevity of our time on earth, how we give meaning to this brutish and short existence. Religion can offer solace to the bereaved and comfort to the frightened.
But the spandrelists counter that saying these beliefs are consolation does not mean they offered an adaptive advantage to our ancestors. “The human mind does not produce adequate comforting delusions against all situations of stress or fear,” wrote Pascal Boyer, a leading byproduct theorist, in “Religion Explained,” which came out a year before Atran’s book. “Indeed, any organism that was prone to such delusions would not survive long.”
Whether or not it is adaptive, belief in the afterlife gains power in two ways: from the intensity with which people wish it to be true and from the confirmation it seems to get from the real world. This brings us back to folkpsychology. We try to make sense of other people partly by imagining what it is like to be them, an adaptive trait that allowed our ancestors to outwit potential enemies. But when we think about being dead, we run into a cognitive wall. How can we possibly think about not thinking? “Try to fill your consciousness with the representation of no-consciousness, and you will see the impossibility of it,” the Spanish philosopher Miguel de Unamuno wrote in “Tragic Sense of Life.” “The effort to comprehend it causes the most tormenting dizziness. We cannot conceive of ourselves as not existing.”
Much easier, then, to imagine that the thinking somehow continues. This is what young children seem to do, as a study at Florida Atlantic University demonstrated a few years ago. Jesse Bering and David Bjorklund, the psychologists who conducted the study, used finger puppets to act out the story of a mouse, hungry and lost, who is spotted by an alligator. “Well, it looks like Brown Mouse got eaten by Mr. Alligator,” the narrator says at the end. “Brown Mouse is not alive anymore.”
Afterward, Bering and Bjorklund asked their subjects, ages 4 to 12, what it meant for Brown Mouse to be “not alive anymore.” Is he still hungry? Is he still sleepy? Does he still want to go home? Most said the mouse no longer needed to eat or drink. But a large proportion, especially the younger ones, said that he still had thoughts, still loved his mother and still liked cheese. The children understood what it meant for the mouse’s body to cease to function, but many believed that something about the mouse was still alive.
“Our psychological architecture makes us think in particular ways,” says Bering, now at Queen’s University Belfast, in Northern Ireland. “In this study, it seems, the reason afterlife beliefs are so prevalent is that underlying them is our inability to simulate our nonexistence.”
It might be just as impossible to simulate the nonexistence of loved ones. A large part of any relationship takes place in our minds, Bering said, so it’s natural for it to continue much as before after the other person’s death. It is easy to forget that your sister is dead when you reach for the phone to call her, since your relationship was based so much on memory and imagined conversations even when she was alive. In addition, our agent-detection device sometimes confirms the sensation that the dead are still with us. The wind brushes our cheek, a spectral shape somehow looks familiar and our agent detection goes into overdrive. Dreams, too, have a way of confirming belief in the afterlife, with dead relatives appearing in dreams as if from beyond the grave, seeming very much alive.
Belief is our fallback position, according to Bering; it is our reflexive style of thought. “We have a basic psychological capacity that allows anyone to reason about unexpected natural events, to see deeper meaning where there is none,” he says. “It’s natural; it’s how our minds work.”
Intriguing as the spandrel logic might be, there is another way to think about the evolution of religion: that religion evolved because it offered survival advantages to our distant ancestors. This is where the action is in the science of God debate, with a coterie of adaptationists arguing on behalf of the primary benefits, in terms of survival advantages, of religious belief.
The trick in thinking about adaptation is that even if a trait offers no survival advantage today, it might have had one long ago. This is how Darwinians explain how certain physical characteristics persist even if they do not currently seem adaptive — by asking whether they might have helped our distant ancestors form social groups, feed themselves, find suitable mates or keep from getting killed. A facility for storing calories as fat, for instance, which is a detriment in today’s food-rich society, probably helped our ancestors survive cyclical famines.
So trying to explain the adaptiveness of religion means looking for how it might have helped early humans survive and reproduce. As some adaptationists see it, this could have worked on two levels, individual and group. Religion made people feel better, less tormented by thoughts about death, more focused on the future, more willing to take care of themselves. As William James put it, religion filled people with “a new zest which adds itself like a gift to life . . . an assurance of safety and a temper of peace and, in relation to others, a preponderance of loving affections.”
Such sentiments, some adaptationists say, made the faithful better at finding and storing food, for instance, and helped them attract better mates because of their reputations for morality, obedience and sober living. The advantage might have worked at the group level too, with religious groups outlasting others because they were more cohesive, more likely to contain individuals willing to make sacrifices for the group and more adept at sharing resources and preparing for warfare.
One of the most vocal adaptationists is David Sloan Wilson, an occasional thorn in the side of both Scott Atran and Richard Dawkins. Wilson, an evolutionary biologist at the State University of New York at Binghamton, focuses much of his argument at the group level. “Organisms are a product of natural selection,” he wrote in “Darwin’s Cathedral: Evolution, Religion, and the Nature of Society,” which came out in 2002, the same year as Atran’s book, and staked out the adaptationist view. “Through countless generations of variation and selection, [organisms] acquire properties that enable them to survive and reproduce in their environments. My purpose is to see if human groups in general, and religious groups in particular, qualify as organismic in this sense.”
Wilson’s father was Sloan Wilson, author of “The Man in the Gray Flannel Suit,” an emblem of mid-’50s suburban anomie that was turned into a film starring Gregory Peck. Sloan Wilson became a celebrity, with young women asking for his autograph, especially after his next novel, “A Summer Place,” became another blockbuster movie. The son grew up wanting to do something to make his famous father proud.
“I knew I couldn’t be a novelist,” said Wilson, who crackled with intensity during a telephone interview, “so I chose something as far as possible from literature — I chose science.” He is disarmingly honest about what motivated him: “I was very ambitious, and I wanted to make a mark.” He chose to study human evolution, he said, in part because he had some of his father’s literary leanings and the field required a novelist’s attention to human motivations, struggles and alliances — as well as a novelist’s flair for narrative.
Wilson eventually chose to study religion not because religion mattered to him personally — he was raised in a secular Protestant household and says he has long been an atheist — but because it was a lens through which to look at and revivify a branch of evolution that had fallen into disrepute. When Wilson was a graduate student at Michigan State University in the 1970s, Darwinians were critical of group selection, the idea that human groups can function as single organisms the way beehives or anthills do. So he decided to become the man who rescued this discredited idea. “I thought, Wow, defending group selection — now, that would be big,” he recalled. It wasn’t until the 1990s, he said, that he realized that “religion offered an opportunity to show that group selection was right after all.”
Dawkins once called Wilson’s defense of group selection “sheer, wanton, head-in-bag perversity.” Atran, too, has been dismissive of this approach, calling it “mind blind” for essentially ignoring the role of the brain’s mental machinery. The adaptationists “cannot in principle distinguish Marxism from monotheism, ideology from religious belief,” Atran wrote. “They cannot explain why people can be more steadfast in their commitment to admittedly counterfactual and counterintuitive beliefs — that Mary is both a mother and a virgin, and God is sentient but bodiless — than to the most politically, economically or scientifically persuasive account of the way things are or should be.”
Still, for all its controversial elements, the narrative Wilson devised about group selection and the evolution of religion is clear, perhaps a legacy of his novelist father. Begin, he says, with an imaginary flock of birds. Some birds serve as sentries, scanning the horizon for predators and calling out warnings. Having a sentry is good for the group but bad for the sentry, which is doubly harmed: by keeping watch, the sentry has less time to gather food, and by issuing a warning call, it is more likely to be spotted by the predator. So in the Darwinian struggle, the birds most likely to pass on their genes are the nonsentries. How, then, could the sentry gene survive for more than a generation or two?
To explain how a self-sacrificing gene can persist, Wilson looks to the level of the group. If there are 10 sentries in one group and none in the other, 3 or 4 of the sentries might be sacrificed. But the flock with sentries will probably outlast the flock that has no early-warning system, so the other 6 or 7 sentries will survive to pass on the genes. In other words, if the whole-group advantage outweighs the cost to any individual bird of being a sentry, then the sentry gene will prevail.
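Wilson’s bookkeeping can be sketched as a toy calculation. The flock sizes and death counts below are hypothetical stand-ins for the numbers in his example, and the all-or-nothing outcome for a sentryless flock is a deliberate simplification; the sketch only shows how a gene that is costly to its carrier can still come out ahead when whole groups compete.

```python
# Toy bookkeeping for the sentry argument (all numbers hypothetical).
# Within a flock, sentries fare worse than nonsentries; between flocks,
# the flock that has sentries fares better. Which effect dominates?

def flock_after_attack(flock_size, n_sentries, sentry_deaths):
    """Return (survivors, surviving sentries) after one predator attack.

    Simplification: a flock with at least one sentry is warned in time
    and loses only some sentries; a flock with none is destroyed.
    """
    if n_sentries == 0:
        return 0, 0
    lost = min(sentry_deaths, n_sentries)
    return flock_size - lost, n_sentries - lost

# Flock A: 50 birds, 10 of them sentries; the warning costs 4 sentries.
print(flock_after_attack(50, 10, 4))  # (46, 6)

# Flock B: 50 birds, no early-warning system.
print(flock_after_attack(50, 0, 0))   # (0, 0)
```

Six sentry-gene carriers survive in the flock that paid for a warning system, while the sentryless flock contributes nothing to the next generation, so under these assumptions the group-level advantage outweighs the individual cost.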
There are costs to any individual of being religious: the time and resources spent on rituals, the psychic energy devoted to following certain injunctions, the pain of some initiation rites. But in terms of intergroup struggle, according to Wilson, the costs can be outweighed by the benefits of being in a cohesive group that out-competes the others.
There is another element here too, unique to humans because it depends on language. A person’s behavior is observed not only by those in his immediate surroundings but also by anyone who can hear about it. There might be clear costs to taking on a role analogous to the sentry bird — a person who stands up to authority, for instance, risks losing his job, going to jail or getting beaten by the police — but in humans, these local costs might be outweighed by long-distance benefits. If a particular selfless trait enhances a person’s reputation, spread through the written and spoken word, it might give him an advantage in many of life’s challenges, like finding a mate. One way that reputation is enhanced is by being ostentatiously religious.
“The study of evolution is largely the study of trade-offs,” Wilson wrote in “Darwin’s Cathedral.” It might seem disadvantageous, in terms of foraging for sustenance and safety, for someone to favor religious over rationalistic explanations that would point to where the food and danger are. But in some circumstances, he wrote, “a symbolic belief system that departs from factual reality fares better.” For the individual, it might be more adaptive to have “highly sophisticated mental modules for acquiring factual knowledge and for building symbolic belief systems” than to have only one or the other, according to Wilson. For the group, it might be that a mixture of hardheaded realists and symbolically minded visionaries is most adaptive and that “what seems to be an adversarial relationship” between theists and atheists within a community is really a division of cognitive labor that “keeps social groups as a whole on an even keel.”
Even if Wilson is right that religion enhances group fitness, the question remains: Where does God come in? Why is a religious group any different from groups for which a fitness argument is never even offered — a group of fraternity brothers, say, or Yankees fans?
Richard Sosis, an anthropologist with positions at the University of Connecticut and Hebrew University of Jerusalem, has suggested a partial answer. Like many adaptationists, Sosis focuses on the way religion might be adaptive at the individual level. But even adaptations that help an individual survive can sometimes play themselves out through the group. Consider religious rituals.
“Religious and secular rituals can both promote cooperation,” Sosis wrote in American Scientist in 2004. But religious rituals “generate greater belief and commitment” because they depend on belief rather than on proof. The rituals are “beyond the possibility of examination,” he wrote, and a commitment to them is therefore emotional rather than logical — a commitment that is, in Sosis’s view, deeper and more long-lasting.
Rituals are a way of signaling a sincere commitment to the religion’s core beliefs, thereby earning loyalty from others in the group. “By donning several layers of clothing and standing out in the midday sun,” Sosis wrote, “ultraorthodox Jewish men are signaling to others: ‘Hey! Look, I’m a haredi’ — or extremely pious — ‘Jew. If you are also a member of this group, you can trust me because why else would I be dressed like this?’ ” These “signaling” rituals can grant the individual a sense of belonging and grant the group some freedom from the constant and costly monitoring it would otherwise need to ensure that its members are loyal and committed. The rituals are harsh enough to weed out the infidels, and both the group and the individual believers benefit.
In 2003, Sosis and Bradley Ruffle of Ben Gurion University in Israel sought an explanation for why Israel’s religious communes did better on average than secular communes in the wake of the economic crash of most of the country’s kibbutzim. They based their study on a standard economic game that measures cooperation. Individuals from religious communes played the game more cooperatively, while those from secular communes tended to be more selfish. It was the men who attended synagogue daily, not the religious women or the less observant men, who showed the biggest differences. To Sosis, this suggested that what mattered most was the frequent public display of devotion. These rituals, he wrote, led to greater cooperation in the religious communes, which helped them maintain their communal structure during economic hard times.
In 1997, Stephen Jay Gould wrote an essay in Natural History that called for a truce between religion and science. “The net of science covers the empirical universe,” he wrote. “The net of religion extends over questions of moral meaning and value.” Gould was emphatic about keeping the domains separate, urging “respectful discourse” and “mutual humility.” He called the demarcation “nonoverlapping magisteria,” from the Latin magister, meaning “teacher.”
Richard Dawkins had a history of spirited arguments with Gould, with whom he disagreed about almost everything related to the timing and focus of evolution. But he reserved some of his most venomous words for nonoverlapping magisteria. “Gould carried the art of bending over backward to positively supine lengths,” he wrote in “The God Delusion.” “Why shouldn’t we comment on God, as scientists? . . . A universe with a creative superintendent would be a very different kind of universe from one without. Why is that not a scientific matter?”
The separation, other critics said, left untapped the potential richness of letting one worldview inform the other. “Even if Gould was right that there were two domains, what religion does and what science does,” says Daniel Dennett (who, despite his neo-atheist label, is not as bluntly antireligious as Dawkins and Harris are), “that doesn’t mean science can’t study what religion does. It just means science can’t do what religion does.”
The idea that religion can be studied as a natural phenomenon might seem to require an atheistic philosophy as a starting point. Not necessarily. Even some neo-atheists aren’t entirely opposed to religion. Sam Harris practices Buddhist-inspired meditation. Daniel Dennett holds an annual Christmas sing-along, complete with hymns and carols that are not only harmonically lush but explicitly pious.
And one prominent member of the byproduct camp, Justin Barrett, is an observant Christian who believes in “an all-knowing, all-powerful, perfectly good God who brought the universe into being,” as he wrote in an e-mail message. “I believe that the purpose for people is to love God and love each other.”
At first blush, Barrett’s faith might seem confusing. How does his view of God as a byproduct of our mental architecture coexist with his Christianity? Why doesn’t the byproduct theory turn him into a skeptic?
“Christian theology teaches that people were crafted by God to be in a loving relationship with him and other people,” Barrett wrote in his e-mail message. “Why wouldn’t God, then, design us in such a way as to find belief in divinity quite natural?” Having a scientific explanation for mental phenomena does not mean we should stop believing in them, he wrote. “Suppose science produces a convincing account for why I think my wife loves me — should I then stop believing that she does?”
What can be made of atheists, then? If the evolutionary view of religion is true, they have to work hard at being atheists, to resist slipping into intrinsic habits of mind that make it easier to believe than not to believe. Atran says he faces an emotional and intellectual struggle to live without God in a nonatheist world, and he suspects that is where his little superstitions come from, his passing thought about crossing his fingers during turbulence or knocking on wood just in case. It is like an atavistic theism erupting when his guard is down. The comforts and consolations of belief are alluring even to him, he says, and probably will become more so as he gets closer to the end of his life. He fights it because he is a scientist and holds the values of rationalism higher than the values of spiritualism.
This internal push and pull between the spiritual and the rational reflects what used to be called the “God of the gaps” view of religion. The presumption was that as science was able to answer more questions about the natural world, God would be invoked to answer fewer, and religion would eventually recede. Research about the evolution of religion suggests otherwise. No matter how much science can explain, it seems, the real gap that God fills is an emptiness that our big-brained mental architecture interprets as a yearning for the supernatural. The drive to satisfy that yearning, according to both adaptationists and byproduct theorists, might be an inevitable and eternal part of what Atran calls the tragedy of human cognition.
Robin Marantz Henig, a contributing writer, has written recently for the magazine about the neurobiology of lying and about obesity.
