
Michael Robbins

The crumpled tissues and loose change of the vernacular.


Contemporary American poetry has a crush on the crumpled tissues and loose change of the vernacular—idiom, platitude, cliché. Consider some recent book titles: Quick Question (John Ashbery); Nice Weather (Frederick Seidel); Just Saying (Rae Armantrout). These are examples of what the linguist Roman Jakobson called the phatic function of language—interjections, small talk—designed to check if the channel of communication is working. The Romantic-modernist revolution that opened poetry to "the language of real men" (Wordsworth) and words that people "actually say" (Pound) culminates in poems with lines like "Thanks, Ray, this is just what the doctor ordered" and "Don't come in here all bright-eyed and bushy-tailed."

Meme (Kuhl House Poets)
Susan Wheeler
University of Iowa Press, 87 pages, $16.98

These last are from Susan Wheeler's "The Maud Poems," the first of three elegiac sequences that make up her fifth collection, Meme—not that they're elegies in a traditional sense. The sequence combines stock expressions favored by Wheeler's mother—distillations of a lifetime's worth of penny wisdom borrowed from other mouths—with lyric effusions of a starkly different register: "she's already spilled the beans" is set against "an owl, recalcitrant / in its non-hooting state."

There is no condescension here—who among us, reduced to our most hackneyed formulae, would come off better? By highlighting precisely what was least individual, most communal, about her mother, Wheeler reminds us that it is our initiation into language that makes us human. Maud's idioms mark her as a person of a certain age, a particular temperament—"Well, they went bloody blue blazes through their last dollar before you could say boo"—while her daughter's idiom appropriates them for art. There is something of the impulse of Language poetry here ("Attest—ament, filament, adamant, keen"; its closest relative might be Lyn Hejinian's My Life).

The book's title refers to a pseudo-concept popularized by intellectual featherweight Richard Dawkins. A meme is supposedly the cultural analogue of a gene, transmitting cultural information, responsible for the spread of songs and catchphrases and jeggings. In Wheeler's lexicon, it stands for the idea that language is a virus, transmitted through the wasting away of generations. The transmission is imperfect; how faithfully a phrase reproduces varies. As David Shields puts it in How Literature Saved My Life, "Language is all we have to connect us, and it doesn't, not quite."

And Wheeler's got it bad: in Meme, I'm pleased to report, she's even bringing the limerick back:

I picked up a gal in a bar.
She said she'd ignore my cigar.
But when I was done
Relieving my gun
She said I was not up to par.

Wheeler has always been bouncier than most of her frenetic, like-minded post-Language peers. Alert to what the toxic glow of Fruity Pebbles tells us about capitalism, she loves a good bubblegum jingle. In certain moods she's closer to Frederick Seidel than to Susan Howe, penning cracked power ballads her parents might have danced to on a boardwalk in an alternate universe:

If I had a way to make you live with me again
—Even as a rabbit, or a wren (if all that's true)—
I wouldn't see at all that girl against the wall.
You've a right to cause me trouble now, I know.

What's troubling about these poems is their implication that language is a function of mourning, rather than the other way around—that, as Nietzsche said, "What we find words for is something already dead in our hearts." Or as Wheeler's Emersonian sensibility has it: "Want to go watch a kibitzer crumble / In the puke-green pour of the moon?" Her grief gets physical, and while it might evade adequate expression, it remains indexed to the motion of words:

I am tired. Today
I moved a book from its shelf
to the bed. The span
of its moving was vast.

A lifespan—a kind of book—is vast; it is a brief movement across a room.

Like Ashbery, an obvious influence, Wheeler kibitzes and chats while her informal colloquies crumble and deliquesce. And like Ashbery in recent years, Wheeler occasionally dips into a melancholy and pseudo-archaic register:

When will you go away,
oh piercing, piercing wind?
When will at rest I be again?
Oh sleep that will not rain on me,
oh sleep that nothing brings.

Oh, when will a face appear
that cancels full th'other?
Or will there be no more for me
of anodyne palaver?

Westron wynde, when wilt thou blow, the small raine down can raine. This kind of ventriloquism, not quite leached of irony but still evocative of less relentless pleasures, is voguish at the moment. Tom Pickard is, in my view, the master. In "Hawthorn," from Ballad of Jamie Allan, he writes:

there is a hawthorn on a hill
there is a hawthorn growing
it set its roots against the wind
the worrying wind that's blowing
its berries are red its blossom so white
I thought that it was snowing

It would be lovely to have more poems from Wheeler in this mode, or at least more that exploit her winning facility for rhyme, and perhaps fewer that till the exhausted soil of "experimental" fields:

  1. Anabaptists
    1. field field to
    2. lip on a / in a daisy
    3. pond muck
  2. Curtailing assumptions such that
    1. frog muck
    2. panopticon the hazards
    3. signage escalator mutant tut

After such escalator mutant tut, what forgiveness? I know it's bad form to say so, but fifty years after The Tennis Court Oath, this sort of thing is just possibly beginning to seem a bit rote. Certainly someone as lyrically capable—and as capable of lyrical subversion—as Wheeler needn't clutch so at the au courant. "It was the winter of the Z-pack" is startling in its sabotage of romantic anticipation. The lyric speaker of these poems gets "smashed by a Prius on a wild goose / chase" and still manages to affirm the sight of a "halo against the light."

But her openness to the possibilities of poetry regardless of tribal affiliation is one of Wheeler's virtues. "Such is the state of our poetry caught in my throat on its way / to my mouth, why not do everything," she writes toward the end of the book, before concluding: "but of course we do nothing." When third-hand experimentation is the norm, in life as in poetry, everything can look an awful lot like nothing. In these spring-loaded poems, Wheeler honors the less than everything that gets done in a life by infusing elegy with verve, anachronism with new-minted coin. "Let's make like we're not through," she writes, and it's all any of us can do—go on making things, making likenesses, as if we were not already finished, not already broken up, not already out the other side, like so many people we knew, like all the things they said.

Michael Robbins is the author of Alien vs. Predator (Penguin).

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.


John H. McWhorter

What we learn from California Indian languages.


When Europeans encountered what is today California, that area contained 78 different languages. Not "dialects" of some single "Indian tongue," or even just three or four, but 78 languages as mutually unintelligible as French, Greek, and Japanese.

California Indian Languages
Victor Golla
University of California Press, 400 pages, $90.44

Victor Golla's California Indian Languages is a lush and handy primer on what is known about all of these languages, but a volume like this is as much an elegy as a survey. Not a single name of any of these languages would ring a bell with laymen—Pomo? Miwok? Wiyot?—and this is partly because almost all of them will be extinct within another generation. As of 2010, only a few dozen of them still had living fluent speakers, most of them elderly. European languages, especially English, began intruding on Native Americans' linguistic repertoire centuries ago, and eventually were often forced upon them on pain of physical abuse in schools. Today, English is the everyday language for almost all Native Americans.

Once even a single generation grows up without living in a language, it is almost inevitable that no generation ever will again. The ability to learn a language well ossifies after the teen years, as most of us can attest from personal experience. If we do manage to wangle a certain ability in a language as adults, the chances are remote that we would use this second language with the spontaneous intimacy of parents and children at home. The generation of, say, Miwoks who only know a few words and expressions of their parents' native language—or enough to manage a very basic conversation but that's all—will not pass even this severely limited ability on to the next generation.

Yes, there are programs seeking to revivify these fascinating languages. Some groups have classes. In others, there are master-apprentice programs, in which elders teach younger people the ancestral language within a home setting. One reads about such efforts in the media rather frequently nowadays, in the wake of various books over the past twenty years calling attention to how many of the world's languages are on the brink of disappearing. By one estimate, only 600 of the current 6,000 will exist in a hundred years.

Golla's book, unintentionally, suggests that happenstance aspects of linguistic culture in indigenous California made the task of reviving these languages even harder than it might be otherwise. Ironically, one of the factors was something that many would find rather romantic in itself: Native Americans in California considered languages to be spiritually bound to the areas they were spoken in. This seemingly innocuous aspect of cosmology had a chain-reaction impact on the future of the languages.

First, within this worldview it was considered improper to speak any language but the local one when in its territory. This meant that those who traveled to another place—and few did, given this strong sense of local rootedness—made use of interpreters rather than learning the other language themselves. Hence California Native American languages were rarely learned by adults.

As it happens, when adults learn a language in large numbers and there is no written standard or educational system enshrining its original form, it becomes less complex. English lacks the three genders of its sister language German because large numbers of Vikings invaded and settled the island starting in the 8th century and married local women. They exposed children to their approximate Old English to such an extent that this kind of English became the norm. I am writing, then, in a language descended from "bad" Old English.

That this kind of thing happened so rarely in indigenous California had a secondary effect: the languages tend to be more complex than anything an English speaker would imagine. Taking lessons in Yokuts, spoken in the southern Central Valley, you would learn that the past tense ending is -ish. So: pichiw is "grab," and pichiw-ish is "grabbed." But it turns out that grab is unusual in Yokuts: not just some but most Yokuts verbs are irregular. Add -ish to ushu "steal," and it morphs into osh-shu, with a new o instead of u and a double sh. Add -ish to toyokh "to doctor" and it's tuyikh-shi. You have to know precisely how each verb gets deformed—and that's just two verbs.

In Salinan over on the coast, there's no regular way to make a plural: every noun resembles the handful in English like men and geese. House is tam, houses: temhal. One dog: khuch. More of them: khosten—and this is how it is for all nouns. All of the California languages are like this in various ways. A grammatical description of any one of them is, in its way, as awesome as a Gothic cathedral.

But this means that, past childhood, learning these languages is really tough. English speakers find it hard enough to get past Spanish putting adjectives after nouns and marking its nouns with gender. But when we get to languages where instead of just saying go or put you have to also append one of several dozen suffixes indexing exactly what the goer or putter was like and the material nature of what was gone or put—e.g., in Karuk, putting on a glove requires a suffix marking that what happened was "in through a tubular space"—we are faced with a task few busy adults will be in a position to master.

Many years ago I was assigned to spend a few weeks helping speakers of one of the varieties of Pomo recover their language. We had a good time. However, here was a language in which to say "She didn't stay very long and came back," you have to phrase it as, roughly, "Long time it wasn't, she sat and back here-went," putting the verb at the end instead of in the middle and also mouthing sounds unfamiliar to speakers of English or even Spanish (or Russian or Chinese!). I couldn't help thinking that for them—or me—to actually breathe life into this language now surviving only on the page was not going to happen. And they knew it. One told me that she was just hoping to be able to know enough of the language that her descendants could feel a connection to the past and their place in the world.

This struck me as a healthy and achievable goal. Books like Golla's, demonstrating the amazing complexity of these languages, also show that we must alter our sense of what it is to "know" a language. When someone says they play the piano, we do not assume they play like Horowitz. In the same way, in a new world there will exist languages that thrive as abbreviations of what they once were, useable by modern adults who seek a cultural signpost rather than a daily vehicle of communication. Anecdotally, this is already effectively the case with revived languages such as Irish Gaelic and Maori. Their new speakers, using the languages in cultural activities and even in the media to an extent, nevertheless use English much more. They are rarely speaking the language in as full a form as their ancestors did. Yet no one would suppose that this invalidates the effort.

It is unlikely that 6,000 languages will continue to be passed down in fuller form than this, and they will often survive in an even more restricted sense: flash cards, expressions, songs, perhaps some strictly "101" grammar. The difficulty of mastering languages beyond childhood is but one reason why. Amidst globalization, a few widely spoken languages dominate in print, media, and popular music and are necessary to economic success. In this, they inevitably come to be associated with status and sophistication.

The educated Westerner, and especially the anthropologist or linguist, cherishes the indigenous as "authentic" and as a token of diversity in its modern definition. These are laudable perspectives in many ways but are not always shared by those to whom an indigenous language is simply the one they learned on their mother's knee, as ordinary as English is to us. Such a person may not feel especially authentic or diverse to themselves. Often they prioritize increasing their income and embracing the wider world—especially for their children.

The flourishing of 6,000 languages points us back to a much earlier stage of humankind in which all people were distributed in small groups like those in indigenous California, where the basic unit was the "tribelet" of a few hundred people. In the modern world, for better or for worse—and quite often worse—people are coming together. The surprise would be if the number of languages did not shrink. However, if most of the world's languages cannot continue to be spoken, surely we must use the advantage of writing to document what once was.

The fashion is to justify this on the basis of the languages recording the unique worldviews of their speakers. But that notion is more fraught than often supposed. Say we celebrate Karuk for showing that its speakers were especially sensitive to things like tubular insertion. Is the American white kid somewhere in Indiana really less attuned to the snug feeling of getting his fingers into gloves than a Karuk kid in California once was, even if English doesn't have a suffix with that meaning?

Rather, languages randomly mark some things more than others. Call California Native Americans fascinatingly connected to space and direction, but then be prepared to call them blind to the difference between "the hill" and just "a hill"—most Native American languages leave that particular distinction largely to context. We can safely assume Native Americans felt that nuance as deeply as we do even though their grammars do not happen to mark it explicitly with words or suffixes. Just as obviously, for Yokuts to have almost no regular verbs says nothing about how its speakers process existence.

Dying languages should be documented not as psychological templates but as awesomely alternate randomnesses from what European languages happen to be. Golla's book is valuable also, then, in its diligent chronicle of the researchers over the centuries who have dedicated themselves to the task of simply getting on paper how these languages work. One of the most resonant photos in the book—from almost a hundred years ago—is of founding California language scholar Alfred Kroeber, longtime anthropology professor at the University of California at Berkeley, who got down the basic structure of dozens of California languages during his career. The snapshot, unusually for an era in which smiling for photographs was not yet common coin, captures a man grinning in the great outdoors—a man who clearly relished his mission.

And quite a mission it was. A language is a hugely intricate business. First there is the basic grammatical machinery of the kind described above—but then there are the wrinkles. In English, we say one fish, two fish, okay—so fish is irregular: no two fishes. But then what about the Catholic Feast of the Seven Fishes? Try explaining that to a foreigner—say, a Japanese acquaintance of mine who, despite her very good English, once described an obese man whose "meat" was hanging over the edges of a chair. We natives would say "flesh"—but why? "Meat" makes perfect sense: that we happen to prefer "flesh" or "flab" is just serendipity. You can say I'm frying some eggs, or I'm frying up some eggs. They don't mean the same thing—the version with up implies that the eggs will be ready for you to eat soon. But if you were teaching someone English, how likely would you be to get to that nuance?

To speak a language in full is to have full control over little things like that, and it's the rare outsider whose grammatical research can get down to details this fine. Even when well documented with a grammatical description and a dictionary, a great deal of what a language was has still been lost, just as a cat's skeleton cannot tell us that cats hold their tails in the air and curl up when they sleep.

For reasons of this kind, some insist that all efforts be made to keep such languages actually spoken, as "living things" rather than archival displays. However, Golla's book gives ample coverage to revival efforts, and the sad fact is that there is not a single report of a language that was once dying but has now been successfully passed on to a new generation. For all but a few lucky cases where happenstance has kept the language alive to the present day, documentation may be the best we can do.

In this light something bears mentioning that linguists traditionally step around. It is often implied that a great diversity of languages being spoken in the world is beneficial in the same way that genetic diversity is within a population. This, however, is more often stated than demonstrated. If there had only ever been one language among all of the world's peoples, and all people could converse wherever they went, who would have deemed that a disadvantage? Who would have regretted that there weren't thousands of mutually unintelligible languages, or wished that only some people knew a second language alongside the universal one?

That is, amidst the downsides of language loss—including the fact that most of the languages that die will be the smaller, indigenous ones—there are some benefits to there being fewer. A statement like that is understandably difficult to embrace for people watching generations of their own people grow up without something as central to cultural identity as their own language, and for scholars and activists equally dismayed at the loss. However, at least we have the technology to get on record a good deal of what the lost languages were like, and California Indian Languages is a perfect introduction to this record as it currently exists for 78 vastly different ways of talking.

John H. McWhorter teaches at Columbia University and is a contributing editor of The New Republic. He is the author most recently of What Language Is (Gotham).

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.


Alister Chapman

Cricket and the aftermath of colonialism.


In 1989, the Derbyshire county cricket team played a local high school. Think Red Sox against Greenfield High. The professionals batted first, scoring a formidable total in a game where each side would bat one inning. (In cricket, runs are more easily come by than in baseball, and a team bats until all but one of the players are out. Scores of 300 or more are common.) When the school came out to bat, all eyes were on Derbyshire’s Michael Holding, a fast bowler who played for the world-beating West Indies. In a game where there is no equivalent of the pitcher’s mound, fast bowlers will run in before they bowl, gathering pace for thirty yards or more before hurling the ball towards the heavily padded batsman. That the ball typically bounces before it reaches its target makes things even more interesting, with bowlers like Holding able to bowl the ball short and make it fly up toward their opponent’s head. Holding had mercy on the schoolboys, however, trotting in and sending down very playable balls.

The Legend of Pradeep Mathew
Shehan Karunatilaka
Graywolf Press, 414 pages, $18.00

It wasn’t long before Derbyshire had dismissed the school’s best batsmen, and the tail-enders were coming in. Last of all came the youngest and smallest of the lot. He was in the team as a bowler, but one very different from Michael Holding. For while Holding used speed and power to beat the batsmen, the boy used guile. He was what is known as a spin bowler. Spin bowlers take just a few steps before they release the ball, but a variety of grips on the ball and a flick of the wrist can make for surprising results once the ball is in the air and especially after it has hit the ground. Spin bowlers are cricket’s artists.

On that particular day, this young spin bowler had claimed the most famous scalp of his career: Michael Holding. But when the boy came in to bat, Holding saw who it was and decided to play with him. He walked all the way to the back fence and steamed in to bowl. The ball he released was, in the end, just as gentle as those he had been serving up all afternoon. But I doubt it was as much fun for the boy as it was for Holding.

Those who were there that day could see precisely what was going on. But, as in baseball, a lot of what happens in cricket happens so far from the crowd that they have little idea. The subtleties of spin are almost always lost. People can see that there is a contest between the one who throws the ball and the one who has to hit it, but that’s about all. Cricket is a different kind of spectator sport from, say, basketball. The game is important, but the experience of being there just as much so.

In England, cricket is a game for watching on a lazy afternoon. You can turn and chat to your neighbor without worrying that you’ll miss too much. Or you can sit quietly and watch the players run back and forth, white on green, and absorb the atmosphere. Nostalgia comes easily, with memories of the peaceful green spaces of youth. Prime Minister John Major once mobilized anxiety about European integration by painting a picture of an unchanging Britain of “long shadows on county grounds [and] warm beer.”

But where Michael Holding grew up, things were different. Cricket had been introduced to the Caribbean by English colonizers, who cast themselves as gentlemen but ran slave plantations. For the black population of Jamaica, Barbados, Trinidad, and Tobago, cricket offered further English discipline and English fair play. The play, however, was not always fair. Colonial clubs operated color bars. International teams had quotas for whites. With the dawn of international cricket, rules were made in England and sometimes for England. In Kingston, cricketing memories were sour as well as sweet. And as cricket spread throughout Britain’s empire it became a tool of local discrimination too, with princes in India lording it over the Indian game just as the English elites did back in England.

Eventually, the colonials beat the conquerors. First were the Australians: the most famous trophy in cricket is a tiny urn containing the ashes from a wicket ceremonially burned after the Australians won in London in 1882. But then it was the turn of India, South Africa, Pakistan, the West Indies, New Zealand, and Sri Lanka. Cricket became a matter of national pride. The rivalry between Pakistan and India is immense. In 1990, the sport caused ethnic and political tension in England when a government minister suggested a “cricket test” of national loyalty: immigrants who continued to support the team from their country of origin rather than England were to be deemed insufficiently British.

The global home of cricket is now the Indian subcontinent. London’s Guardian reported that a billion people watched the India-Pakistan semi-final in the 2011 World Cup. Tens of millions watch the Indian Premier League, which has adopted a shorter form of the game in which matches last less than three hours. The crowds are not sipping tea and listening to birdsong. Advertisers compete to sponsor teams, with logos emblazoned on multi-colored shirts. Players make more per week than in any league except the NBA.

It is appropriate, then, that the latest important contribution to the literature on cricket comes from this part of the world. Shehan Karunatilaka is a Sri Lankan living in Singapore. The Legend of Pradeep Mathew, his first book, tells the story of a journalist’s desire to write a book on Pradeep Mathew, a fictional Sri Lankan spin bowler. The journalist, W. G. Karunasena, is an alcoholic. His work on the book is a race against liver failure.

Karunasena saw Mathew’s brilliance while reporting on Sri Lankan cricket, but was puzzled by how few games he had played for his country—and by his mysterious disappearance. The book is a quest to uncover the mystery. It is not a happy story. Mathew left the game and went underground after extorting money from a corrupt official—a nod in the direction of the gambling that has tarnished the image of the game, not least in the Indian subcontinent.

Just as sad is the ethnic prejudice that runs through the book, with Mathew facing opposition as a Tamil from the Sinhalese who dominate cricket in Sri Lanka. Karunatilaka highlights England’s sins—”England will spend centuries working off their colonial sins by performing miserably at sport”—but Sri Lankans don’t come off much better. Tamil terrorism forms part of the backdrop for the story.

Yet the book is also filled with humor and warmth. Karunasena’s friends are kind, quirky, and often witty. His wife is devoted, and even his estranged son returns home. Beauty comes from cricket. Karunasena loves his family and friends, but sport is less complicated and offers more moments of perfection and rapture. In a crude paragraph early in the book, Karunatilaka tells his readers that if they have never seen a cricket match or have and wish they hadn’t, “then this book is for you.” But people outside the cricketing commonwealth will find it hard to put the pieces together. References to Botham, Boycott, Bradman, Khan, Muralitharan, Tendulkar, and Warne will be lost on readers who didn’t grow up spending happy hours watching the game on TV. Anyone who enjoys sports, however, will be able to appreciate Karunatilaka’s delighted descriptions and diagrams of spin bowling. The floater, leg break, googly, flipper, armball, lissa, carrom flick, and (most special of all) the double bounce ball are all here, explained with awe and wonder. Mathew can behave like an idiot, but he bowls like a god.

And that, for Karunasena at least, is life. Answering the question of whether sport has any use or value, he says:

Of course there is little point to sports. But, at the risk of depressing you, let me add two more cents. There is little point to anything. In a thousand years, grass will have grown over all our cities. Nothing of anything will matter.

Left-arm spinners cannot unclog your drains, teach your children or cure disease. But once in a while, the very best of them will bowl a ball that will bring an entire nation to its feet. There may be no practical use in that, but there is most certainly value.

Or, as the dying journalist puts it near the end, “Unlike life, sport matters.” Karunasena becomes a picture of human existence. He gives up drink for a while, but then gives in. His book is unfinished; the mystery is solved only after his death.

Many will enjoy the rich picture of modern Sri Lanka that emerges in The Legend of Pradeep Mathew, despite its sad anthropology. But if you want to learn about cricket, you might do better to pick up the Duke University Press edition of C. L. R. James’ 1963 classic Beyond a Boundary, which comes with a three-page explanation of the game at the beginning. James was raised in Trinidad, where he experienced both the joy and the injustice of cricket. He excelled with ball and books, moving to England where he became a cricket correspondent for the Guardian and a left-wing social critic. Beyond a Boundary tells his story and that of West Indian cricket. There is much to lament. But there is hope, too, the final page relating the story of a quarter of a million Australians taking to the streets to bid farewell to a touring West Indian team. The vision of cricket as a force for international good was warped but not all wrong. The dying Karunasena recognized that, too.

Alister Chapman, associate professor of history at Westmont College, is the author of Godly Ambition: John Stott and the Evangelical Movement (Oxford Univ. Press).

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.


Alan Jacobs

What is a “graphic novel”?


Most of my fellow teachers of literature know that students often think of almost any book-length narrative work as a "novel." A paper might begin, "Augustine writes in his novel The Confessions …" or "Homer's Iliad is a novel that …." This is not a major intellectual failing, of course, but it should remind us of the extent to which the novel has become so dominant a genre that common readers think of it simply as narrative, or lengthy narrative, itself. It should also be a reminder to teachers that time devoted to explaining the history and uses of literary genre is time well spent.

This particular inexactitude happens in non-academic settings too, and indeed a new version of it has recently arisen. Stephen Weiner's Faster Than a Speeding Bullet: The Rise of the Graphic Novel refers to Art Spiegelman's Maus—an account of the author's father's experience in Auschwitz—as a graphic novel. Similarly, we might consider Dotter of Her Father's Eyes, a recent book by Mary M. Talbot and Bryan Talbot. On Amazon.com you may find it in the "Graphic Novel" category; its Wikipedia page, at least as I write, begins "Dotter of Her Father's Eyes is a 2012 graphic novel"—but then goes on to add, in the next sentence, "It is part memoir, and part biography of Lucia Joyce, daughter of modernist writer James Joyce." That the second sentence is not seen to contradict the first one reminds us once more how the word "novel" is commonly used; but it also reveals the limitations of our descriptive and critical vocabulary for this new form. The genres of graphic narrative proliferate beyond our ability to account for them.

Major comic artists like Will Eisner—in his Comics and Sequential Art (1985) and Graphic Storytelling and Visual Narrative (1996)—and Scott McCloud—in his Understanding Comics (1993) and Making Comics (2006)—have done yeoman work in explaining, for a wide readership but especially for would-be artists, the visual languages of graphic storytelling. Those are superb and, for anyone seriously interested in the subject, indispensable books. There is also a burgeoning academic and critical literature on graphic storytelling, as exemplified in The Comics Studies Reader (2009), edited by Jeet Heer and Kent Worcester. But we still struggle, I think, to know how best to write about graphic narrative—especially in that odd genre called the "book review."

A reviewer will want to say something about the shape of the story: its plot and structure, the way it organizes time and event. The adequacy and appropriateness of the language should be considered, as should those of the artwork. By "appropriateness" I mean, to use an old word, decorum, fitness: Do the language and the images fit the shape of the story? If they do not, does that indecorum seem meant? Is the resulting tension productive, or not? And then the reviewer should ask how the language and artwork interact. (These questions will vary in inflection and emphasis depending on whether the narrative is the work of a single artist—as in the case of William Blake's illuminated poems, or the recent work of Alison Bechdel or Chris Ware—or the product of collaboration, as is the norm in the world of "comics" narrowly defined.)

The graphic narrative is, then, a device with many moving parts. Randall Jarrell once defined the novel as "a prose narrative of some length that has something wrong with it"—not so far from the implicit definition of my students—and a graphic narrative might be even more naturally inclined to error. And everything I have said so far applies to fictional narratives: if the tale graphically told is historical or biographical, as is increasingly common these days, then one must also ask whether it is faithful to what we know, from elsewhere, of the story it tells. Yet another way for a book to have something wrong with it.

All of this throat-clearing brings us back to Dotter of Her Father's Eyes. It is a double story, whose protagonists are Lucia Joyce, daughter of James Joyce, and Mary M. Talbot herself, in her early years as Mary Atherton, daughter of James S. Atherton, whose The Books at the Wake: A Study of Literary Allusions in James Joyce's Finnegans Wake (1959) was one of the first major studies of that most daunting of masterpieces. (Dotter of Her Father's Eyes takes its title from a phrase in the Wake.)

The first thing that must be said about Dotter is that it's one of the most visually rich and sophisticated graphic narratives I have ever seen. Bryan Talbot renders the scenes from Mary Atherton's childhood in sepia tones, though patches of bright red or green are used occasionally to heighten certain moments; the life of the Joyce family is rendered in muted and mostly dark blues; and Mary's emergence into adulthood from the oppressive authority of her father is signaled by the use of fully-colored panels. Typewriter-style typefaces appear in conjunction with, often in contrast to, the familiar style of comic lettering; and scattered through the book are photographs, chiefly of documents pertaining to James Atherton. A particularly interesting example comes on the last page of the narrative: a weathered card on which is typed the chorus of the old ballad "Finnegan's Wake" lies atop Atherton's University of Liverpool registration form, which in turn covers much of the last page of Finnegans Wake, which begins: "sad and weary I go back to you, my cold father, my cold mad father, my cold mad feary father." Layers upon layers, both literally and metaphorically.

The James Atherton presented here was never mad, but he was often angry: he is most present in his outbursts, verbally and sometimes physically violent, and otherwise in the determination with which he cut himself off from his family in order to work without interruption. It is clear that Mary Talbot found her father "feary" indeed, and her difficulties with him, and her pleasure in the rare moments of his kindness, make up her whole account: her mother appears here only as a kind vagueness. In the parallel story, James Joyce is never angry but is often distant: he seems puzzled by his daughter on the rare occasions when he drifts into her life, typically to adjudicate hostilities between Lucia and her mother Nora. Nora is the story's chief villain, constantly mocking and belittling her daughter, while the great writer is comparatively kind and gentle—but utterly unsupportive of Lucia's love for dance: "Lucia, Lucia. Be content. It's enough if a woman can write a letter and carry an umbrella gracefully."

This is a plausible portrait of Joyce, who seems to have married Nora Barnacle at least in part because of her ordinariness, her lack of interest in his own intellectual pursuits, and who was not above making fun, in Ulysses, of Molly Bloom's mental shortcomings. ("She had interrogated constantly at varying intervals as to the correct method of writing the capital initial of the name of a city in Canada, Quebec …. In calculating the addenda of bills she frequently had recourse to digital aid.") What is less plausible is the dominant portrait in Dotter: Lucia Joyce as a seemingly normal and healthy young woman who is legitimately frustrated by one relatively minor issue—romantic rejection by her father's secretary, the young Samuel Beckett—and one major one—her parents' refusal to support her calling to be a dancer. Her family's decision to place her in a mental institution seems, then, not only cruel but utterly inexplicable.

Bryan Talbot draws Lucia—it is hard to overstress the importance of this—so that she never looks like a seriously disturbed person; even her anger seems moderate, until the very end, and any extremity of response is presented as fully understandable in light of her family's treatment of her. The text and the imagery of this book are at one in pressing us to believe that Lucia was simply a gifted young woman whose parents, one in hostility and one in indifference, frustrated her career and then, when that angered her, allowed her brother to toss her into a mental institution, where she remained until her death in 1982. This could be a true story but is on the face of it deeply unlikely, and the book needs to do more to justify its interpretation, since it portrays the whole Joyce family as monstrous.

The historical record that we possess suggests a more complicated and more interesting story. Lucia grew up in chaotic circumstances, with frequent moves to dodge creditors that led the family on a constant odyssey across Europe and through different social, economic, and linguistic environments. Precisely how this affected her, and what vulnerabilities were part of her makeup from birth, we simply don't know, but her behavior seems always to have been odd. As a child she was prone to long periods of staring off into space, and as a young adult was mercurial at best: jumping impulsively from one style of dance to another and from school to school to school, repeatedly snipping the telephone lines when she felt her father was getting too many calls and therefore too much attention, and, finally, throwing a chair at her mother—the event that precipitated her brother Giorgio's decision to institutionalize her.

For all his indifference to Lucia's love of dancing, for which he was surely culpable, Joyce never thought that she was anything other than an extraordinary person: "Whatever spark or gift I possess has been transmitted to Lucia and it has kindled a fire in her brain." He knew that she was troubled, but refused to believe that she was mentally ill—though once, when he heard that she had attended Mass, he exclaimed, "Now I know she is mad." Given his own calling, he was especially sensitive to what he discerned as a peculiar linguistic power in her: "She is a fantastic being, speaking a curious abbreviated language of her own," he wrote to his patron and publisher Harriet Weaver. "I understand it or most of it." To another correspondent he wrote, "Lucia has no trust in anyone except me, and she thinks nobody else understands a word of what she says." And he even trusted her own self-understanding, as he told Weaver: "Maybe I am an idiot but I attach the greatest importance to what Lucia says when she is talking about herself. Her intuitions are amazing."

Carol Loeb Shloss, in her 2003 biography of Lucia, portrays Joyce as effectively a parasite, sucking the linguistic life out of Lucia and claiming it as his own in Finnegans Wake. (Shloss sees even Lucia's dancing—visitors to the Joyce household noted that she would practice in the same room where her father was writing—as providing rhythmical inspiration for his intricate and fanciful book.) This account has been called into serious question and makes Joyce scarcely less monstrous than he would be if he had allowed his daughter to be institutionalized for no reason stronger than a temper tantrum. But as an explanation it draws clearly on what we know, in that it shows a father deeply involved in his daughter's life and acknowledges that Lucia was anything but the cheerfully normal person we see in Dotter of Her Father's Eyes.

It's hard not to feel that the Talbots' portrayal of the Joyce family is shaped to bring it closer to the life of the Atherton family. James Joyce appears here as a distant, bemused half-presence—a little like James Atherton minus the terrible temper—but in real life was immensely and irresistibly charming to family and friends alike, though wildly erratic. One cannot doubt that his work on Finnegans Wake led him to neglect his family, and that Lucia resented this; but when he was present to her, his love and concern were evident, and he tirelessly sought to get her the best possible treatment. One of his friends estimated that in the last few years of Joyce's life three-quarters of his income went to her care, and he wrote detailed accounts of her condition for her therapists and doctors. He seems even to have thought of the Wake as a kind of counterspell to undo Lucia's madness, if madness it was: patting the manuscript of the work in progress, he once said, "Sometimes I tell myself that when I leave this dark night, she too will be cured."

In the end, Dotter of Her Father's Eyes tells with extraordinary visual sophistication a tale that, structurally and verbally, doesn't quite hold together. That Mary Talbot's father was a Joycean; that he was a difficult and even abusive man; that he sometimes used Joycean language when speaking to her (borrowing a phrase from A Portrait of the Artist as a Young Man, he called her "baby tuckoo" when she was small); that she too studied dance for a while—these correspondences, while they clearly created in Talbot's mind a strong link with Lucia Joyce, do not seem to me strong enough to make the parallel tales meaningfully parallel. It's a highly promising experiment in the visual presentation of intertwined life stories, and as such may bear rich fruit in the future; but its simplification of the immensely strange and convoluted relationship between James Joyce and his gifted but wounded daughter is unfortunate.

In 1936, after Lucia had begun her long circuit of moving from hospital to hospital, James Joyce panicked at the thought of what might happen to his daughter if the coming war were to separate them. He wrote to friends to ask for their help—any kind of help: "If you were where she is and felt as she must, you would perhaps feel some hope if you felt that you were neither abandoned nor forgotten." (One word echoes repeatedly through his late letters about Lucia: "abandoned.") On the penultimate page of Finnegans Wake, a few lines before the passage about "my cold mad feary father," there are lines that some have read as words of hope for poor lost Lucia: "How glad you'll be I waked you! How well you'll feel! For ever after." But Lucia herself, in 1941, when she was told that her father had just died, replied, "That imbecile. What is he doing under the earth? When will he decide to leave? He's watching you all the time."

Alan Jacobs is professor of English at Wheaton College. His edition of Auden's For the Time Being is just out from Princeton University Press. He is the author most recently of The Pleasures of Reading in an Age of Distraction (Oxford Univ. Press) and a brief sequel to that book, published as a Kindle Single: Reverting to Type: A Reader's Story.

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.


Philip Jenkins

A neglected aspect of the “other Inkling.”


Can you imagine suddenly discovering a trove of major new works by one of the greatest Christian authors of the last century, a worthy companion of C. S. Lewis and T. S. Eliot? In a sense, we actually can do this, and we don’t even need to go excavating for manuscripts lost in an attic or mis-catalogued in a university archive. The author in question is Charles Williams (1886-1945), well-known to many readers as an integral member of Oxford’s Inklings group, and a writer venerated by Lewis himself. (Tolkien was more dubious.) T. S. Eliot offered high praise to both the work and the man. Among other admirers, W. H. Auden saw Williams as a modern-day Anglican saint, to whom he gave much of the credit for his own conversion, while Rowan Williams has termed that earlier Williams “a deeply serious critic, a poet unafraid of major risks, and a theologian of rare creativity.” Some thoroughly secular critics have joined the chorus as well.

Williams exercised his influence through his seven great novels, his criticism, and his overtly theological writings—although theology to some degree informed everything he ever wrote. Some, including myself, care passionately about his poetry (I said “care about,” not “understand”). Amazingly, though, given his enduring reputation, Williams’ plays remain all but unknown and uncited, even by those who cherish his other work. Now, these plays are not “lost” in any Dead Sea Scroll sense: as recently as 2006, Regent College Publishing reissued his Collected Plays. But I have still heard erudite scholars who themselves advocate a Williams revival ask, seriously, “He wrote plays?” Indeed he did, and they amply repay reading, for their spiritual content as much as for their innovative dramatic qualities. Two at least—Thomas Cranmer of Canterbury and The House of the Octopus—demand recognition as modern Christian classics, and others are plausible candidates.

As a dramatist, Williams was a late bloomer. Although he was writing plays from his thirties, most were forgettable ephemera, and his most ambitious work suffered from his desire to reproduce Jacobean styles. In 1936, though, as Williams turned fifty, his play Thomas Cranmer of Canterbury was produced at the Canterbury Festival. This setting might have daunted a lesser artist, as the previous year’s main piece was Eliot’s Murder in the Cathedral, which raised astronomically high expectations. Thomas Cranmer, though, did not disappoint. Cranmer was after all a fascinating and complex figure, the guiding force in the Tudor Reformation of the English church and a founding father of Anglicanism. Yet when the Catholic Queen Mary came to the throne in 1553, Cranmer repeatedly showed himself willing to compromise with the new order. He signed multiple denials of Protestant doctrine before reasserting his principles, recanting the recantations, on the very day of his martyrdom. Famously, he thrust his hand into the fire moments before he was executed, condemning the instrument by which he had betrayed his beliefs.

Williams’ play is a superb retelling of the history of the English Reformation, but most of the interest focuses on Cranmer himself. Williams studies the journey of a soul en route to salvation despite every effort it can make to resist that outcome—what he calls “the hounding of a man into salvation.” This powerfully reflects the belief in the working of Grace, of the Holy Spirit, that is such a keystone of Williams’ theological framework.

We follow Cranmer along his way through the acerbic commentary of the Skeleton, Figura Rerum, one of the mysterious characters Williams repeatedly used to reveal the inner spiritual aspects of the drama. Although they appear on stage, they normally remain unseen by most or all of the human characters. But the Skeleton is much more than a chorus or commentary: rather, he represents both God’s plan and Cranmer’s destiny, “the delator of all things to their truth.” He is also a Christ-figure, who speaks in mordant and troubling adaptations of Jesus’ words from the Gospel of John: “You believe in God; believe also in me; I am the Judas who betrays men to God.” He is “Christ’s back,” and anything but a Comforter. The Skeleton, moreover, is given some of Williams’ finest poetry, lines that stir a vague recognition until you realize the intimate parallels to Eliot’s yet-unwritten Four Quartets.

Despite Cranmer’s timid and bookish nature, he is led to a courage that will mean both martyrdom and salvation, and will moreover advance God’s purpose in history. Ultimately, having lost everything and all hope, he throws himself on God’s will (in one of Williams’ many echoes of Kierkegaard). “Where is my God?” asks a despairing Cranmer. The Skeleton replies,

Where is your God?
When you have lost him at last you shall come into God.

When time and space withdraw, there is nothing left
But yourself and I; lose yourself, there is only I.

But even at this moment of total surrender, the play offers no easy solutions, and no simple hagiography. In the last moments, with death imminent, Cranmer even agrees to the Skeleton’s comment that “If the Pope had bid you live, you should have served him.” If he is to be a martyr, that decision is wholly in God’s hands: “Heaven is gracious / but few can draw safe deductions on its method.”

The success of Thomas Cranmer marked a shift in Williams’ interests to drama. Over the next nine years, up to his death in 1945, he would publish only two novels, as against eight other dramas that, together with Cranmer, would make up his Collected Plays. Like his friend Christopher Fry and other English dramatists of the age, Williams sought to revive older forms, including mystery plays and pageants, and some of these works are among his most accessible. Seed of Adam and The House by the Stable are Nativity plays, but as far removed from any standard church productions as we might expect given the author. In Seed, Adam also becomes Augustus, and the Three Kings represent different temptations to which fallen humanity has succumbed. In the pageant Judgement at Chelmsford, episodes from the span of Christian history provide a context for one very new and thoroughly modern diocese largely composed of suburban and industrial regions, and already (in 1939) facing the prospect of destruction by bombing. Yet Williams unites ancient and modern, placing Chelmsford firmly in the Christian story alongside Jerusalem and Antioch: all times are one before the Cross.

But if all the plays are worth rediscovering, it is his very last—The House of the Octopus (1945), a theologically daring story of an encounter with absolute evil—that best makes the case for his stature as a first-class Christian writer. Remarkably too, this play gains enormously in hindsight because of its exploration of ideas that seemed marginal to Christian thought at the time, but which have become pressing in an age of global church expansion.

The House of the Octopus offers a highly developed statement of Williams’ elaborate theological system, which we can trace especially through the earlier novels. His key beliefs involved what he termed substitution and exchange, in a sense that went well beyond the customary interpretation of Christ’s atonement. For Williams, human lives are so intertwined that one person can and must bear the burdens of others. We must, he thought, share mystically in one another’s lives in a way that reflects the different persons of the Trinity: they participate in what Williams called Co-inherence. Moreover, this mutual sharing and participation extends across Time—to which God is not subject—and after death. In his novel Descent Into Hell (1937), a woman agrees to bear the sufferings and terrors of a 16th-century ancestor as he faced martyrdom in the Protestant cause; he in turn perceives that loving aid as the voice of a divine messenger—and he might well be right in his understanding.

Stricter Protestants found Williams’ vision of the overlapping worlds of living and dead unacceptably Catholic, if not medieval, and accused him of heresy. Wasn’t he teaching a doctrine of Purgatory? Williams was perhaps taking to extremes the Catholic/Anglican doctrine of the communion of saints, but he was guided above all by one scriptural principle, expounded in Romans 8: the denial that anything in time and space can separate us from God’s love.

If some of Williams’ visionary ideas fitted poorly in the England of his day, they could still resonate in newer churches not grounded in Western traditions. House of the Octopus, for example, used a non-European setting to suggest how familiar dogmas might be reimagined in other cultures. The play is set on a Pacific island during an invasion by the Satanic empire of P’o-l’u. Although the situation strongly recalls the Japanese invasion of Western-ruled territories in World War II, and the resulting mass slaughter of Christian missionaries, Williams never intended to identify P’o-l’u with any earthly state. This is a spiritual drama, and the leading character is Lingua Coeli, “Heaven’s Tongue,” or the Flame, a representation of the Holy Spirit, who remains invisible to most of the characters throughout the play.

When alien forces occupy the island, they immediately demand the submission of the native people, who have recently become Christian converts. Terrified, one young woman, Alayu, denies her Christian faith and agrees to serve instead as “the lowest slave of P’o-l’u,” but even that apostasy does not save her life. And this is where the theological issue becomes acute. The Western missionary priest, Anthony, is convinced that Alayu’s last-minute denial has damned her eternally. The local people, however, realize that salvation absolutely has to be communal as well as individual:

We in these isles
Live in our people—no man’s life his own—
From birth and initiation. When our salvation
Came to us, it showed us no new mode—
Sir, dare you say so—of living to ourselves.
The Church is not many but the life of many
In ways of relation.

Wiser than Fr. Anthony, they also know that death itself is a permeable barrier, and so is the seemingly rigid structure of Time itself. As a native deacon asks, could not Alayu’s original baptism have swallowed up her later sin?

If God is outside Time, is it so certain
That we know which moments of time count with him,
And how?

Alayu is saved after her death, through the support of her people and the direct intervention of the Flame. Formerly an apostate, the dead Alayu becomes a saint interceding for the living. As the native believers tell the horrified missionary, “Her blood has mothered us in the Faith, as yours fathered.” When Anthony in turn faces his own torment and martyrdom—and the danger of apostasy—it is Alayu who will give him strength: “He will die your death and you fear his fright.” Fr. Anthony learns that the Spirit’s power is far larger than he has ever dared believe. And he also realizes how deceived he was to think he could have kept his status as paternalistic ruler of his native church indefinitely, among believers who had at least as much direct access to the Spirit as he did himself.

Although Williams was claiming no special knowledge of newer churches and missions, recent developments have given his work a strongly contemporary feel. The ideas he was exploring in 1945 have become influential in those rising churches, especially the emphasis on the power of ancestors and the utterly communal nature of belief. In such settings, the ancient doctrine of the communion of saints, the chain binding living and dead, acquires a whole new relevance, and a new set of challenges for churches that thought these issues settled long since.

Like his other writings, Charles Williams’ plays offer plenty to debate and to argue with—but his ideas are not lightly dismissed. Some of us have been wrestling with them for the better part of a lifetime.

Philip Jenkins is Distinguished Professor of History at Baylor University’s Institute for Studies of Religion. He is the author most recently of Laying Down the Sword: Why We Can’t Ignore the Bible’s Violent Verses (HarperOne).

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.


Naomi Schaefer Riley

Embedded reporting from the Millennial front.


In The World Until Yesterday, Jared Diamond notes that some traditional societies let small children play with and even suck on sharp knives. Diamond is not saying we should “emulate all child-rearing practices of hunter-gatherers.” (That’s good to know.) But maybe kids would learn some valuable lessons if we gave them a little more responsibility.


Twentysomething: Why Do Young Adults Seem Stuck?

Robin Marantz Henig (Author), Samantha Henig (Author)

304 pages

$10.33

Which raises an interesting question: At what age should kids be allowed to use sharp knives? My six-year-old was trying to slice ravioli with a butter knife the other day and nearly gave me a heart attack. Do kids demonstrate that they’re old enough to do something and then we let them do it, or are they simply old enough because they’re doing it? Maybe age, like gender, is now just a social construct. The idea that there is a right age to use sharp knives or walk yourself to the bus stop or (looking to the future here) get married or have kids or start a career or move out of your parents’ basement is … so 20th century.

That, anyway, is what I began to think after reading a couple of recent books from the crop of treatises purporting to explain Generation Y—people born between 1978 and 2000. In Twentysomething: Why Do Young Adults Seem Stuck?, Robin Marantz Henig and her daughter Samantha Henig, both journalists, offer a pop psychology tour of the scientific literature about so-called “emerging adults.” They propose to compare millennials with their boomer counterparts on a variety of subjects to determine (at the end of each chapter) where “now is new” and where the behavior of the current crop of young adults is the “same as it ever was.”

So, for instance, in a chapter about the way twentysomethings treat their brains and bodies, the authors conclude that “people still smoke too much and drink too much.” Samantha Henig notes that when she got her first cavity at age 24, she realized that she had to start to worry about her body’s “decay.” Conclusion? “When young people are responsible for their own health, good habits go to hell.” This is not exactly profound stuff. And while it may seem useful to compare boomers and millennials because boomer parents are often the ones wondering where they went wrong, we may also want to ask whether a comparison with the boomers is setting the bar a little low.

The Henig women mostly rely on studies by various psychologists, but they also put together their own survey, which they sent to friends and which was answered by 127 people. They don’t bank on the results for any broad claims, but their anecdotes regularly draw from this survey. It becomes quickly clear that the Henigs’ friends are a lot like them. In a chapter about marriage, they quote Michael, a 38-year-old engineer, who proposed to his girlfriend, “a graduate student at NYU who was doing her doctoral research on gender norms in courtship.” In a chapter on career choices, the Henigs quote a 32-year-old woman who, prior to pursuing a career in architecture, tried out a variety of other jobs, including “small writing gigs, short-term consultancy, researching for professors, nannying.” In other words, a reader would be forgiven for concluding that Generation Y consists entirely of college graduates from wealthy families who can’t quite settle on the perfect mate.

To her credit, Hannah Seligson looks a little further afield for the millennials she profiles in Mission: Adulthood. She includes a leader of college Republicans who grew up in the Mormon church, a veteran of the Iraq war who is also a single mother, and the gay son of Mexican immigrants whose attention-deficit problems are making it difficult for him to hold down a job and pay off his college loans. Though she acknowledges that everyone she picked has a college degree, there is still plenty of diversity here. Her “guiding question,” she says, is “could we have met them a generation ago?”

But Mission: Adulthood is still ultimately a defense of this generation against people who find them “lazy” or “stunted” or “entitled.” Seligson concludes that these critics are the “victims of prejudice. They dislike and disdain what they see because they do not understand it.” Maybe. But Seligson’s case is not helped when we hear her subjects say “What people in the past might have gotten from church, I get from the Internet and Facebook. That is our religion.” Or when Seligson describes “startup depression,” the anxiety that comes AFTER you’ve succeeded in getting tens or hundreds of thousands of dollars in venture capital for your wacky new business idea.

Most people think that millennials look different because they do everything later. They get married later and have kids later and settle on a career later and move out later. Why? Well, primarily because they can. Thanks to our longer life spans and modern technology, they have a lot more time to examine their choices. Both of these books go on at length about the paradox of choice and the related phenomenon of decision fatigue. Twenty-somethings are faced with too many options and exhausted by having to pick among them.

But they don’t want to close anything off. The Henigs cite a fascinating study showing that this generation wants primarily to keep its options open. In a computer game devised by MIT psychologists, young adult players are given a certain number of “clicks” which they can use to “open doors” or—once inside a room—to get a small amount of money. After a few minutes of wandering, the players figure out which rooms have the most money. Theoretically they should simply keep clicking in those rooms. But sometimes, doors will start closing. Even players who know they will earn more from using their clicks inside a room will start to panic and click to keep the doors from closing. (The metaphor kind of hits you over the head. Annoyingly, though, either the researchers didn’t try this on older people for comparison or the authors of the book failed to report it.)

So twentysomethings like to keep their options open. Oddly, in fact, they seem to be examining them earlier than previous generations (at least in recent memory). In the West, anyway, our helicopter parenting means that kids are thinking about what college they will go to when they’re in elementary school. They are told from toddlerhood that they can be whatever they want when they grow up, and by the time they reach college they are paralyzed by the choice. The Henigs cite one young woman, a budding art historian who had not taken a science course in years, suddenly agonizing about whether to take a college course called “Spanish for Doctors.” “What if I want to become a doctor? Shouldn’t I keep that option open?”

Millennials also start engaging in sexual activity younger, which means that to the extent that they date, they will have 15 years of relationships with the opposite sex before they even think about marriage. Again, the options seem limitless. And finally, thanks to our early (and perhaps over-) diagnosis of psychological ills, kids start taking drugs like Adderall and Ritalin earlier and earlier. (The Henigs argue that prescription drugs are the new LSD. As of 2005, a quarter of a million college students were abusing prescription drugs.)

So what is the right age to get married and have children and buy a house and get a steady job and become financially independent? It may be hard to offer young adults a specific answer. But it is also possible to say that putting off decisions does not guarantee better results. The Henigs describe, for instance, the “slide” into marriage that happens when couples living together decide it’s easier simply to tie the knot than to begin the process of breaking up, dividing the stuff, and so on. Later, they may slide into divorce.

Barry Cooper, a British author, recently warned in Christianity Today against worshipping “the god of open options.” This god, Cooper said, “is a liar. He promises you that by keeping your options open, you can have everything and everyone. But in the end, you get nothing and no one.” Good advice at any age.

Naomi Schaefer Riley is the author most recently of ‘Til Faith Do Us Part: How Interfaith Marriage is Transforming America, just published by Oxford University Press.

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.


Allen C. Guelzo

Slavery and the Constitution.


On the Fourth of March, 1861, Abraham Lincoln took the oath of office as the sixteenth president from Chief Justice Roger Brooke Taney—and managed, at the same time, to box the chief justice on the judicial ear. Or, at least, to draw a bright line of constitutional understanding between himself and the author of Dred Scott v. Sandford. “There is some difference of opinion,” Lincoln announced, about whether the Constitution’s fugitive slave clause “should be enforced by national or by state authority.” This distinction might be immaterial to the fugitive, but if Congress was to pass laws on the subject, shouldn’t “all the safeguards of liberty known in civilized and humane jurisprudence be introduced, so that a free man be not, in any case, surrendered as a slave”? And just to make sure that no one assumed that he was merely calling for more accurate identification of suspects, Lincoln asked whether any such legislation should also explicitly “provide by law for the enforcement of that clause in the Constitution which guaranties that ‘The citizens of each State shall be entitled to all privileges and immunities of citizens in the several States.’ ”

To the naked eye, there seems nothing particularly momentous in that question. But there was. The “free man” who should not be mistaken and “surrendered as a slave” could only be a free black man; otherwise it would have been impossible to mistake him for a slave in the first place. And Lincoln was here suggesting that congressional legislation should protect that free black man because, under the Constitution, “citizens of each State” are entitled to the procedural protections of the Constitution’s privileges and immunities clause. Citizens. Only four years before, the chief justice sitting behind Lincoln had pronounced in Dred Scott v. Sandford that the Constitution did not and could not recognize black people as citizens, whether they were free or slave. Now, on almost the anniversary of Dred Scott, Lincoln threw Taney’s own words back at him.

But he did more. Everything which was, at that moment, dividing the republic and threatening to tip it into civil war was, Lincoln said, strictly an argument about constitutional theories, not about the things the Constitution actually said. “Shall fugitives from labor be surrendered by national or by State authority?” Lincoln asked. “The Constitution does not expressly say. May Congress prohibit slavery in the territories? The Constitution does not expressly say. Must Congress protect slavery in the territories? The Constitution does not expressly say.” What reasonable American would want to smash the Union when the grounds of disagreement hung on theories? No one, presumably—unless, of course, the chief justice had, four years before, proclaimed that the Constitution did say, expressly, that Congress could not prohibit slavery in the territories, and that Congress really was obliged to protect it there because the Constitution “distinctly and expressly” affirms the “right of property in a slave.” If the Constitution recognizes the “right of property in a slave,” then that property has no rights of its own, and the owners of that property have every ground on which to demand its protection and sustenance by the federal government.

But what Taney announced as fact, Lincoln relegated to opinion. In 1858, during his celebrated debates with Stephen A. Douglas, Lincoln flatly declared that “the right of property in a slave is not distinctly and expressly affirmed in the Constitution.” Now, Lincoln was president, and the Constitution he had sworn to preserve, protect, and defend would be understood by him to offer no national recognition to slavery at all. From that seed, you might say, the Civil War sprang.

That Lincoln revered the Constitution is not really to say anything different from what almost every other American of his generation would have said about it. “No slight occasion should tempt us to touch it,” Lincoln warned in 1848. “Better, rather, habituate ourselves to think of it, as unalterable. It can scarcely be made better than it is …. The men who made it, have done their work, and have passed away. Who shall improve, on what they did?” Only the most radical of abolitionists were inclined to regard it, in William Lloyd Garrison’s kindling terms, as “an infamous bargain … a covenant with death and agreement with hell” because it seemed to offer shelter to chattel slavery. But by the 1880s, there were many more voices questioning the untouchable perfection of the Constitution, and unlike Garrison, they eerily paralleled voices in that same decade which were beginning to question the untouchable perfection of the Bible. “The Constitution of the United States had been made under the dominion of the Newtonian Theory,” wrote Woodrow Wilson, whose PhD dissertation, Congressional Government: A Study in American Politics (1885), frankly questioned the wisdom of a government of separated powers. “The trouble with the theory is that government is not a machine, but a living thing. It falls, not under the theory of the universe, but under the theory of organic life. It is accountable to Darwin, not to Newton.”[1] The 18th century had no sense of historical progression, development, and evolution, Wilson objected; it believed that certain fixed truths were available to be discovered, whether in physics or in government. Wilson’s century thought it knew better, and understood that the intricately balanced mechanisms of the U.S. Constitution were like one of David Rittenhouse’s orreries, and needed to be superseded by something more efficient, supple, and responsive to changes in the national environment.

Wilson didn’t get much of what he wanted (thanks in large measure to those unresponsive congressional mechanisms), but the Progressives who followed Wilson were undissuaded by his failures, and they added a new sting to the Progressive impatience with the Constitution by holding up its embrace of slavery as the prime exhibit of the Constitution’s embarrassing backwardness. This same complaint was repeated very recently by Louis Michael Seidman asking (in The New York Times) why we should continue to be guided by a document written by “a group of white propertied men who have been dead for two centuries, knew nothing of our present situation, acted illegally under existing law and thought it was fine to own slaves.” And it is repeated again in two extraordinary and thorough pieces of constitutional history by David Waldstreicher and George William Van Cleve, both assuring us with no uncertain voice that the Constitution was not only designed to accommodate slavery, but “simultaneously evades, legalizes and calibrates slavery.” If you could desire a telling historical reason to (as Seidman’s New York Times op-ed urged) “give up on the Constitution,” Waldstreicher and Van Cleve offer it as luxuriantly dressed as you could wish.[2]

The Garrisonians were the first to assault the Constitution as a pro-slavery document. “There should be one united shout of ‘No Union with Slaveholders, religiously or politically!’ ” declared Garrison in 1855, and one particularly good sampling of that disparagement comes from the pen of Frederick Douglass in 1849. Reacting to the insistence of Gerrit Smith and the Liberty Party that the Constitution “is not a pro-slavery document,” Douglass replied that it certainly was, and that it “was made in view of the existence of slavery, and in a manner well calculated to aid and strengthen that heaven-daring crime.” The proof was in the text of the Constitution itself:

• The Three-fifths Clause (Art. 1, sec. 2) gave the slave states disproportionate power in the House of Representatives.

• The authorization extended to Congress “to suppress insurrections” (Art. 1, sec. 8) had no other purpose than suppressing slave insurrections, as did the added pledge (in Art. 4, sec. 4) to protect the states “against domestic Violence.”

• The permission given to Congress to end the slave trade after twenty years (Art. 1, sec. 9) was a “full, complete and broad sanction of the slave trade.”

• The clause requiring the rendition of any “person held to service or labor in one State, escaping into another,” labeled escape from slavery a federal crime (Art. 4, sec. 2).

This made the Constitution “radically and essentially pro-slavery, in fact as well as in its tendency.”[3]

In more recent times, these arguments were taken up by Leon Higginbotham, Sanford Levinson, Thurgood Marshall, and Mark Graber, mostly as a way of substantiating their larger-view annoyance with the Constitution’s intractability to progressive policy changes. But in no place was the “pro-slavery Constitution” accusation laid down in more fiery detail than by Paul Finkelman, in his provocative Slavery and the Founders (1996; 2nd ed., 2001), where Finkelman not only embraced Douglass’ bill of indictment but added a few more counts of his own. It had been part-and-parcel of the New Social History in the 1970s and ’80s that slavery and race were the original sin of the American experiment, and that their presence belied any exceptionalist claims that the American founding represented a triumph for human liberty, undimmed by human tears. And in the long view, that was Finkelman’s point, too: Slavery and the Founders was written with the “belief that slavery was a central issue of the American founding,” and in no way creditable to that founding. Not only were the Three-fifths Clause, fugitive rendition, and the suppression of insurrections proof of the pro-slavery intentions of the Founders; slavery also enjoyed special protection in the Constitution’s ban on export taxes (which gave a green light to the international marketing of slave-grown products), in the dependence of direct taxation and the Electoral College on the Three-fifths Clause, and in the limitation of civil suits and privileges-and-immunities to “citizens” (which could only be white people). “A careful reading of the Constitution reveals that the Garrisonians were correct: the national compact did favor slavery,” concluded Finkelman. “No one who attended the Philadelphia Convention could have believed that slavery was ‘temporary.’ ”

Finkelman lays the groundwork for both Waldstreicher and Van Cleve (Finkelman is cited more often in A Slaveholders’ Union than any other modern historian), who in turn raise Finkelman’s claims for a pro-slavery Constitution to yet higher degrees. Waldstreicher’s is the shorter of the two books, and more in the nature of a general summation of the neo-Garrisonian viewpoint. Like Finkelman, Waldstreicher believes that the Founders created a national “compact” which consciously sustained slavery (six out of the Constitution’s 84 clauses, he notes, bear on aspects of slavery), and allowed slavery’s interests to prevail in the federal Congress (since the house most responsible for fiscal matters was the place where the Three-fifths Clause brought its greatest weight to bear). But more than Finkelman, Waldstreicher does not believe that this was merely the result of paradox or political log-rolling in the Constitutional Convention. The Revolution itself, he argues, was caused by the panic slaveholders felt over the implications of the 1772 Somerset decision in the Court of King’s Bench, which rendered slavery a legal impossibility in England. By denying slavery legal standing anywhere in the empire outside the colonies, Somerset alarmed American slaveholders, who were thus rendered instant converts to a revolution against imperial authority. In turn, the Constitution went out of its way to reassure American slaveholders, since it actually made it harder to get rid of slavery than before.

Van Cleve is less polemical, but longer and more methodical than Waldstreicher. In his reading, both the Revolution and the Constitution acted to strengthen slavery, either by sanctioning the colonial status quo on slave labor or by providing new protections for its expansion. Like Waldstreicher, Van Cleve believes that Somerset profoundly frightened American slaveholders—20 percent of all American wealth, Van Cleve adds, was invested in slaves—and the Constitutional Convention went out of its way to secure slavery’s place in American life. Not only did the Three-fifths Clause and the fugitive rendition provisions side entirely with pro-slavery forces, but the state delegations to the convention were given no instructions to seek an end to slavery, and none of the ratification debates (including the Federalist Papers) made slavery an issue. Southerners who took up ratification as their cause in the Southern ratifying conventions actually campaigned for ratification precisely “because the Constitution did not authorize the federal government to take action against it.” Nor does Van Cleve find it difficult to find Southerners quite candid in their belief that “without security for their slave property … the union never would have been completed.” In that light, Chief Justice Taney’s dictum that the Constitution explicitly recognized slaves as property was merely the final corroboration of the Constitution’s lethal pro-slavery tilt.

Yet, in all of these assertions, from Douglass to Waldstreicher and Van Cleve, there creeps in an air of special pleading, an Eeyore-ish determination to read the Constitutional glass as perpetually half-full, if not empty. Van Cleve, for instance, always takes the slaveholders’ word as the statement of the Constitution’s sober fact, while anti-slavery observers are dismissed as wrong when they see slavery being diminished by the Constitution. And the notion that the Constitution’s provisions for the termination of the slave trade can be read as “protecting the interests of slave traders and those of states that wanted to import slaves” must crinkle the brow of any disinterested reader. Above all, this pleading has to engender the puzzled question of how a regime based on such a pro-slavery Constitution could, within the span of a single lifetime, bring to the east front of the Capitol a president who could deny that the Constitution gave slavery any sanction at all.

Waldstreicher offers an explanatory hint in Slavery’s Constitution by suggesting that anti-slavery forces simply abandoned the Constitution and appealed instead to a “higher law,” in the form of a natural-law right to liberty. “Antislavery survived the post-Revolutionary backlash epitomized by the Constitution because some Americans refused to believe that the Constitution, or even America, was the ultimate source of their cherished ideals.” What gets lost in Waldstreicher’s description of the “higher law” appeal is how much it was based on the contention that the Constitution embodied natural law. That made the Constitution susceptible only of a reading which (like Somerset) made freedom the default position of national law, and limited the legalization of slavery to local or state law. James Oakes, in his marvelous new history of emancipation, Freedom National: The Destruction of Slavery in the United States, 1861-1865 (2013), reads the Constitutional glass as not just half-full but running over with anti-slavery assumptions: “The delegates at the Constitutional Convention … were certain the system was dying anyway,” based on their reading of natural-law economics and natural-law moral philosophy, and “concluded that the progress of antislavery sentiment was steady and irreversible.” Slavery was deliberately crowded off the national table by the Constitution. Why, after all, asked anti-slavery voices at the time of the Missouri debates in 1820, had the Constitution permitted the Northwest Ordinance to stay in effect, or allowed the banning of the slave trade, if slavery was constitutionally protected property? Why did the Constitution turn such linguistic somersaults to avoid actually using the word slave? Why did the fugitive slave provisions never specify that it was the federal government’s responsibility to render up fugitives? And in arguments made by both John Quincy Adams (in his plea for the release of the Amistad rebels) and William Henry Seward, the Constitution is presented as a component of the law of nations, which is itself (according to the guiding lights of international legal theory, Vattel, Grotius, and Wheaton) based on natural law.

As if to confirm the suspicion that Finkelman et al. were arguing for a conclusion rather than making a case, Don Fehrenbacher’s last book, The Slaveholding Republic: An Account of the United States Government’s Relations to Slavery (2001), set out a bristling phalanx of reasons why the Constitution had never been designed as a pro-slavery document. The convention itself, Fehrenbacher contended, was rent by bitter debates over slavery and its status, and the resulting document represented, not a triumph of a slaveholding consensus, but the hard-won survival of an institution under heavy attack. The members of the convention were, in the end, content to curtail slavery rather than exterminate it, partly because they were a Constitutional Convention charged with keeping the American union together rather than an anti-slavery revival meeting calling sinners to repentance, and partly because they were confident that measures like the prospective ban on the slave trade would hasten the death of slavery on their own. The wonder is that slavery managed to survive as long as it did before the anti-slavery assumptions of the Constitution forced slaveholders into rebellion. The proof, for Fehrenbacher, was in the pudding of secession: the secessionists promptly wrote a new constitution, defining, legalizing, and extending slavery, “in stark contrast to the Constitution of 1787 that had embarrassingly used euphemistic language to mask the existence of the tyrannical institution in a land presumably dedicated to liberty.”

Fehrenbacher’s chief labor was to show how, if the Constitution cast that chilly an eye on slavery, southerners managed to defend and extend the institution for so long; his answer was politics. Southerners adeptly seized control of the executive branch from the very first, and spun the helm of the federal government hard over in their favor. It was only in 1860, when they decisively lost that control, that the pretense of a pro-slavery Constitution was abandoned, along with the Union itself. All through the decades between 1790 and 1860, anti-slavery voices kept up a steady drumbeat of resistance to the “pro-slavery Constitution,” over and over again declaring that the Constitution was a natural-law document whose baseline was freedom. Lemuel Shaw’s decision in Commonwealth v. Aves (1836) saw no constitutional right of property in slaves.[4] Even a Southern court, in Rankin v. Lydia (1820), held that “freedom is the natural right of man,” and William Jay justified the revolt of the slaves on the Creole in 1841 on the grounds that, as soon as the Creole cleared Virginia waters, it came under control of the law of the sea, which was in turn a subsection of natural law.[5] And then there was Lincoln, who in his breakthrough speech at the Cooper Institute in February 1860 announced that the Constitution, far from recognizing slavery, actually empowers Congress to vaporize it any moment slavery puts a foot outside the states where it has been legalized: “An inspection of the Constitution will show that the right of property in a slave is not ‘distinctly and expressly affirmed’ in it.”

In the murk of historical interpretation, whether the Constitution was pro-slavery or anti-slavery will depend very much on whether someone is inclined to grant more authority to Lincoln than Douglass, to Fehrenbacher than Finkelman, to Lemuel Shaw than Roger B. Taney. Which means, in turn, that the deciding factor is likely to be buried a priori in whether one can be satisfied that an 18th-century Newtonian document should still be allowed to prevail in a political world which is surrounded by 19th-century evolutionary assumptions about adaptation to changing mores and social conditions. That decision will be aggravated by the current furor over “gridlock” and “obstruction” in the federal government, and whether one branch of government has the privilege of slowing the rest of the government’s reaction-times to an unscientific crawl. If “efficiency” (the demon-god of Wilsonian Progressives) or problem-solving or “responsiveness” is the prime desideratum in government, then the Constitution will surely appear as an outdated recipe for chronic political constipation. Hence, Seidman’s complaint that “Our obsession with the Constitution has saddled us with a dysfunctional political system” and “kept us from debating the merits of divisive issues and inflamed our public discourse. Instead of arguing about what is to be done, we argue about what James Madison might have wanted done 225 years ago.” And the temptation to tack on slavery as proof of the Constitution’s immovability will probably be irresistible—as Seidman attests. Never mind that this evolutionary times-are-not-now-as-they-were argument is ironically what Missouri Chief Justice William Scott used in denying Dred Scott’s appeal in the original hearing of the case.

The problem with Madison is not that his version of government is 225 years old, or that it is Newtonian or mechanistic. It is that Madison and his fellow delegates in Philadelphia did not care a wet slap for efficiency in government. They wanted liberty, and anything which slowed the pace of governmental decision-making, or which exhausted the power of one branch in argument with another, and made government as safely unresponsive as it could be short of inanition, was by their lights precisely what a republic of liberty should prize (even if that guaranteed a large measure of inanition about slavery). What we want the Constitution to be has always had a peculiar way of determining what we think the Constitution was, and is.

Allen C. Guelzo is the Henry R. Luce Professor of the Civil War Era and a professor of history at Gettysburg College. He is the author most recently of Fateful Lightning: A New History of the Civil War and Reconstruction (Oxford Univ. Press) and Gettysburg: The Last Invasion, just published by Knopf.

1. Wilson, Constitutional Government in the United States (Columbia University Press, 1908), p. 56; Terri Bimes and Stephen Skowronek, “Woodrow Wilson’s Critique of Popular Leadership: Reassessing the Modern-Traditional Divide in Presidential History,” Polity, Vol. 29 (Fall 1996), pp. 27-63.

2. Louis Michael Seidman, “Let’s Give Up on the Constitution,” The New York Times (December 30, 2012).

3. Douglass, “The Constitution and Slavery,” The North Star (March 16, 1849).

4. Shaw, in Commonwealth v. Aves, held that “by the general and now well established law of this Commonwealth, bond slavery cannot exist, because it is contrary to natural right, and repugnant to numerous provisions of the constitution and laws, designed to secure the liberty and personal rights of all persons within its limits and entitled to the protection of the laws.” See Reports of Cases Argued and Determined in the Supreme Judicial Court of Massachusetts, ed. Octavius Pickering (Boston, 1840), p. 219.

5. Robert M. Cover, Justice Accused: Antislavery and the Judicial Process (Yale Univ. Press, 1975), pp. 95-96; Stephen P. Budney, William Jay: Abolitionist and Anticolonialist (Praeger, 2005), pp. 66-67.

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.


Laurance Wieder

Messianic unease.


Tradition holds that a messiah is born in every generation. “Yes,” the talmudic sages say. “Let the Messiah come, but not in our time.”

Since 2001, at least seven books have been published concerning three Jewish messiahs of the past three-and-a-half centuries: Sabbatai Sevi, Abraham Miguel Cardozo, and Jacob Frank. All such studies and historical revisions are written in the shadows or on the shoulders of Gershom Scholem’s landmark 20th-century history, Sabbatai Sevi: The Mystical Messiah. That work of scholarship made respectable again a mystical tradition based largely on the Zohar. Scholem established that Moses de Leon, a late-medieval Spanish Jew, wrote the Zohar. Among learned Jews, that book enjoyed a status equal to the Talmud, a respect undone by the Messiahship of Sabbatai Sevi. Following the English and French Revolutions, modern Jewry sought enlightenment before redemption, citizenship before Jerusalem. Philosophy became preferable to prophecy.

Born in Turkey in 1626, Sabbatai Sevi spent much of his adult life oscillating between a luminous conviction that he was the long-awaited Messiah son of David, and profound inner darkness. Hoping to be cured of delusive sickness, in 1665 Sevi sought out a young healer of souls, the kabbalist-prophet Nathan of Gaza. Alas, instead of release, Nathan proclaimed the manic-depressive rabbi the Redeemer.

That same year, the newly anointed Messiah delivered a public address (reproduced by Ada Rapoport-Albert in Women and the Messianic Heresy of Sabbatai Zevi) on what the Messiah means for the other half of humanity:

As for you wretched women, great is your misery, for on Eve’s account you suffer agonies in childbirth. What is more, you are in bondage to your husbands and can do nothing great or small without their consent …. Give thanks to God, then, that I have come to the world to redeem you from all your sufferings, to liberate you and make you as happy as your husbands, for I have come to annul the sin of Adam.

Jewry flocked to their new hope. Despite his strange deeds and words, despite the opposition of some rabbis who rejected the Messiah, an army of believers followed Sabbatai Sevi’s call to march with him to Constantinople. There, he said, he would receive the earthly crown of empire from the Turkish sultan, and so bring on the new order. In 1666, a year of astrological portent, the Mystical Messiah was arrested outside the Ottoman capital. Called to an audience before the Grand Turk, Sabbatai took the turban.

The Jewish Messiah’s apostasy to Islam was enough to disillusion many of those who thought the promised time had arrived. Not to mention those who always doubted. In November 1666, Rabbi Joseph Halevi sent a letter to Jacob Sasportas of Hamburg. He observed that a full year had passed since dispatches from Alexandria, from Egypt, from the Holy Land, from Syria and from all Asia announced that redemption was at hand. “This good news was brought us by a brainless adolescent from Gaza, Nathan the Lying Prophet,” Halevi wrote, “who, not satisfied with proclaiming himself a prophet, went on to anoint king of Israel a coarse, malignant lunatic whose Jewish name used to be Sabbatai Sevi.”

For those able to weather their disappointment, Nathan of Gaza justified Sabbatai Sevi’s abandonment of faith on kabbalistic grounds. The Torah is now void. Words have no meaning, except what we say they mean. Bad is now good, good evil. The Messiah must sink low in order to mount high. And so forth. Some of the faithful remained faithful.

Six years after Sabbatai Sevi’s death in 1676 in Alkum, Sabbataian devotee Joseph Karillo and a companion called on Abraham Miguel Cardozo in Constantinople. Cardozo was a Catholic convert to Judaism who had accepted the Turkish-born messiah and claimed a messianic role of his own.

The companion recounted for Cardozo their final audience with Sabbatai Sevi. “After the New Year [Rosh Hashanah] …, he took us out to the seashore with him and said to us: ‘Each of you go back home. How long will you adhere to me? Until you see the rock that is on the seashore, perhaps?’ And we had no idea what he was talking about. So we left Alkum, and he died on the Day of Atonement [Yom Kippur], early in the morning.” A Sabbataian sect practices its version of Judaized Islam in Smyrna to this day.

Moshe Idel, a contemporary Israeli scholar of kabbalism, interprets history as a kind of sacred text, written in glyphs or emblems as well as in narrative. This method follows a path trod by the Neapolitan philosopher of history Giambattista Vico, by the 20th-century essayist Walter Benjamin, and by Benjamin’s friend Gershom Scholem. They assume that events, like words, conceal meaning. Thus, the fact that Sabbatai Sevi was born in 1626 on the Ninth of Av, the day on which both Temples fell, is a sign, and not only to the orthodox.

As Abraham Cardozo put it: “What save sadness did Sabbatai, who was born on a funeral day, predict? He was unfortunate in his very name, since, in the Hebrew language, Saturn is called Sabbatai, a sad and malignant star.”

Or as Gershom Scholem wrote in a poem from 1933:

In days of old all roads somehow led
To God and to his name.
We are not devout. We remain in the Profane,
And where ‘God’ once stood, [now] Melancholy stands.

Abraham Miguel Cardozo was born in 1630, in Portugal. His family were Marranos, or secret Jews. Raised Catholic and educated at a university in Spain, Cardozo left home after graduation to join his brother in Venice. There, he converted (back) to Judaism. A fervent scholar of Judaica, Cardozo identified himself with the Messiah son of Joseph, a figure who traditionally heralded rather than followed the Messiah from the line of David.

Cardozo accepted the Mystical Messiah, but did not follow Sevi into Islam. Indeed, he neither wanted nor expected Sabbatai to bring the Jews back to the Holy Land. “When the Redeemer comes,” Cardozo wrote, “the Jews will still be living among the Gentiles even after their salvation is accomplished. But they will not be dead men, as they had been previously.” As in the 19th-century dream of Enlightenment, through redemption Jews will experience happiness, and enjoy dignity and honor.

Cardozo’s dissent, like Nathan of Gaza’s, was rooted in the Zohar. But the Sephardic exile’s vision faced forward to William Blake’s irascible God and The Marriage of Heaven and Hell, rather than backward toward Andalusia, Moses de Leon, and the Zoharic circle of Simeon ben Yohai.

David J. Halperin summarizes the Marrano’s minority theology in his edition of Abraham Miguel Cardozo: Selected Writings:

The world hosts four basic religious systems: Absolute, Prophetic Monotheism (Judaism and Islam); Philosophical Deism; Christian Trinitarianism; and pagan polytheism. All four are false religions. Muslims and Jews insist that there is no God except the being philosophers call the First Cause. Yet the message that Moses brought to Israel, when he came to redeem them from Pharaoh, was that there is a God other than the First Cause. He is the God whom the Bible calls by the sacred four-letter Name, whom the ancient rabbis called the Blessed Holy One.

Where Sabbatai Sevi sounds grandiose, Cardozo’s voice is modest and extreme. “I am no Messiah,” he wrote later in life. “But I am the man of whom the Faithful Shepherd [Moses] spoke when he addressed Rabbi Simeon ben Yohai and his companions: ‘Worthy is he who struggles, in the final generation, to know the Shekhinah [the female side of the Divine Presence], to honor Her through the Torah’s commandments, and to endure much distress for Her sake.’ “

Abraham Cardozo outlived his fallen Messiah by thirty years. Addressing Sevi’s failure to reappear, Cardozo explained that “Our ancient rabbis have said that King Messiah will tell every Jew who his father is, that is to say, his Father in heaven, God, whom they have forgotten in their exile. Sabbatai Sevi has not done this. He has not openly proclaimed to the Jewish people the divinity of the Shekhinah, the existence of the Great Name, the truth of God. Even if he was aware of all this, his awareness was for himself alone.”

Jacob Frank was another Messiah successful for himself alone. Born in 1726, this Polish Jew with a knack for commerce found his calling in mid-18th-century Smyrna. By force of personality, Frank assumed his messianic mantle in the Ottoman Sabbataian community. This Messiah’s revealed truth identified four aspects of holiness: the God of Life, of Wealth, of Death, and the God of Gods. Frank lived like an Oriental potentate on the offerings of his followers as he progressed from Turkey through Anatolia to Poland and Bohemia, all the while promising everlasting life on this earth to the numbers of Sephardic and Ashkenazi Jews he converted—to Catholicism.

Frank’s converts assumed Polish names and received aristocratic patents when they followed their redeemer into the Catholic Church. Frank identified his daughter—born Rachel Frank in 1754 and later known as Eva—with the Shekhinah, as well as with the Madonna. The Frankists addressed Eva as “The Maiden” or “The Virgin.”

Frank’s sayings and stories are compiled in a book, The Words of the Lord. There, the master asks, “How could you think that the messiah would be a man? That may by no means be, for the foundation is the Maiden. She will be the true messiah. She will lead all the worlds.”

Pawel Maciejko calls his Frankist history The Mixed Multitude, alluding to both the generation that followed Moses out of Egypt and to the rising tide of spiritual and political democracy. Witnesses withheld their hosannahs. A contemporary rabbi’s account of one early Frankist-cum-Sabbataian ritual in Lanckoronie, Poland, in 1756, reads like a scene from Isaac B. Singer’s novel Satan in Goray: “And they took the wife of a local rabbi (who also belonged to the sect), a woman beautiful but lacking discretion, they undressed her naked and placed the Crown of the Torah on her head, sat her under the canopy like a bride, and danced a dance around her. They celebrated with bread and wine of the condemned, and they pleased their hearts with music like King David … and in dance they fell upon her kissing her, and called her ‘mezuzah,’ as if they were kissing a mezuzah.”

The outside world also took note of Jacob Frank. A 1759 issue of the English Gentleman’s Magazine featured an anonymous “Friendly Address to the Jews.” Its author expressed surprise at a report “that some thousands of Jews in Poland and Hungary had lately sent to the Polish bishop … to inform him of their desire to embrace the Roman Catholic Religion.” The correspondent suggested that if you think that the Christian religion is true, and believe the messiah is already come, then why not “embrace the Protestant religion, that true Christianity which is delivered to us … without the false traditions and wicked intentions and additions of the Popes, who have entirely perverted the truth, and corrupted primitive Christianity.”

Overtly Catholic, the Frankists also kept Jewish feasts and holy days. A few years after the Maiden’s death in 1816, a secret society, called the Asiatic Brethren of Bohemia, Poland, and Hungary, mirrored the Frankists. These Masonic Protestants celebrated Christian holidays as well as the birth and death of Moses, and Shavuot, “to bring about religious unity by leading Christianity back to its Jewish form.”

In his table talk, Frank dismisses Jewish worship and tradition with a wave of his hand: “All the Jews are seeking something of which they have not the slightest inkling. They have a custom of reciting every sabbath: ‘Come, my beloved, to meet the bride,’ calling out ‘Welcome’ to the Maiden. This is all mere talk and song. But we pursue her and try to see her in reality.”

“The whole Zohar is not satisfying for me,” he announced, “and we have no need for the books of kabbalah.”

With regard to his scriptural forebears, Frank models his conduct after an alternative lawgiver: “Moses did not die but went to another religion and God permitted it. The Israelites in the desert did not want to walk that road, and when they came to … bitterness, they became aware of that freedom and it was in that place where there was no obligation.”

What of Frank’s own place in suspended history? “All religion, all laws, and all the books published up to now as well as whoever reads them, are like reflections of words that died a long time ago. All that comes out of Death. The wise man’s eyes should always look to the person in front of him. This man does not look left or right or to the back, yet everybody turns his eyes towards him.”

Just before his own death in 1791, Frank announced: “I tell you, Christ is known to you as coming to liberate the world from Satan’s hands, but I came to liberate you from all laws and statutes that existed up to now. I have to destroy them all, and only then will God reveal himself.”

In The Poetry of Kabbalah: Mystical Verse from the Jewish Tradition, Peter Cole translates a popular hymn by Yisrael Najara that is still part of mainstream Jewish worship. The song, “Your Kingdom’s Glory,” was adopted by Sabbataians as an anthem of messianic kingship. Its seven stanzas were chanted in the Cathedral of Lublin in the presence of Jacob Frank. The hymn begins:

Let your Kingdom’s glory be revealed
over a poor and wandering people,
and reign, Lord who has ruled forever,
before the reign of any King.

Stanza four, the song’s center, states:

I hope for the time of your redemption
and wait with patience for your salvation.
If it tarries, Lord, in your absence,
I will look for no other King.

The plea concludes:

Bring my people back to you There [Sion’s mountain],
and I will rejoice around your altar.
With a new song, I will offer
thanks to you, my Lord and King.

More precise and moving, Cole’s verse-paraphrase of one passage from the Likutei Amarim Tanya of Rabbi Schneur Zalman, a hasidic contemporary of Jacob Frank, embodies the messianic fervor merely alluded to in the earlier, generic hymn:

All before Him is as nothing:
The soul stirs and burns
for the precious glory of His greatness,
to behold the light of the King
like coals of the fierce flame rising.
To be freed from the wick
or the wood to which it clings.

Historically, Christians are vexed with the Jews, who insist on waiting for their own messiah, amid discussion of how he will be known, what marks he shall bear both in the scriptural and in the worldly sense, and when. Islam, too, looks for the Mahdi and a day of salvation. Yet even those who believe that their messiah has appeared await a second coming.

So the question of who and what to accept, of how to recognize the truth, abides. I must ask it of myself, if I ask it of others: How could you believe? Or, How could you not?

Considering the matter of the pretender, or the fallen Messiah, the question changes: How could a person be so false, and yet walk the earth?

Legend tells that on the day the Temple was destroyed, the redeemer was born. At that very moment, a certain Jew was plowing his field, and his heifer lowed. A passing Arab said, “Weep, Jew. Your Temple is destroyed. I know this from your heifer’s moo.”

The heifer lowed again. The Arab said, “Rejoice, for the Messiah, who will deliver Israel, is born.”

The Jew asked the Messiah’s name and birthplace.

The Arab answered, “Menachem (the Comforter) son of Hezekiah, in Bethlehem.”

The Jew sold everything, became a garment merchant, and traveled until he reached Bethlehem. Women flocked to buy his wares, and urged Menachem’s mother to buy a little something from the merchant. She replied, “Better to have Israel’s enemies strangled, than to buy one rag for such a son. The day he was born was the day the Temple was destroyed.”

The Jew who came so far to find her said, “It may have fallen on the day your son was born, but I am certain that on his account the Temple will be rebuilt. Take what you need. I will come again, and you will repay me.”

Time passed. The Jew returned to Bethlehem, and sought out Menachem’s mother. “So tell me, how is your son?”

The woman answered, “Right after you spoke to me, a windstorm snatched him from my hands and carried him off.”

So it is said in the Book of Lamentations: Menachem the Comforter is far from me.

Laurance Wieder is a poet living in Charlottesville, Virginia. His books include The Last Century: Selected Poems (Picador Australia) and Words to God’s Music: A New Book of Psalms (Eerdmans). He can be found regularly at PoemSite (free subscription available from poemsite@gmail.com).

Books discussed in this essay:

Book of Legends/Sefer HaAggadah: Legends from the Talmud and Midrash, by Hayyim Bialik and Y. H. Rawnitzky (Schocken, 1992).

The Poetry of Kabbalah: Mystical Verse from the Jewish Tradition, translated and annotated by Peter Cole, co-edited and with an afterword by Aminadav Dykman (Yale Univ. Press, 2012).

Sabbatai Zevi: Testimonies to a Fallen Messiah, translated, with notes and introductions, by David J. Halperin (The Littman Library of Jewish Civilization, 2012 [2007]).

Abraham Miguel Cardozo: Selected Writings, translated and introduced by David J. Halperin (Paulist Press, 2001).

Saturn’s Jews: On the Witches’ Sabbat and Sabbateanism, by Moshe Idel (Continuum, 2011).

Jacob Frank: The End to the Sabbataian Heresy, by Alexander Kraushaar, translated, edited, annotated, and introduced by Herbert Levy (Univ. Press of America, 2001).

The Mixed Multitude: Jacob Frank and the Frankist Movement, 1755-1816, by Pawel Maciejko (Univ. of Pennsylvania Press, 2011).

Women and the Messianic Heresy of Sabbatai Zevi, 1666-1816, by Ada Rapoport-Albert, translated by Deborah Greniman (The Littman Library of Jewish Civilization, 2011).

Sabbatai Sevi: The Mystical Messiah, 1626-1676, by Gershom Scholem (Princeton Univ. Press, 1973).

Satan in Goray, by Isaac Bashevis Singer (Farrar, Straus and Giroux, 1996 [1955]).

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.


Richard J. Mouw

About that “white grandfatherly God.”


In a past life, before migrating into the world of graduate theological education, I spent 17 years teaching undergraduate philosophy courses. I gave many lectures in those courses on Plato, focusing primarily on The Republic, Meno, Crito, and Phaedo. Those are dialogues mainly from Plato’s early and middle period, but in graduate school I had also enrolled in a full-year seminar on his later period, working through his Sophist, Laws, and Timaeus line-by-line in the Greek.

None of that makes me an expert on Plato. It does mean, however, that I once had—and continue to have to some degree—a more than superficial grasp of some of his main themes. Indeed, I continue to have an intellectual affection of sorts for Plato. He made a profound contribution to Western thought in general, and to Christian thought in particular. To be sure, not all of that was positive. But neither was it all purely negative. When I was teaching undergraduates about how Christian thinkers appropriated themes from Plato, I tried to encourage nuanced assessments of his influence.

Take the much-bandied-about phrase “Platonistic dualism,” typically spoken with a disdainful tone. I still think that Plato was pointing us in the right direction when he distinguished between two metaphysical realms, the corporeal and the incorporeal. I even think he had it right when he said that the latter is more “real” than the former. C. S. Lewis employs this very distinction in explaining the difference between heaven and hell—the more we move away from God, Lewis observes, the more we enter into a realm of being that is only a “shadow” of the real. I find that insightful. My main complaint against Plato’s metaphysical dualism has to do with his insistence on a rigid “higher-lower” distinction between souls and bodies, with souls being more valuable than things physical. To buy into that perspective wholesale—adopting, say, Plato’s view that “the body is the prison-house of the soul”—fosters a view of human nature that lacks full appreciation of our psychophysical wholeness, to say nothing of failing to honor the delight which God takes in his non-human creation. Furthermore, it fails to acknowledge the richness of the New Testament’s references to “spirit” and “flesh.” Biblically speaking, married love can be “spiritual” when it is grounded in a relationship that is directed toward glorifying God, while taking pride in one’s capacity for abstract reasoning can be “fleshly.”

It is that kind of nuancing that has long inspired me to be a bit wary of calls for “the de-Hellenization” of Christian theology. I sense no overall mandate to “get the Hellas out of” my theological formulations. More appropriately, I think that we do well to distinguish between a radical de-Hellenization—”If it is Greek, get rid of it”—and a moderate de-Hellenization—”If it is Greek and wrong, get rid of it.”

Given these concerns, I was pleased to follow Nathan Gilmour’s excellent critique a while back of the way Brian McLaren has blamed a lot of what McLaren finds distasteful in evangelical theology on the influence of Plato and Aristotle—McLaren insists, for example, that the creation-fall-redemption-eschaton scheme that many are fond of is an example of Platonism gone wild. Gilmour rightly observes that McLaren’s rejection of these “Greek” influences is a case of “playing fast and loose with historical identifications for the sake of scoring cheap rhetorical points.”[1]

An even more blatant case, however, showed up recently in a comment by William Paul Young, responding to an interviewer’s questions in the March issue of Christianity Today. In explaining his portrayal of the deity as a black woman in his bestselling novel, The Shack, Young reported: “I don’t want my kids growing up with the image of God that I had—Plato’s white grandfatherly god. That god is not a very good father. You can’t trust him with your kids.”

I have no desire to defend the image of a “white grandfatherly god”—but to attribute that depiction to Plato? In Plato’s thought there are two entities that have served as candidates for a theological understanding of the deity: the Form of the Good, which transcends all other Forms, and which in Plato’s own account can only be reached by an upward mystical journey; and the demiurge, who fashions finite things out of eternal matter, in subservience to the rationally discernible Forms. In short: for Plato we have to choose between the highest Being who is not a creator, and a creator who is not the highest Being. Either way, though, no beard and no skin!

Maybe it is time, for starters, to initiate a re-Hellenizing of Plato’s philosophy, as a first step toward paying him the compliment of being more careful in assessing his influence on Christian thought.

Richard J. Mouw will retire in June from the presidency of Fuller Theological Seminary.

1. christianhumanist.org/chb/2010/02/a-new-kind-of-christianity-a-review-for-the-ooze-viral-blogs

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.


Helen Rittelmeyer

Memories of a British education in Kenya.


Books & Culture, April 24, 2013

If things had gone slightly differently for Kenyan author Ngũgĩ wa Thiong’o, he might have become a living, breathing vindication of the British Empire’s good intentions. Despite having grown up in straitened circumstances on his mother’s small agricultural homestead, he was selected on the basis of his test scores to attend a prestigious Rugby-style boarding school for Africans twelve miles outside Nairobi—this at a time when there were not many peasant sons of single mothers at Rugby back in Britain. His earliest attempts at fiction were promoted by the colonial government’s Literature Bureau, which sponsored various publications and contests for native authors, and he first came to the attention of European audiences when he won the bureau’s fiction prize. Later he wrote radio plays for the BBC. At every step of his early career, from the mid-1950s to the early ’60s, Ngũgĩ was encouraged and underwritten by well-meaning Englishmen who sincerely believed that Africans not only could be but should be educated in the same way as Europeans. Theirs was the old civilizing mission, and for a time it seemed that Ngũgĩ was poised to become a shining example of their success.


In the House of the Interpreter: A Memoir

Ngugi wa Thiong'o (Author)

256 pages

$13.99

Then, at some point in his young adulthood, Ngũgĩ stopped barreling down the trajectory laid out for him and veered off in a different direction. He embraced Fanon, Marx, and Mao. He visited the Soviet Union. He abandoned the name he had been given at baptism, James Ngũgĩ, and started going by his traditional Kikuyu patronymic. After independence, he agitated against the continued inclusion of English authors in the Kenyan school curriculum—an ironic turn for a man who had been professor of English literature at the University of Nairobi. Later in life he adopted as his personal crusade the cause of traditional African languages, deriding as sell-outs those African authors like Chinua Achebe who wrote in the language of the oppressor. He became, in effect, a crusader against the English language in Africa. A more decisive repudiation of his early education could hardly be imagined.

That early education is the subject of the second volume of his memoirs, In the House of the Interpreter, which opens on his first year at boarding school and closes shortly after his graduation. Like its predecessor, Dreams in a Time of War: A Childhood Memoir, the book is largely apolitical, and he avoids the trap of projecting his adult opinions onto his teenage self, an especially admirable accomplishment for a writer with such strong ideological commitments.

Instead Ngũgĩ focuses on ordinary schoolboy stuff like the personalities of the masters and head boys, the sermons at chapel, and how he made his name in the school debate club. In many ways In the House of the Interpreter has more in common with British prep-school reminiscences like Roald Dahl’s and George Orwell’s than with African memoirs like Wole Soyinka’s and Camara Laye’s. The only thing differentiating this book from its Etonian counterparts is the absence of tortures and privations—no savage beatings, no aggressively unpleasant living quarters, no conscription into semi-slavery at the hands of upperclassmen. The biggest complaint Ngũgĩ has is the growing suspicion that his native culture is being humiliated, that alienation from his family and his past is not just the effect of his education but its purpose. The examples he offers of this are fairly minor, but perhaps the whole point of the book is to demonstrate that these small humiliations can be harder to bear than being caned for flubbing your Latin declensions.

In an early chapter, Ngũgĩ tells of how his English teacher, on the first day of class, took the boys up to his bungalow and walked them through the basic elements of an Englishman’s home: the fixtures in the bathroom; the devices in the kitchen; the furniture in the parlor—specifically, which pieces were for sitting on and which not. The lesson ended with a grand tour of dinner-table etiquette, precepts which were not immediately embraced by his pupils. “How would we eat githeri, irio, and ugali with forks and knives?” Ngũgĩ writes. “The pleasures of eating ugali lay in touch and taste: dipping fingers into the smoking dish and letting it cool in your mouth.” Another student explains that their teacher “was talking about English food and English manners,” to which the teacher responds that, quite the opposite, “table manners had no race or color. Good manners, like cleanliness, were pathways to God.”

One can think of many good reasons why this teacher might have chosen to start the year with a lesson so condescending (though not as condescending as you might think; most of his pupils really hadn’t seen a toilet before). This was the most selective black secondary school in Kenya, and a feeder school for the most prestigious university in East Africa. He probably didn’t want these boys, when they became government ministers or international businessmen, to discomfit tablemates from England—or for that matter France, Japan, and Russia—by slurping their soup and lunging for the salt. To this argument a mature Ngũgĩ might respond that, as visitors, these foreigners should adapt themselves to African table manners. The young Ngũgĩ could only bristle at the implication that his ways were barbaric and his teacher’s somehow more civilized.

Most of these humiliating gestures had their root less in active hostility toward African culture than in simple failures of understanding, on both sides. The first debate club meeting Ngũgĩ attended tackled the motion “Should Germany’s colonial claims be accepted by Britain?”—a serious and challenging topic, no doubt, but one that indicates real tone-deafness in whoever selected it for a group of black students in the middle of the Mau Mau insurgency. (Though it does suggest that, far from infantilizing their students, European educators in Kenya were willing to cultivate their political awareness.)

The irony, Ngũgĩ says, was evident to them at the time—but then so was the irony of an anti-imperialist speech he himself gave in favor of the resolution “Western education has done more harm than good”: “I held a pencil in the air. All eyes were fixed on it. I told a story. A person comes to your house. He takes your land. In exchange he gives you a pencil. Is this fair exchange? I would rather he kept his pencil and I kept my land.” He then admits, “The contradiction was clear: all of us, for and against the motion, were at Alliance in pursuit of the Western education we had censured.”

The tenderness with which Ngũgĩ describes his schooldays suggests that he does not regret his education quite as much as his politics would oblige him to—he may be adamant that future Kenyans should never be forced to learn Shakespeare, but he is not really sorry that he was. He reserves his greatest affection for the headmaster, Edward Carey Francis, OBE, a man who “accepted Jesus as the center of his life” and instilled in his students an attitude of service. The title of the book comes from a story out of Pilgrim’s Progress that was the subject of one of Francis’ best sermons.

Ngũgĩ contrasts the “Franciscan” way with that of the Billy Graham crusaders, whom he joined for a short period: “Christianity for [Francis] was like a long-distance race, and he often talked of pacing oneself …. For him acts and conduct that proclaimed faith were more important than words that shouted belief.” Ngũgĩ first grew disillusioned with the evangelicals when he felt himself looked down on for not having a sufficiently shameful narrative of his pre-Christian immorality—they only respected you if you had been really debauched before, he felt, which was probably unfair to the crusaders—and he abandoned them completely when one of their star converts got a girl pregnant and then refused to marry her. The ethic of Carey Francis he never abandoned at all.

For those familiar with his passionate Marxism, it seems a little strange that Ngũgĩ should be so soft toward a man who, according to his philosophy, was an agent of oppression. Perhaps he has learned to see things from his old headmaster’s point of view; perhaps he has simply mellowed with age. Whatever it is that has inclined Ngũgĩ to view his experience of colonialism with warmth and some small measure of forgiveness, there is certainly a lesson in it.

Helen Rittelmeyer is a blogger for First Things. firstthings.com/blogs/helen-rittelmeyer

Copyright © 2013 by the author or Christianity Today/Books & Culture magazine.
