On Robin Williams, Tragedy, and Thumper’s Mommy’s Rule

Since the actor and comedian Robin Williams died two days ago, there have been a multitude of tributes aired on television networks and posted online. Mostly they extol his quick wit, his devastatingly satirical humor, and his dramatic presence onscreen. As of this writing, his death has been attributed to suicide resulting from depression, so others have used this opportunity to focus on that mental illness. Also, given that his death occurred during a time of violent conflict in the Middle East and heightened tensions with Russia, not to mention anticipation of an ideologically charged election a few months hence, less complimentary commentators have blown off Mr. Williams’ suicide as insignificant compared to larger events, or characterized it as cowardly, selfish, and particularly reprehensible considering his immense wealth and prestige. This latter vein of commentary is disturbing.

I understand the motivation to pay tribute to a popular figure. Through his movies and other public appearances, Mr. Williams has influenced a lot of people–chiefly by making them laugh. Many of his jokes and one-liners have entered into our common lexicon. People admired him, I guess, because his comedy uplifted their spirits. We sympathized with his confusedly righteous entertainer in Good Morning, Vietnam, we laughed at his comically entertaining everyman in Mrs. Doubtfire, and we drew wisdom from his portrayal of a counselor in Good Will Hunting. It’s no surprise that we should be shocked by his death, by his own hand, and apparently because of the omnipresent sadness, hurt, and anger of depression. The very nature of the event–popular and widely reported–gives us the opportunity to reflect on the role laughter, sadness, and death play in our own perception of our lives. I confess that his comedy seemed a little wacky to me, so I am (unfortunately) not as affected by his death as others. But why spit on those who do, in fact, grieve?

Demeaning his death, or the attention lavished on it, sends a clear message that any grief felt for it is worthless. That is manifestly not true. Grief is the product of tragedy; any event which shocks us and provokes us to contemplate our own mortality, even vicariously, is tragedy. Mr. Williams’ death is one of many which happen every day, and perhaps one of the least gruesome. Certainly he did not die due to indiscriminate rocket fire, or beheading for being something other than a Muslim. The fate of nation-states does not hang in the balance because of his suicide. But his death is no less tragic for its seeming lack of context. Christian doctrine, to which I subscribe, teaches that every person has inherent dignity because they are intimately created, loved, and valued by God, and therefore Mr. Williams’ death, even by his own hand, and even though he was rich and famous, is objectively a diminution of all of us–just as much as the death of a non-Christian in Iraq, or a Palestinian in Gaza, or a Ukrainian soldier. The loss of a life is certainly much worse than a disliked piece of legislation or an unfavorable election result. As to his depression, I’ll be the first to agree that there are more immediately threatening issues than depression before us–but the relative importance, for whatever reason, of other issues does not diminish the cause of eradicating or mitigating depression (or any other mental illness). I personally grieve for Mr. Williams, more so because I have known his contributions to our culture and laughed with him. That makes the tragedy of his death more present to me than the death of others, and so it has a greater impact on me. There’s no question that Mr. Williams’ death is a tragedy, and he–along with those who loved him, which include his family and his fans–deserves our pity and compassion by virtue of the humanity he shares with us.

The negative reactions to this event raise the question of why we sometimes disbelieve people when they tell us about themselves. I don’t mean when people boast, or curry sympathy, or otherwise seek attention–I mean when they tell us their experiences. Many people who suffer from depression have written about it, and psychologists and psychiatrists alike have documented the pattern of symptoms and outcomes belonging to this clearly defined mental illness. Apparently Mr. Williams suffered from it. It is ludicrous to contradict that diagnosis on the barest speculation, as some have done by pointing out that he was a comic, or that he was wealthy, or that he was influential. Those things, nice as they are to be, have no more bearing on mental illness than they do on cancer or the common cold. I won’t conjecture whether there’s a connection between comedians and depression, but I do question why some angrily reject that such mental illness can occur in certain people. Can’t they imagine anyone being depressed if they’re rich?

Whatever the reality, second-guessing the experience of others is odious. To use a well-documented issue as an example, some question whether homosexuals really experience same-sex attraction as part of their nature. Why wouldn’t we believe someone who says that about him- or herself? Unless we have a similar frame of reference–i.e. we’ve experienced same-sex attraction ourselves–then we literally cannot understand what that’s like, and cannot judge the truth or falsehood of it. Any glib, ideologically-aligned causes we propose for homosexuality are mere speculation. In rejecting that aspect of another person, we are essentially demeaning them and all who share that experience by denying them personal agency and self-knowledge. Similarly, if one does not suffer depression, then rejecting Mr. Williams’ mental illness, or that it could cause suicide, is demeaning to him and all who suffer the same disease. That’s especially true for the self-styled academics who comfortably theorize that suicide is a selfish act and (if they’re religious) a sin. While the experiences of those afflicted with depression attest to both a physical aspect (i.e. a physical defect in the brain, or the operation of the brain) and a mental/spiritual element, scientists and theologians both admit they are very far from understanding the human mind. Therefore commentary on whether Mr. Williams’ suicide was a poor choice or an inevitable result of the disease is only more speculation. On top of that, who among us could say he or she knew Mr. Williams’ conscience, which seems more the point? God alone knows that. And finally, anecdotal evidence about someone falsely claiming depression–or any other sort of identity–in order to get attention is absolutely not sufficient reason to lack compassion. Any number of people who play the martyr by claiming depression, or who whine about the pressures of a life of fame, do not diminish the real thing. The only credible sources about Mr. Williams’ depression are Mr. Williams himself and those who were close to him. It seems logical that we would trust them.

No doubt those who profess themselves offended by this suicide, or by all the attention spent on it, will respond to this post (if they read it) by asserting their right to believe and say whatever they want. I don’t contradict that right. For my part, I’m certainly aware that I’m a poor source for information: I have no first-hand knowledge of Mr. Williams, nor could I improve upon the tributes written about him by better writers than I. I only remind the participants in this discussion that Mr. Williams had humanity and therefore dignity, as do all those saddened by his death. For that alone he and they are worthy of consideration and compassion. So please remember the rule of Thumper’s Mommy in Disney’s Bambi: If you can’t say something nice, don’t say anything at all–and leave those who grieve Mr. Williams’ death and reflect on their own mortality in respectful peace.


Reflections on the proper age of marriage

In all the many relationship discussions I’ve had and/or observed, it seems that age is considered one of the biggest factors in deciding on a relationship, or judging its advisability–especially if the relationship is marriage. Whenever people talk about someone else’s marriage, they make the age of the two married people a central issue. Maybe the commentary is positive–they married at the right time. Maybe the commentary is negative–they married too young, or (increasingly) they were too set in their ways; the second of which is a way of saying they waited too long, or maybe that they got too old.

Regarding age as a critical ingredient in marriage success (or relationship compatibility) has always ‘stuck in my craw’ a bit. It feels like one of the many blithe assumptions that come easily to us when explaining our own superiority, like a conventional belief that relationships in the 1950s and 1960s were all loveless, patriarchal shells of a family with an absent and philandering father. All right, maybe I exaggerate a bit there. Certainly few believe that all 1950s relationships (or any historical relationships) were loveless. Yet I suspect that many of us feel just a little bit lucky that we don’t live in the bad old days of arranged marriages, commercial exchanges to accompany weddings, and 14-year-old brides. Despite all that, however, I just can’t believe that the majority of relationships were unhappy or stilted. People loved each other back then, too. I’ll be careful here: I’m not saying that the bad old marriage conventions should be revived; I’m proud to live in an age where spouses choose each other freely and where either can be the breadwinner or caretaker as their fancy (and economic realities) take them. Even if we’ve made improvements socially since then, it doesn’t follow that our forebears were unhappy. In fact, there are reasons to believe people might have been happier in those benighted old days of crusty tradition, sexual repression (or, depending on whom you ask, aggression) and male dominance. They worked shorter hours on average, and slept more, than we do–and our longer hours and shorter sleep cause us increased stress and health problems.

In any case, I’m unconvinced that we are better off socially in the 2010s than we were in the past. That’s the nice thing about the past, if you have a point to prove: it is easily molded into a structure fitting your preferred narrative. It’s easy to make a sweeping assertion that families were stronger back then, or that women were more repressed back then; both are true. Some things have improved; others have degraded. Comparisons are dangerous, because they are usually in the service of prejudices, like our particular prejudice about marrying young.

Statistics tell us that rural and/or less educated people marry younger than their urban, educated brethren; also, the average age of marriage has risen from those terrible (but healthier!) olden days. And I often hear (maybe I sense?) a high degree of self-congratulation about that fact, which is funny, because up until about the last 10 years, marriages were becoming steadily less successful, as indicated by a rising divorce rate. So we’re doing better because we’re marrying later, but our marriages are less successful? I’m not following. Certainly, some have argued that the rising divorce rate was a good thing, believing most marriages were unhappy because they were essentially coerced. Yet however the marriage went, a divorce is the breaking of a strong relationship that carried a lot of hope and promise, and so it seems likely that many (or most) divorces were bitter and painful. Maybe the practice of marrying older isn’t the social victory we think.

But wait! It would be ridiculous to go back to marrying upon graduation from high school. That is beyond doubt. Who is ready for something like that right after high school? I was certainly not ‘ready’ for marriage by the time I completed high school. If I’m honest with myself, I think it’s better to say that I was not even ‘capable’ of marriage. I was shockingly self-absorbed and my thoughts were consumed with a) whether I had the right friends and/or girlfriend, b) how I could get the most out of college (and we’re not just talking academically here), and c) agonizing about who I was. You know, the important things. Should I listen to Guster? Can I get away with just a T-shirt and jeans, because I think it’s so much more chill? Is it ok if I enjoy my classes, or should I make myself enjoy partying more? There was barely enough room in my life for myself, let alone a life partner.

Ridiculous indeed. I doubt anyone would argue that. But by historical standards I was pretty immature for my age. I was 17 years old, able to drive, and almost able to vote. Do you think I was ready to select the most powerful person in the world in an election? I can scarcely believe they let me vote, considering my mental state. But I was not alone. Nearly everyone I know was at a similar maturity level upon their high school graduation. We had been kids for a long time, whose only real responsibilities were…nothing. Homework? Please. Most of us found ways out of it. Summer jobs? Doesn’t count, really–we were usually only making money to finance our weekend plans. For our entire lives thus far, in classes and sports teams and music and school plays, we were totally isolated in a world made for kids. And we weren’t done yet: we still had college to attend. Most of us were big kids, intellectually adults but emotionally (and socially) very young.

By comparison, children who grew up in rural areas, or in a culture which emphasized community and family, such as the socially backwards past, were probably much more mature than we were at the same age. It’s more likely they were vital contributors to their families, either by helping out the breadwinner with his/her business, or caring for siblings, or doing serious chores (like home maintenance or farm work). They lived in smaller communities, and had more relationships with adults (friends’ parents, aunts and uncles, grandparents, neighbors, etc.). Those 18-year-olds occupied a much less stratified society, where they had to become adults socially by their mid-teens. By the age of high school graduation they were already active parts of a community, and certainly deserved the right to vote. More importantly, they could also be good partners in the community of a marriage.

There are great structural advantages to marrying at that tender age. Neurological studies have shown that the brain continues to develop into the mid-20s. More importantly, the cognitive functions of the brain usually finish development by about 16-18 (adulthood!), while the moral values and judgment functions of the brain develop afterward, finishing between the ages of 24 and 26. In fact, the reason teenagers believe they are invincible has been shown to be linked to the fact that their brains have not fully developed the capacity for judgment, which makes it harder for them to comprehend the risks they take. And while some, if not most, will argue that it’s irresponsible to marry when you haven’t even finished developing your personality, I will turn the argument on its head and suggest that the best foundation for a lasting relationship is to develop similar values together by shaping each other’s moral growth.

Biologically, people between the ages of 18 and 25 are at their most fertile. Males produce the most testosterone, and therefore the most sperm, at that age; females at the same age produce the most estrogen, and have the easiest time conceiving–which seems like a cruel joke, considering that we view that period in our lives as the most undesirable for marriage and starting a family. That age is for exploration, we say, it’s for discovering yourself! Partying! Traveling! There’s no doubt about it–all of those things are easy and fun when we’re in our early twenties. Do you remember how we thought nothing of going on little to no sleep, had no idea what a hangover was, and couldn’t understand the need to diet? We were beautiful, invincible, unstoppable; the world was our oyster. But as a parent in my 30s I will note wistfully that those physical advantages would be very helpful when dealing with children. When I’m chasing my toddler around, or when I have to get up to comfort the baby, I yearn for the energy I had in my 20s.

But of course it’s not a good idea to marry young these days. A college diploma (or at least a tech school certification) is more or less required to find work, and I can’t even imagine what college would be like as a newly-married person (and not just the social aspect; think about beginning a marriage with that kind of debt). But more practical–for the marriage part, anyway–is the fact that no high school graduate I’ve ever known is emotionally capable of marriage. The schooling process, along with popular media, has kept them from any sort of social or real responsibility and instilled in them the fervent, insidious belief that hedonism and wanton self-discovery are the essential components of a happy youth. Have fun! Enjoy college! Date many people! The expected result, of course, is that by a fun process of elimination these 20somethings will find the perfect job and partner, and settle down happily and much later.

These are generalizations, of course. And I am not trying to write a “kids these days” fist-shaking rant. It’s long been fashionable to blame society for these developments, as if society were some kind of entity with intentions on us. Unfortunately, however, society does not make us (or our kids) do things. It has no intentions or opinions. It just is. And it is made up of us. It is merely the institutions formed out of our cultural perspectives. We think it’s important for kids to be kids, so we have created institutions which keep our kids in school until they are 18, and take up their free time with sports and music and drama extracurriculars. We as a culture value self-discovery and self-actualization, so we institutionally establish these things in the emphasis on college, or the explosion of self-help books, or our worship of adventures and extreme sports. We value sexual actualization, too, so institutionally we accept more prevalent sexuality and eroticism in things like television, music, and advertisements. The effect of these cultural perspectives is not the fault of our institutions (schools, media, etc.) any more than it’s the fault of a piece of wood that it was made into a chair. We share cultural perspectives; society results.

But frankly we needed our 20s. From my own experience, marriage requires contribution and unselfishness. I’m pretty sure the majority of my peers (and I) did not possess those virtues sufficiently in our 20s to have successful marriages. We still had to learn to support ourselves in the ‘real world,’ to be a part of a work team, to rely on others. Until then, we had parents and teachers and college staff to back us up. We also had to learn by trial and error how to take care of another person, because the school pipeline insulated us somewhat from observing other successful marriages by keeping us in our own age groups. It’s certainly plausible that our parents and grandparents–or kids growing up in rural areas–learned all these things during their childhood, in more integrated social groups. But not today. Today, we have our 20s for that.

The fact remains that to be successful in a relationship, we must develop a certain maturity. So those who argue the doctrine of waiting for a bit, in order to mature, are wise. But maturity is not tied to a certain age. One may be less mature at 30 than some are at 18 (watch The Bachelor and see what I mean). And though everyone knows that maturity is only one piece of a great marriage–I’m not sure anyone has adequately explained the romantic longing, or fierce desire, or deep contentment with and for one other person that characterizes the love which leads to and sustains marriage without invoking Grace–I am concerned here with practicalities. Practically, marriages require partnership and respect. Maybe it would be nice to learn those things fully in our first 18 years, for we could happily and successfully marry then, deal with exhausting young children at the peak of our physical capabilities, and skip off to travel the world in our forties (which, in this day and age, practically constitute our healthiest decade of life!). Not an unpleasant prospect.

But our culture makes this near-impossible. So the point of all this rambling is: carry on. We all need a little growing before we marry (successfully). But from someone who has taken that step into marriage, I’ll tell you that it is much better than my early 20s. I’m glad I made it.

Aurora, Santa Barbara, and Waseca as an invitation to reflect

Last night my wife’s friend joined a news show panel on a big TV network, so of course we tuned in to “cheer her on” through the screen. The subject was John LaDue, the upper-middle-class, never-been-bullied, no-reason-to-ever-go-wrong, almost-perpetrator of yet another violent, tragic school shooting.

He, of course, is only the latest in a line of demographically similar young men who have, for reasons yet under debate, become violent. The Aurora shootings shocked us because the location and event seemed vaguely symbolic: a movie theater, at the premiere of a much-anticipated movie claiming to delve into the darkness of the human soul. The Santa Barbara killings angered us because the killer wrote elaborate fantasies about being violent, especially toward the women who unfairly denied him sex and the men who received it in his stead. John LaDue’s planned violence stands out because the police stopped it–and because his matter-of-fact assertion that he felt mentally ill, that he wanted to kill his peers and hold out until taken down by SWAT, is a chilling glimpse into psychopathy.

The talking heads of the panel were all very unsympathetic towards young Mr. LaDue. They talked about how he was “simply evil,” “beyond rehabilitation” and the like, while the host sagely agreed. They may be right, of course, though out of respect for certain legal protections on which the United States was founded, I hesitate on principle to presume what someone might do. But by and large I agree with them: Mr. LaDue ought to be charged with all the crimes associated with planning such a terrible deed (conspiracy to commit murder comes to mind).

It was interesting that they referred to previous, similar crimes–which actually took place–almost as aggravating circumstances, as if the fact that similar spree killings occurred in the recent past somehow made his planned attack worse. It might just have been a trick of phrasing; I’m fairly sure the commentators simply wanted to draw attention tangentially to this mystery of young men, from what we collectively consider to be “good” homes, who slowly and without concealment develop a rage and desire to kill, and then execute that desire despite a host of teachers, counselors, and peers who warned about them. I think it’s wonderful that the police caught Mr. LaDue, and if that was the result of a greater awareness of such crimes, then bravo to the talking heads. But the whole exercise in condemnation seemed to be dodging the main issue.

I suppose it’s natural to vent frustration on Mr. LaDue. He did, after all, plan to murder as many of his classmates as he could and (he hoped) some cops sent after him as well. And as a large portion of spree killers end up dead by their own hand, it’s satisfying to finally have someone to punish–especially if he is a better receptacle of our anger than James Eagan Holmes, the Aurora theater shooter, who presented convincingly as a complete psychopath, and who showed only amusement, and no remorse, during the court proceedings against him.

Yet I wonder how much of the anger directed at people like Mr. LaDue and Mr. Holmes is to assuage our own consciences. I wonder how much of the condemnation and indignation, however superficially righteous, serves to draw a distinction between us and them; to say in essence, “the spree killer is evil and I am not, therefore get him away from me into jail and then death.” Perhaps shock and anger sometimes mask the relief people feel that they know what is “bad” when they see these spree killers, and it is not them. Perhaps too much of the talk about such men–easy laments about the decline of our society, titillated surprise that the scions of upper-middle-class stability, satisfying outrage at expressions of psychopathy and misogyny–is disassociation.

This bears some discussion. After all, the young men in question grew up among us. They received the same stimuli from media and from our pervasive culture as we have, and they had all the material things they needed. Clutching our pearls and wondering in bemusement how such criminals and terrible crimes could occur is the easy way out, a safe way to avoid hard questions about our own behavior–or at least our participation in a social behavior–which may have (at least) set the stage for a spree killing. Worse is to use these events to forward a philosophical or socio-political agenda, like the opposing crusades of the NRA (which seems to want to arm all teachers) and those who advocate total gun control. It’s ludicrous to think that arming teachers or taking away all guns would somehow solve the problem. The problem isn’t the weapons or lack thereof, it’s that young men decide to spree kill and then do it. They can do it with sticks, steak knives, home-made explosives, or bows and arrows. The problem is that they do it, and it’s our problem because in important ways the perpetrators are similar to us.

At this point I’m sure many readers have rejected this train of thought. They angrily proclaim that bad people exist, and that bad people will always exist, and that there’s absolutely no similarity between the sickos that spree kill in schools and the rest of us law-abiding Americans. They may angrily point out that only young men have ever committed spree killings, and so it’s not a problem for women in our society. They may passionately argue that if nobody had access to guns, nobody would be able to kill so randomly. Or they may simply bristle at the suggestion that they are anything like the monsters that kill, and decide they don’t really want to discuss it any further. But if so, these readers are taking the easy way out. They are disassociating. They are saying that the problem of spree killing is not their problem, because spree killers are wholly alien. They would rather be right, ultimately, than make the sacrifice of compassion to see if there is any way such killers could be reduced.

That nearly every recent spree killer has come from the same demographic makes a mockery of coincidence. Nearly every spree killer has come from, and targeted, the influential middle class. Nearly every spree killer has evinced rage, most notably the Santa Barbara killer who (horrifyingly) seemed to actually believe that the mere fact of others having sexual relationships was a violation of his rights. And nearly every spree killer seems to want attention–they choose schools and movie theaters and prominent universities as their tableau, knowing that they will earn headlines and time on “The Situation Room” and endless panels of talking heads like the one I saw last night.

That, actually, may hold the key to the problem. Attention. Why do spree killers want attention? Attributing it to their generation, as many do, is doubtful–otherwise more entitled millennials (in full disclosure, I’m a millennial too) would turn to violence. No, I would guess that spree killers want attention for the same reason that normal people develop a need for attention: some kind of fundamental, developmental neglect.

Now before people break out the mocking tears and sneer about mommies and daddies not loving their children enough, consider: first, numerous studies have shown that young girls without a close relationship to their parents are statistically more likely to engage in promiscuity, drug use, and other risky behaviors; and second, studies into gang membership/affiliation (male and female) cite lack of dedicated parents as a prime causal factor. It’s not about whining on a daytime talk show; it has been studied and shown that neglected children have a higher propensity towards clinically anti-social behavior. And I have unfortunately met too many middle-class or wealthy parents who are more interested in the next vacation destination, or the new episodes of Mad Men, or in their own jobs, than in their children. Though it looks like stay-at-home-parenting is on the rise, the teenagers and young adults of today are perhaps the generation most commonly dumped into daycare so that parents could have satisfying careers and social lives.

Where all of this touches males, young men especially, is in a sort of generalized neglect. Wait, hear me out. I know that across the board, women make less than men for similar work. I know that there exists an insidious “motherhood” penalty in the workplace. I think that as the gap between the wealthy and the rest of us has grown, life across that gap on the wealthy side has preserved and protected the old male-dominated social architecture. But back here, in real life, important changes are taking place: compared to men, women collectively get better grades in school, participate in more extracurricular activities (including sports), attend college at higher rates, and in many cases are more readily hired. These are all very good things, and hopefully a harbinger of true equality in the workplace.

Other investigative journalism indicates, however, that laudable attempts to push women to higher social achievements have unintentionally marginalized men. “Socially acceptable” extracurriculars in high school have shrunk to a few high-profile sports in order to spend equally on women’s teams. Universities faced with a majority of female students have invested money in programs of study and student life infrastructure which cater specifically to women. Companies hoping to achieve a certain diversity actively pursue female employees. And I wonder if maybe developmental authority figures like teachers have become mostly female, and less interested (understandably) in focusing on traditionally male interests like war. None of this is to blame the system, but rather to suggest that the intersection of parental neglect and social neglect may be a place frighteningly devoid of normal social obstacles to psychopathy, narcissism, and spree killing.

Obviously not all neglected children turn to violence. And women almost never turn to violence, perhaps because they usually have less aggression due to lower testosterone (though there are exceptions, of course). But I think it no accident that most spree killers commit their deed(s) after puberty, and they all seem to be seeking attention and revenge. Attention, maybe because they never got it; revenge, likely against those who refused to pay attention to them (or suitable surrogates). And I also think it telling that spree killers are usually characterized as loners, and notably lack the comfort and restraint of a social group–a family or a team–to draw them towards good social relationships. Maybe they aren’t necessarily born loners, but possibly are made loners by their development. I wonder if the anger and hatred that many women sense, in catcalls (check out #NotJustHello on Twitter) and sexual dominance (#YesAllWomen), isn’t rooted in this cauldron of socially marginalized young men. And I wonder whether a parent, a mentor, a teacher, a friend who cared about [insert name of spree killer] might not have made the difference.

I don’t advocate sympathy for any spree killer. It is for the good of society that they be charged and punished to the full extent of the law. I also don’t advocate some kind of large-scale enterprise or campaign to remedy social wrongs. I suspect that by the time spree killers start exhibiting the signs (posting YouTube rants, rage-filled blogs, and so on) it’s too late for intervention and time for police involvement. But I invite us all to not wring our hands, spit out righteous rhetoric, and go about our daily business, comfortably believing these events have nothing to do with us. I invite us to take the hard road and try to see the killers with compassion, and hopefully to see a way that we can, in the future, make a difference.

Some thoughts on the words “Faith” and “Religion”

I recently saw an article that claimed Islam wasn’t a religion. There have been high-profile debates between religious leaders and scientists about which perspective contains more truth. There have even been debates within faith communities, as between Christian sects who acknowledge gay marriage, and those who don’t. It seems that somewhere in the diatribes we’ve collectively lost an understanding of what it means to have “faith” or how to define a “religion.”

Religion indeed seems a difficult thing to define. Christians, by and large, regard it as a free exercise of will to believe. No matter where you come from, if you believe what’s written in the Gospels regarding Jesus Christ, you are a Christian. Certain more conservative groupings, however, treat Christianity as a sort of ‘social contract,’ binding those within the group to act and value certain things. My extremely limited experience with Judaism indicates that certain conservative Jews have exclusionary beliefs about their religion–namely, that it accrues only to the children of Jewish mothers. Less conservative Jewish sects appear to regard Judaism as more of an ethnic identity than a belief system, happily accepting agnosticism or downright atheism among their peers as long as the overarching identity remains.

If my understanding of Judaism is “extremely limited,” then my understanding of Islam is not even worth mentioning. The so-called “fundamentalists” (a charged word, in that it implies that the fundamental tenets of a religion are bad, instead of perhaps a tangential tenet of the religion) treat Islam as a socio-political system, in which laws protecting the status quo are given legitimacy by (it is believed) divine approbation. The status quo in many Islamic countries in the Middle East is, at least regarding the dignity and attendant rights of women and children, oppressive and even barbaric in light of our liberal ideals. Opposition to that system strikes me as more akin to opposing Communism or Fascism insofar as it’s a political system. Islam in that sense is very different from Christianity and Judaism, and rightly condemned.

In the sense of religion, on the other hand, the issue is murky precisely because we use the word “religion” to describe different things. There are Muslims who practice Islam as a free exercise of the will to believe in Allah and the teachings of the Koran. I’ve never read the Koran, so I don’t know if it is filled to the brim with hateful writings, loving writings, or (as is the case with the Jewish and Christian scriptures) a mixture of both. There are other Muslims who probably practice Islam as a social contract, a way of distinguishing their group from others. But using religion to describe the entire practice of Islam, Judaism, or Christianity confuses things, and probably lets unlawful behavior proceed under the First Amendment while simultaneously restricting legitimate religious practice.

By and large, the test for “freedom of religion” ought to be simple. If a behavior is lawful in a non-religious context, then it should be permitted as a religious practice. If I may display statues on my lawn, then I may display a Nativity scene at Christmas. If I may wear as much clothing as I’d like, as long as I’m not indecent, then I may wear a hijab or burqa. As a side note, certain Middle Eastern Christian sects (some of which subordinate themselves to either the Pope or the Patriarch) and Jewish sects direct that female adherents wear hijabs. If assaulting someone is illegal, then I should not be able to stone or otherwise injure a person for engaging in lawful sexual behavior. It’s more difficult when trying to decide whether a person should be forced into religious participation, even tangentially. But that sort of question is why we have legislatures and courts.

The word “faith” seems misused as well. The dictionary defines faith as, “1) confidence or trust in a person or thing; 2) belief that is not based on proof; 3) belief in God or in the doctrines or teachings of religion; 4) belief in anything, as a code of ethics, standards, or merit.” I think the first definition hits closest to the mark on the intent of the word. A religious person, you might say, has confidence and trust in the tenets of his/her religion. The thing is, that attitude seems to apply to a lot of non-religious people too.

There are many voices trying to put faith and/or religion in the same category as ignorance and barbarism. That saddens me because I happen to be religious, of course, but it also strikes me as disingenuous and dishonest. As a Catholic I believe that Jesus Christ was the Son of God, and that He emptied Himself to become like us and share in our struggles on this earth, and that as He was killed He offered Himself as reparation for all our sins (past, present, and future), and that His offer was worthy because of His own perfection, and so I believe that if I follow Him I will be free of this earth and with Him in paradise. In analyzing that long narrative sentence it is immediately obvious that I could offer no empirical evidence of this. Even if I had a time machine and could record video of Jesus becoming incarnate in the womb of the Virgin Mary, then record all of His miracles, then record His crucifixion and leave the camera in the tomb recording the moment of His resurrection, there is still no way to see and record the thoughts of God, nor attach the camera to Jesus during His ascension into heaven and remotely view the video. My senses are unable to even gather that ‘behind the scenes’ evidence, even if I could prove by two chemical tests on controlled samples of water (for example) that it turned into wine. Therefore I must either have confidence that the narrative is true, or not.

This is not all that different, say, than belief in the Theory of Evolution. Nobody has a time machine that would enable them to bring back irrefutable evidence of evolution, perhaps by filming the birth and maturation of the first Cro-Magnon person with two Neanderthal parents (complete with genetic testing to compare to the remains of both species already cataloged). All we do have is snapshots of evidence, which we believe to be of a certain age, based on the belief that we can tell the age by extrapolating chemical deterioration, which only a few of us have ever observed with our eyes in a microscope (and I’m not sure it’s even possible to observe radioactive decay). There is a narrative suggested by these snapshots of evidence–the oldest remains being more ape-like, the newer ones more human-like–but it is the invention of scientists and authors. Therefore I must either have confidence that the narrative is true, or not.

We’ve so far ignored the question of the chicken or the egg. Certain scientists, for example, claim that emotion is merely the work of certain hormones in a human brain. Feelings of arousal are due to the release of sex hormones, which (it is theorized) are triggered when the subject is presented with a set of conditions, like a procreatively attractive human of the gender the subject finds attractive. Feelings of affection are due to the hormone oxytocin, which is triggered in certain situations as a hardwired social response, which our genes have developed to increase our rate of survival by causing us to work together. But that is a hypothesis. It is plausible, too. But it is also unprovable. It’s equally plausible (and possible) that such hormone activity is the result of emotions–the mechanism or vehicle by which feelings manifest themselves physically (as arousal or tears). None of us can go inside our brains to determine the causal order: whether the emotion comes first, or whether the hormones are released first. Therefore I must have confidence that either one narrative is true, or the other.

The scientist Neil deGrasse Tyson famously noted, “the good thing about science is that it’s true whether you believe in it or not.” With respect, I beg to differ. There were a great many scientists who believed in Eugenics between 1880 and 1945 (including Margaret Sanger) along with luminaries like H.G. Wells, Theodore Roosevelt, and George Bernard Shaw. Eugenic research was funded by the Carnegie Foundation and the Rockefeller Foundation.* By “believe in Eugenics,” I mean its proponents believed that there was a genetic cause which disposed certain people toward poverty, retardation, sexual deviance (i.e. homosexuality), and antisocial behavior. Science was not true in that case, and we shouldn’t be so quick to conveniently compartmentalize that into “the funny old days when we had silly theories” and “the evil things Nazis did, from which we saved the world.” Science is only as true as the ethics and character of the people who do it, much like religion. One commonality between the two ‘sides’ is that authority figures in both realms–scientists and priests–are only human, and subject to the same propensity to self-deceive and enjoy attention as the worst Hollywood celebrities or politicians.

Ultimately, faith comes down to what inspires confidence. My experience has taught me confidence both in the religious salvation narrative and in the scientific narrative of the world. As another author pointed out, there is not much difference between the big bang theory and the Christian explanation that God said, “Let there be light.” In both cases, our fantastically complicated universe exploded into something without warning or apparent material cause. What does it matter whether one believes it happened randomly or at the will of an entity too big to imagine?

Understanding and meaningful engagement with others demands a certain rigor of thought. Proponents of rational explanations fall into hypocrisy when they succumb to the “blind faith” that others who disagree with their perspective are somehow less important because they are “religious,” and proponents of religious-faith-based explanations fall into hypocrisy when they fail to acknowledge the faith that rationalists have in science-based narratives. It might advance both sides of this odd little culture struggle if we all recognized our own “religious” and “faith” tendencies, including those of us who have no affinity for, or are actively opposed to, established religion.

Memorial Day Remembrance, 2014

I wrote this speech to deliver to the Village of Kohler, Wisconsin, as part of their 2014 Memorial Day parade and ceremony.

Memorial Day is dear to Americans because it isn’t about us. Simply put, if we are here to celebrate it, then it isn’t about us — because we are alive to remember. It honors the achievement and sacrifice of our countrymen and women whose service required their very life.

As a Marine, the stories of my forebears who gave their lives in service are legendary to me. Nearly any Marine can tell you the story of Lieutenant Bobo. Quoting from his Medal of Honor citation: “When an exploding enemy mortar round severed Second Lieutenant Bobo’s right leg below the knee, he refused to be evacuated and insisted upon being placed in a firing position to cover the movement of the command group to a better location. With a web belt around his leg serving as a tourniquet and with his leg jammed into the dirt to curtail the bleeding, he remained in this position and delivered devastating fire into the ranks of the enemy attempting to overrun the Marines.” That occurred in Viet Nam in 1967.

A more recent example is Corporal Dunham. His Medal of Honor citation relates, “…[A]n insurgent leaped out and attacked Corporal Dunham. Corporal Dunham wrestled the insurgent to the ground and in the ensuing struggle saw the insurgent release a grenade. Corporal Dunham immediately alerted his fellow Marines to the threat. Aware of the imminent danger and without hesitation, Corporal Dunham covered the grenade with his helmet and body, bearing the brunt of the explosion and shielding his Marines from the blast.” This occurred in Iraq in 2004.

These young Marines, and their sacrifice, live on in the institutional memory of the service. I first encountered Lieutenant Bobo’s name in 2003, when I underwent Officer Candidate School in Quantico, Virginia. It was the name of our Chow Hall, a place of great importance to us candidates, and our Drill Instructors never wasted an opportunity to tell us the story of the hall’s namesake (usually as part of a larger diatribe regarding our worthlessness and general incapacity to become Marines. Ah, the sweet nurturing environment of Basic Training!). Enlisted Marines also learn about Lieutenant Bobo in their Boot Camp. I know that in time, buildings and roads on bases throughout the Marine Corps will bear the name of Corporal Dunham, and newer generations of Marines will learn about — and be inspired by — his heroic deeds as well.

These two stories from different wars show us that the decision to give what President Lincoln called “the last full measure of devotion” at Gettysburg (arguably the first Memorial Day celebrated by this nation) is not made in the moment of stress. Lieutenant Bobo would not have had the fortitude to resist evacuation and direct the fight after losing his leg unless he had already decided, in some deep unconscious center of his soul, that he would give his all for his country. Corporal Dunham could not have jumped on that grenade “without hesitation” and within the five-second fuse of such weapons, had he not already chosen — in the months and years of training and operations prior to that moment — that the success and integrity of his mission and his team were more important than his own life.

This day is set aside to celebrate our nation’s fallen, but not only their final heroic deed of service. It celebrates also their lives, for each of them had the character and courage to dedicate themselves wholly to the rest of us long before we collectively asked them to sacrifice themselves. They represent the best of these United States, the ones who have made our existence and prosperity possible: the Minutemen who faced British cannon and muskets in 1775; the 2nd, 6th, and 7th Wisconsin Volunteer Regiments who as part of the famed Iron Brigade defended the high ground west of Gettysburg on the first day of that battle, enabling the rest of the Union Army to emplace and finally score a victory which led to the preservation of our nation whole; the Soldiers and Marines who faced the unprecedented peril of amphibious landings at Normandy and throughout the Pacific; the heroes of Viet Nam and recent conflicts in the Middle East.

Today I remember the Marines I knew personally who died in service. Some, like Lieutenant Blue, died in Battle. He was an outstanding officer who routinely aced physical and tactical tests at The Basic School where we were classmates. He was also known as a “good dude” (in our lingo), which meant he was the kind of guy who would give up weekends to help his fellow students master testable skills, like marksmanship and compass navigation. He already had what the rest of us recent college graduates were struggling to develop: outstanding character. In training, he had all the talent and drive to graduate as the number one student, but chose instead to use his gifts to help his fellow students (and even so he graduated in the top 10% of our class). Our success was more important to him than his own. If anyone understood the importance of character and service at the tender age of 25, when he was killed by a roadside bomb in Iraq (2007), it was Lieutenant Blue. Word of his death spread quickly among his classmates, even to those like me who had limited interaction with him during our short time in school together. I believe he was the first of our class to die in the conflict, and he proved the old adage “the good die young.”

I also remember Marines who died in Training. A fellow fighter jock of mine, Reid Nannen, died this year [2014] when his F/A-18 Hornet crashed into the mountains of Nevada, where he was training at the Navy Fighter Weapons School (otherwise known as “Top Gun”). His callsign, or nickname, was “Eyore” because he was always comically pessimistic, but beneath that pessimism lay a solemn, unwavering dedication to the craft of aerial combat and aviation ground support, which had earned him the rare and coveted spot at Top Gun in the first place. He was also known for his dedication to his family, and was survived by his pregnant wife and three children. Though he died in training rather than combat, it’s easy to forget that our service members assume serious risk, beyond what most non-military folks ever encounter, just in preparing for war. And it’s important to note that his family served our country in a way as well, suffering his absence when the country needed him to get ready for war as well as execute it, as he did in Afghanistan, and suffering his loss in the deepest way. Memorial Day is for them, too.

We celebrate the men and women who have died for us because we recognize that the highest and best use of freedom is in the service of others. Some wars we fought to carve out and preserve a spot of freedom on the earth to call home, these United States, and some wars we fought to bring freedom to others. But the men and women who died in our wars swore their lives to protect that freedom, firstly for us, but also for others less fortunate. I ask you all, as I would ask any of our countrymen, to enjoy this day as Americans — enjoy our freedom, our happiness, and our prosperity at the dawn of summer. Enjoy barbecues, enjoy some pick-up basketball games, and enjoy this time with your families. Enjoying our blessings is how I believe fallen service members want us to remember them.

But while enjoying this Memorial Day holiday, I will also honor the fallen with a quiet personal toast of my beer. I invite all of you to do the same.

On the Harry Potter Books

Whenever I ask one of my peers if they have read the Harry Potter books, I often hear derision in the response. “That’s not my kind of book,” they say, or else: “I think all the attention is silly;” “It’s stupid that people are so obsessed about it;” and “I’m not really into fantasy or children’s books.” To distort things further, many Harry Potter apologists defend the series by proclaiming how “dark” the later books became, as if an element of darkness in the story suddenly makes it better or more worthwhile. The high public visibility of the series has reduced its effect to trends–the fantasy trend, the popularity trend (viewed as good or bad), the “dark storytelling” trend, and so on. This is a disservice to J.K. Rowling’s fine books.

(The following discussion contains “spoilers.”)

The story of Harry Potter, told in seven separate books, is essentially a fairy tale. Like most fairy tales, there is a wondrous or magical element to the setting, though (also like most fairy tales) the setting is also familiar to us readers. The story isn’t particularly dark, either: it can be frightening and sad, but the goodness of the main characters is evident throughout as they struggle to do the right thing in each book (and mostly succeed). There is no question that their situation becomes more dire from book to book, as the evil they’re fighting gets stronger in proportion to their increasing maturity. But rather than “dark,” the books become more adult in theme and content–though never “adult” in a sexual or pornographic sense. The much-talked-about deaths of several “main characters” along the way merely add a dimension of tragedy, a reminder that Harry and his friends are struggling against forces that are in fact very dangerous and cruel.

The series is a tour de force as an extended novel and a bildungsroman. J.K. Rowling did a great job of tying up loose ends from the main plot and all the sub-plots. Her theme of Love comes to full fruition in the final book with Harry’s willingness to die for his loved ones (much like his mother’s similar willingness seventeen years before–which is arguably the causal factor of the entire storyline). Though it was terribly sad that characters like Tonks and Lupin and Dobby and Fred died–I was especially stricken when Colin Creevey, the youngster who irritatingly worshipped Harry beginning in The Chamber of Secrets, died after staying to fight, even though he was underage–the tragedy was balanced by the redemption of certain other characters, such as Severus Snape.

In a literal sense, the series ended as all fairy tales do with a “happily ever after.” To be sure, I wanted more detail about Harry and Ginny, Ron and Hermione, and their children…but really, just knowing that they were still friends, still happy together, and moving on in their lives was enough to satisfy me that the hurts of Voldemort had been healed. And that was the point, wasn’t it? Harry was always just looking to be normal and happy.

Morally, the stories are very clear. The children–especially the three main characters–constantly try to do right in the face of obstacles, which take the form of temptations to selfishness, direct threats on their lives, and the cruelty of other students. Though their efforts seem at times pointless or futile, in each book they (to some degree) succeed. More importantly, Rowling avoids the literary cliche of having a “chosen one” by explaining very carefully at the end of the sixth book that it is Harry’s choice to face up to Voldemort, rather than some pre-ordained destiny. The choice is forced on him (rather unfortunately) by Voldemort’s own obsession and misunderstanding–it is the effect of an evil person rather than a supernatural event that drives events in Harry’s life. It’s worth noting that for his part, Harry consistently chooses on his own to face the enemy. He never, in the end, avoids the confrontation or gives up. The vehicle of success, redemption, and goodness throughout the series lies not in magic or destiny but in moral choices.

I mentioned earlier that the theme of the series was Love. It runs through all these larger plot points and events as the source of all good relationships, and it drives the events because of the extraordinary friendship Harry shares with his friends. Their continual success against Voldemort is directly attributable to their combined efforts, which spring from their care for one another rather than merely shared purpose. And, in the end, it is only Harry’s Loving decision to give his life for his friends that enables him to finally defeat Voldemort. Literally and figuratively, Love conquers the death, fear, and despair that Harry and his friends must face throughout the series.

Rowling also deals heavily in the theme of redemption, which surfaces quietly in the early books–think how Sirius redeems his dark, evil family through his service and friendship to Harry–and becomes inescapable in the last. With the exception of Voldemort himself (and his particularly evil henchmen), every “bad” character to some measure redeems himself–Malfoy, a bully with a particular hatred of Harry, has the grace in the end to turn his back (however halfheartedly) on Voldemort, and to quietly allow Harry to save his life. Percy Weasley, who disowned his family to serve his own ambition, apologizes and returns to their side in the final battle. We learn that Dumbledore, perhaps the most staunchly good character of the entire series, was in fact tempted by Dark Magic early in his life, though he obviously repented early enough to discover Voldemort and set up his demise. But it is in Professor Snape’s story that we see the most redemption: the touching and powerful tale of a man who loved Lily so much that he could protect and aid her son even though that son looked like his father, the man Snape (perhaps) hated most in the world.

Although not revealed until later in the series, it becomes clear from Snape’s interactions with Dumbledore (seen in the memories he gave Harry immediately prior to his death) that much of his cruelty at Hogwarts was an act to lend verisimilitude to his allegiance with the Death Eaters–he shows his real colors when he rebukes a portrait in the headmaster’s office for using the equivalent of a racist epithet: “don’t use that word [“mudblood”]!” Also, he aids Harry throughout the series: early on by attempting to foil the curse of an unseen enemy during a Quidditch game; then by trying to teach Harry the difficult art of Occlumency; and especially in the last book by sending his Patronus to the wood to lead Harry to Gryffindor’s sword. Snape was a bitter, lonely young man, desperate to fit in and be liked, a tremendously competent wizard, and for all of these reasons sorely tempted by Dark Magic–a near perfect prospect for the Death Eaters. Yet he was redeemed by his love of Harry’s mother to the point of fighting thanklessly throughout the entire series to protect her son and defeat Voldemort.

The Harry Potter books are ultimately ennobling. They teach, entertainingly, that doing the right thing, sticking together with friends, and confronting evil when necessary will lead to a “happily ever after.” But the stories are much richer than a tale of good triumphing over evil. Each one is a cleverly constructed mystery novel, wherein the mundane details of the characters’ lives (which are intriguing because we come to love the characters so much) conceal vital clues to the overarching problem of the novel, and many characters are not what they seem–an example of this is the case of Sirius Black in The Prisoner of Azkaban. Each book by itself is also a bildungsroman, like (as already mentioned) the entire series, in which Harry, Ron, and Hermione (and to a lesser extent Neville, Ginny, and Luna) grow up and become more complete persons. Indeed, part of their attraction to us as characters is their endearing and familiar adolescent struggle to like themselves, to gain friends, to fit in, and to succeed. Finally, by embedding the magical world in our own, Rowling has also added witty, amusing, and sometimes devastating satire.

Great literature addresses the great questions of humanity, such as why we exist, what we should do, and how we can be happy. Rowling has offered a compelling answer to these questions through the Harry Potter books. Along the way, she has crafted seven exciting stories that are introspective, funny, tragic, affirming, and ennobling. Her books, though perhaps not as profound, yet stand comparison to The Chronicles of Narnia and The Lord of the Rings. They are a valuable addition to the canon of English books, and they deserve better than a reduction to “children’s literature,” “young adult literature,” “fantasy literature,” “popular literature,” or any other kind of sub-category. They are simply Literature (with a capital “L”).

The Power of Imagination

On a recent flight from Washington, DC to San Diego I had the fortune of watching the movie Bridge to Terabithia. It didn’t seem like fortune at the time, though. I was frankly disappointed that a more exciting movie wasn’t playing. Crammed into a small airline seat, forced to sit still for four hours or so, I wanted to watch something with action and drama and even romance, not some fantasy movie for children. But I didn’t feel like reading, so I plugged in my headphones and decided to give it a chance.

As a part of the programming, the airline included a review before the movie actually started. Interestingly, one reviewer remarked that he had the same misgivings as I did about the movie before he saw it, but ended up pleasantly surprised. He also mentioned that there were some significant dramatic themes, including a death. At the time, his comments didn’t really make me more excited to watch the movie myself, but I remembered them later.

I will probably spoil the movie for those of you who haven’t watched it, so if you really want to see it, move on to the next post. The main character, Jesse, is a young boy with four sisters. His family is struggling to get by, and his parents have too much on their minds to pay much attention to him. The movie is chiefly about his pre-adolescent struggles, and how he learns to deal with difficulties in life, which for him take the form of school bullies, a demanding and rigid father, and an annoying little sister (she actually loves him very much and because of that seems clingy to him).

The movie begins on the first day of sixth grade. Jesse has been practicing all summer so he can be the fastest kid in his class in the opening field day race. Unfortunately, his mother insists that he wear his sister’s hand-me-down pink sneakers, for which classmates will tease him–but that’s pretty normal for Jesse, since his family can’t afford to buy many new things. During the race, he beats everybody in his class except a new girl named Leslie. What makes it worse for him is that she seems extremely interested in being his friend. He puts her off initially, partially bitter from the race he lost but also partially because she is new and different. Like him, she is an outcast–and the two eventually become friends.

Leslie discovers that Jesse has a passion for drawing. He keeps a notebook filled with drawings of imaginary creatures and events, though he is very shy about it. And with her encouragement, they begin to visit the woods behind their houses every day after school, developing in their imagination a fantasy world which they protect and rule. It is Leslie who instigates the imaginative part; Jesse is at first skeptical, reluctant, and derisive. Their world, Terabithia, is sophisticated: the children bring in the problems they face at school and at home and re-create them as evils threatening Terabithia, and likewise project their roles as king and queen into their everyday lives, teaming up to get the better of bullies and help other students. Terabithia is a visible manifestation of the children’s friendship and a means by which they can romanticize their sufferings and make them meaningful. And while their sufferings might be considered trivial compared with adult problems, as children at the very beginning of puberty they feel disappointment, regret, and frustration with all the clarity of innocence. The movie clearly presents the children’s problems as a microcosm of our (the viewers’) own.

Later in the movie, Jesse is invited to visit an art gallery with a teacher on whom he has a crush. For that reason, he doesn’t invite Leslie. When Jesse returns from the gallery he finds his parents sick with worry and senses something wrong: they tell him that Leslie went to the woods (Terabithia) by herself, and when crossing a swollen stream fell in and drowned. It is a terrible scene. Jesse can’t believe it at first and runs to her house, only to find ambulances and police cars and sympathizers already in attendance–including the teacher he was with that day. Wracked with guilt, he tells the teacher that they should have invited Leslie.

At this point in the story, I realized I had read the book before. It had been when I was very young, and I chiefly remember that I cried. It is a tragic story: Jesse senselessly loses his best friend; he blames himself because he didn’t invite her to the museum; he watches the exciting world they dreamed together and its positive effect on real life come crashing down about his ears. It was perhaps my first real encounter with grief. I sympathized with Jesse through the medium of the story–when he loses his temper with his little sister and pushes her to the ground, when he runs from his father into the woods, when he gets into a fight in school. I felt keenly the unlooked-for compassion of his parents (who struggle to tell him that Leslie’s death wasn’t his fault) and his teachers, who tell him how lucky he was to have befriended Leslie and how special they thought she was. And, most importantly, I discovered that the gifts of others — in Jesse’s case, Terabithia — are the things that preserve their memory the best.

Bridge to Terabithia is far more than a simple children’s story. It is about dealing with suffering, death, and maturity. It reminds us that holding ideals with childlike clarity and abandon is worthwhile. Jesse’s father, careworn as he is, recognizes this: “That girl gave you something very special, and to remember that is to keep her alive,” he tells Jesse. The value of a story like this recalls the great literary value of other “children’s stories,” such as the fables of Aesop and Hans Christian Andersen and the Chronicles of Narnia, which (like Bridge to Terabithia) continue to be passed over by a more “mature” reading audience who decide they have no time for children’s tales. And yet if joy, contentment, sorrow, pain, and frustration are the simplest emotions we feel, surely we feel them most keenly in our simplest frame of mind – when we are as little children. Even Jesus taught that we each should believe in Him and his message “as a little child.”

For if Terabithia is an ideal, so is the promise of Christianity. They are both firmly in the province of faith. And they both provide meaning to our daily trials and a goal to work toward in our daily labor. Keeping such an ideal before us despite the intrusion of dull or demanding tasks and obligations requires imagination–more specifically, it requires a simple, uncomplicated, unfettered imagination. It requires the imagination of a child. Such an imagination is the means by which we discern hope even amid our current suffering. That is what Bridge to Terabithia celebrates.