On Robin Williams, Tragedy, and Thumper’s Mommy’s Rule

Since the actor and comedian Robin Williams died two days ago, there have been a multitude of tributes aired on television networks and posted online. Mostly they extol his quick wit, his devastatingly satirical humor, and his dramatic presence onscreen. As of this writing, his death has been attributed to suicide resulting from depression, so others have used this opportunity to focus on that mental illness. Also, given that his death occurred during a time of violent conflict in the Middle East and heightened tensions with Russia, not to mention anticipation of an ideologically charged election a few months hence, other, less complimentary commentators have dismissed Mr. Williams’ suicide as insignificant compared to larger events, or characterized it as cowardly, selfish, and particularly reprehensible considering his immense wealth and prestige. This latter vein of commentary is disturbing.

I understand the motivation to pay tribute to a popular figure. Through his movies and other public appearances, Mr. Williams influenced a lot of people–chiefly by making them laugh. Many of his jokes and one-liners have entered into our common lexicon. People admired him, I guess, because his comedy uplifted their spirits. We sympathized with his confusedly righteous entertainer in Good Morning, Vietnam, we laughed at his comically entertaining everyman in Mrs. Doubtfire, and we drew wisdom from his portrayal of a counselor in Good Will Hunting. It’s no surprise that we should be shocked by his death, by his own hand, and apparently because of the omnipresent sadness, hurt, and anger of depression. The very nature of the event–popular and widely reported–gives us the opportunity to reflect on the role laughter, sadness, and death play in our own perception of our lives. I confess that his comedy seemed a little wacky to me, so I am (unfortunately) not as affected by his death as others. But why spit on those who do, in fact, grieve?

Demeaning his death, or the attention lavished on it, sends a clear message that any grief felt for it is worthless. That is manifestly not true. Grief is the product of tragedy; any event which shocks us and provokes us to contemplate our own mortality, even vicariously, is tragedy. Mr. Williams’ death is one of many which happen every day, and perhaps one of the least gruesome. Certainly he did not die due to indiscriminate rocket fire, or beheading for being something other than a Muslim. The fate of nation-states does not hang in the balance because of his suicide. But his death is no less tragic for its seeming lack of context. Christian doctrine, to which I subscribe, teaches that every person has inherent dignity because they are intimately created, loved, and valued by God; therefore Mr. Williams’ death, even by his own hand, and even though he was rich and famous, is objectively a diminution of all of us–just as much as the death of a non-Christian in Iraq, or a Palestinian in Gaza, or a Ukrainian soldier. The loss of a life is certainly much worse than a disliked piece of legislation or an unfavorable election result. As to his depression, I’ll be the first to agree that there are more immediately threatening issues than depression before us–but the relative importance, for whatever reason, of other issues does not diminish the cause of eradicating or mitigating depression (or any other mental illness). I personally grieve for Mr. Williams, the more so because I have known his contributions to our culture and laughed with him. That makes the tragedy of his death more present to me than the deaths of others, and so it has a greater impact on me. There’s no question that Mr. Williams’ death is a tragedy, and he–along with those who loved him, which include his family and his fans–deserves our pity and compassion by virtue of the humanity he shares with us.

The negative reactions to this event raise the question of why we sometimes disbelieve people when they tell us about themselves. I don’t mean when people boast, or curry sympathy, or otherwise seek attention–I mean when they tell us their experiences. Many people who suffer from depression have written about it, and psychologists and psychiatrists alike have documented a pattern of symptoms and outcomes arising from this clearly defined mental illness. Apparently Mr. Williams suffered from it. It is ludicrous to contradict that diagnosis on the barest speculation, as some have done by pointing out that he was a comic, or that he was wealthy, or that he was influential. Those things, nice as they are, have no more bearing on mental illness than they do on cancer or the common cold. I won’t conjecture whether there’s a connection between comedians and depression, but I do question why some angrily reject that such mental illness can occur in certain people. Can’t they imagine anyone being depressed if they’re rich?

Whatever the reality, second-guessing the experience of others is odious. To use a well-documented issue as an example, some question whether homosexuals really experience same-sex attraction as part of their nature. Why wouldn’t we believe someone who says that about him- or herself? Unless we have a similar frame of reference–i.e. we’ve experienced same-sex attraction ourselves–then we literally cannot understand what that’s like, and cannot judge the truth or falsehood of it. Any glib, ideologically aligned causes we propose for homosexuality are mere speculation. In rejecting that aspect of another person, we are essentially demeaning them and all who share that experience by denying them personal agency and self-knowledge. Similarly, if one does not suffer depression, then rejecting Mr. Williams’ mental illness, or denying that it could cause suicide, is demeaning to him and all who suffer the same disease. That’s especially true for the self-styled academics who comfortably theorize that suicide is a selfish act and (if they’re religious) a sin. While the experiences of those afflicted with depression attest to both a physical aspect (i.e. a physical defect in the brain, or in the operation of the brain) and a mental/spiritual element, scientists and theologians both admit they are very far from understanding the human mind. Therefore commentary on whether Mr. Williams’ suicide was a poor choice or an inevitable result of the disease is only more speculation. On top of that, who among us could say he or she knew Mr. Williams’ conscience, which seems more the point? God alone knows that. And finally, anecdotal evidence about someone falsely claiming depression–or any other sort of identity–in order to get attention is absolutely not sufficient reason to lack compassion. Any number of people who play the martyr by claiming depression, or who whine about the pressures of a life of fame, do not diminish the real thing. The only creditable sources about Mr. Williams’ depression are Mr. Williams himself and those who were close to him. It seems logical that we would trust them.

No doubt those who profess themselves offended by this suicide, or by all the attention spent on it, will respond to this post (if they read it) by asserting their right to believe and say whatever they want. I don’t contradict that right. For my part, I’m certainly aware that I’m a poor source of information: I have no first-hand knowledge of Mr. Williams, nor could I improve upon the tributes written about him by better writers than I. I only remind the participants in this discussion that Mr. Williams had humanity and therefore dignity, as do all those saddened by his death. For that alone he and they are worthy of consideration and compassion. So please remember the rule of Thumper’s Mommy in Disney’s Bambi: If you can’t say something nice, don’t say anything at all–and leave those who grieve Mr. Williams’ death and reflect on their own mortality in respectful peace.

Reflections on the proper age of marriage

In all the many relationship discussions I’ve had and/or observed, it seems that age is considered one of the biggest factors in the decision to enter a relationship, or its advisability–especially if the relationship is marriage. Whenever people talk about someone else’s marriage, they make the age of the two married people a central issue. Maybe the commentary is positive–they married at the right time. Maybe the commentary is negative–they married too young, or (increasingly) they were too set in their ways, the second of which is a way of saying they waited too long, or maybe that they got too old.

Regarding age as a critical ingredient in marriage success (or relationship compatibility) has always ‘stuck in my craw’ a bit. It feels like one of the many blithe assumptions that come easily to us when explaining our own superiority, like a conventional belief that relationships in the 1950s and 1960s were all loveless, patriarchal shells of a family with an absent and philandering father. All right, maybe I exaggerate a bit there. Certainly few believe that all 1950s relationships (or any historical relationships) were loveless. Yet I suspect that many of us feel just a little bit lucky that we don’t live in the bad old days of arranged marriages, commercial exchanges to accompany weddings, and 14-year-old brides. Despite all that, however, I just can’t believe that the majority of relationships were unhappy or stilted. People loved each other back then, too. I’ll be careful here: I’m not saying that the bad old marriage conventions should be revived; I’m proud to live in an age where spouses choose each other freely and where either can be the breadwinner or caretaker as their fancy (and economic realities) take them. Even if we’ve made improvements socially since then, it doesn’t follow that our forebears were unhappy. In fact, there are reasons to believe people might have been happier in those benighted old days of crusty tradition, sexual repression (or, depending on who you ask, aggression), and male dominance. They worked shorter hours on average, and slept more, than we do–and our longer hours and shorter sleep cause us increased stress and health problems.

In any case, I’m unconvinced that we are better off socially in the 2010s than we were in the past. That’s the nice thing about the past, if you have a point to prove: it is easily molded into a structure fitting your preferred narrative. It’s easy to make a sweeping assertion that families were stronger back then, or that women were more repressed back then; both are true. Some things have improved; others have degraded. Comparisons are dangerous, because they are usually in the service of prejudices, like our particular prejudice about marrying young.

Statistics tell us that rural and/or less educated people marry younger than their urban, educated brethren; also, the average age of marriage has risen from those terrible (but healthier!) olden days. And I often hear (maybe I sense?) a high degree of self-congratulation about that fact, which is funny, because up until about the last 10 years, marriages were becoming steadily less successful, as indicated by a rising divorce rate. So we’re doing better because we’re marrying later, but our marriages are less successful? I’m not following. Certainly, some have argued that the rising divorce rate was a good thing, believing most marriages were unhappy because they were essentially coerced. Yet however the marriage went, a divorce is the breaking of a strong relationship that carried a lot of hope and promise, and so it seems likely that many (or most) divorces were bitter and painful. Maybe the practice of marrying older isn’t the social victory we think it is.

But wait! It would be ridiculous to go back to marrying upon graduation from high school. That is beyond doubt. Who is ready for something like that right after high school? I was certainly not ‘ready’ for marriage by the time I completed high school. If I’m honest with myself, I think it’s better to say that I was not even ‘capable’ of marriage. I was shockingly self-absorbed and my thoughts were consumed with a) whether I had the right friends and/or girlfriend, b) how I could get the most out of college (and we’re not just talking academically here), and c) agonizing about who I was. You know, the important things. Should I listen to Guster? Can I get away with just a T-shirt and jeans, because I think it’s so much more chill? Is it okay if I enjoy my classes, or should I make myself enjoy partying more? There was barely enough room in my life for myself, let alone a life partner.

Ridiculous indeed. I doubt anyone would argue that. But by historical standards I was pretty immature for my age. I was 17 years old, able to drive, and almost able to vote. Do you think I was ready to select the most powerful person in the world in an election? I can scarcely believe they let me vote, considering my mental state. But I was not alone. Nearly everyone I know was at a similar maturity level upon their high school graduation. We had been kids for a long time, whose only real responsibilities were…nothing. Homework? Please. Most of us found ways out of it. Summer jobs? Doesn’t count, really–we were usually only making money to finance our weekend plans. For our entire lives thus far, in classes and sports teams and music and school plays, we were totally isolated in a world made for kids. And we weren’t done yet: we still had college to attend. Most of us were big kids, intellectually adults but emotionally (and socially) very young.

By comparison, children who grew up in rural areas, or in a culture which emphasized community and family, such as the socially backwards past, were probably much more mature than we were at the same age. It’s more likely they were vital contributors to their families, either by helping out the breadwinner with his/her business, or caring for siblings, or doing serious chores (like home maintenance or farm work). They lived in smaller communities, and had more relationships with adults (friends’ parents, aunts and uncles, grandparents, neighbors, etc.). Those 18-year-olds occupied a much less age-stratified society, where they had to have become adults socially by the time they were in their mid-teens. By the age of high school graduation they were active members of a community, and certainly deserved the right to vote. More importantly, they could also be good partners in the community of a marriage.

There are great structural advantages to marrying at that tender age. Neurological studies have shown that the brain continues to develop into the mid-20s. More importantly, the cognitive functions of the brain usually finish development by about 16-18 (adulthood!), while the moral values and judgment functions of the brain develop later, finishing between the ages of 24-26. In fact, the reason teenagers believe they are invincible has been shown to be linked to the fact that their brains have not fully developed the capacity for judgment, which makes it harder for them to comprehend the risks they take. And while some, if not most, will argue that it’s irresponsible to marry when you haven’t even finished developing your personality, I will turn the argument on its head and suggest that the best foundation for a lasting relationship is to develop similar values together by shaping each other’s moral growth.

Biologically, people between the ages of 18 and 25 are at their most fertile. Males produce the most testosterone, and therefore the most sperm, at that age; females at the same age produce the most estrogen, and have the easiest time conceiving–which seems like a cruel joke, considering that we view that period in our lives as the most undesirable for marriage and starting a family. That age is for exploration, we say, it’s for discovering yourself! Partying! Traveling! There’s no doubt about it–all of those things are easy and fun when we’re in our early twenties. Do you remember how we thought nothing of going on little to no sleep, had no idea what a hangover was, and couldn’t understand the need to diet? We were beautiful, invincible, unstoppable; the world was our oyster. But as a parent in my 30s I will note wistfully that those physical advantages would be very helpful when dealing with children. When I’m chasing my toddler around, or when I have to get up to comfort the baby, I yearn for the energy I had in my 20s.

But of course it’s not a good idea to marry young these days. A college diploma (or at least a tech school certification) is more or less required to find work, and I can’t even imagine what college would be like as a newly married person (and not just the social aspect; think about beginning a marriage with that kind of debt). But more practical–for the marriage part, anyway–is the fact that no high school graduate I’ve ever known is emotionally capable of marriage. The schooling process, along with popular media, has kept them from any sort of social or real responsibility and instilled in them the fervent, insidious belief that hedonism and wanton self-discovery are the essential components of a happy youth. Have fun! Enjoy college! Date many people! The expected result, of course, is that by a fun process of elimination these 20-somethings will find the perfect job and partner, and settle down happily and much later.

These are generalizations, of course. And I am not trying to write a fist-shaking “kids these days” rant. It’s long been fashionable to blame society for these developments, as if society were some kind of entity with intentions toward us. Unfortunately, however, society does not make us (or our kids) do things. It has no intentions or opinions. It just is. And it is made up of us. It is merely the institutions formed out of our cultural perspectives. We think it’s important for kids to be kids, so we have created institutions which keep our kids in school until they are 18, and take up their free time with sports and music and drama extracurriculars. We as a culture value self-discovery and self-actualization, so we institutionally establish these things in the emphasis on college, or the explosion of self-help books, or our worship of adventures and extreme sports. We value sexual actualization, too, so institutionally we accept more prominent sexuality and eroticism in things like television, music, and advertisements. The effect of these cultural perspectives is not the fault of our institutions (schools, media, etc.) any more than it’s the fault of a piece of wood that it was made into a chair. We share cultural perspectives; society results.

But frankly we needed our 20s. From my own experience, marriage requires contribution and unselfishness. I’m pretty sure the majority of my peers (and I) did not possess those virtues sufficiently in our 20s to have successful marriages. We still had to learn to support ourselves in the ‘real world,’ to be a part of a work team, to rely on others. Until then, we had parents and teachers and college staff to back us up. We also had to learn by trial and error how to take care of another person, because the school pipeline insulated us somewhat from observing other successful marriages by keeping us in our own age groups. It’s certainly plausible that our parents and grandparents, or kids growing up in rural areas, learned all these things during their childhood, in more integrated social groups. But not today. Today, we have our 20s for that.

The fact remains that to be successful in a relationship, we must develop a certain maturity. So those who argue the doctrine of waiting for a bit, in order to mature, are wise. But maturity is not tied to a certain age. One may be less mature at 30 than some are at 18 (watch The Bachelor and see what I mean). And though everyone knows that maturity is only one piece of a great marriage–I’m not sure anyone has adequately explained the romantic longing, or fierce desire, or deep contentment with and for one other person that characterizes the love which leads to and sustains marriage without invoking Grace–I am concerned here with practicalities. Practically, marriages require partnership and respect. Maybe it would be nice to learn those things fully in our first 18 years, for then we could happily and successfully marry, deal with exhausting young children at the peak of our physical capabilities, and skip off to travel the world in our forties (which, in this day and age, practically constitute our healthiest decade of life!). Not an unpleasant prospect.

But our culture makes this near-impossible. So the point of all this rambling is: carry on. We all need a little growing before we marry (successfully). But as someone who has taken that step into marriage, I’ll tell you that it is much better than my early 20s. I’m glad I made it.

Aurora, Santa Barbara, and Waseca as an invitation to reflect

Last night my wife’s friend joined a news show panel on a big TV network, so of course we tuned in to “cheer her on” through the screen. The subject was John LaDue, the upper-middle-class, never-been-bullied, no-reason-to-ever-go-wrong almost-perpetrator of what would have been yet another violent, tragic school shooting.

He, of course, is only the latest in a line of demographically similar young men who have, for reasons yet under debate, become violent. The Aurora shootings shocked us because the location and event seemed vaguely symbolic: a movie theater, at the premiere of a much-anticipated movie claiming to delve into the darkness of the human soul. The Santa Barbara killings angered us because the killer wrote elaborate fantasies about being violent, especially toward the women who unfairly denied him sex and the men who received it in his stead. John LaDue’s planned violence stands out because the police stopped it–and because his matter-of-fact assertion that he felt mentally ill, that he wanted to kill his peers and hold out until taken down by SWAT, is a chilling glimpse into psychopathy.

The talking heads of the panel were all very unsympathetic towards young Mr. LaDue. They talked about how he was “simply evil,” “beyond rehabilitation,” and the like, while the host sagely agreed. They may be right, of course, though I hesitate on principle to presume what someone might do, out of respect for certain legal protections on which the United States is founded. But by and large I agree with them: Mr. LaDue ought to be charged with all the crimes associated with planning such a terrible deed (conspiracy to commit murder comes to mind).

It was interesting that they referred to previous, similar crimes–which actually took place–almost as aggravating circumstances. As if the fact that similar spree killings had occurred in the recent past somehow made his planned attack worse. It might just have been a trick of phrase; I’m fairly sure the commentators simply wanted to draw attention tangentially to this mystery of young men, from what we collectively consider to be “good” homes, who slowly and without concealment develop a rage and desire to kill, and then execute that desire despite a host of teachers, counselors, and peers who warned about them. I think it’s wonderful that the police caught Mr. LaDue, and if that was the result of a greater awareness of such crimes, then bravo to the talking heads. But the whole exercise in condemnation seemed to be dodging the main issue.

I suppose it’s natural to vent frustration on Mr. LaDue. He did, after all, plan to murder as many of his classmates as he could and (he hoped) some cops sent after him as well. And as a large portion of spree killers end up dead by their own hand, it’s satisfying to finally have someone to punish–especially if he is a better receptacle for our anger than James Eagan Holmes, the Aurora theater shooter, who presented convincingly as a complete psychopath, and who showed only amusement and no remorse during the court proceedings against him.

Yet I wonder how much of the anger directed at people like Mr. LaDue and Mr. Holmes is meant to assuage our own consciences. I wonder how much of the condemnation and indignation, however superficially righteous, serves to draw a distinction between us and them; to say in essence, “the spree killer is evil and I am not, therefore get him away from me into jail and then death.” Perhaps shock and anger sometimes mask the relief people feel that they know what is “bad” when they see these spree killers, and it is not them. Perhaps too much of the talk about such men–easy laments about the decline of our society, titillated surprise that the scions of upper-middle-class stability could go so wrong, satisfying outrage at expressions of psychopathy and misogyny–is disassociation.

This bears some discussion. After all, the young men in question grew up among us. They received the same stimuli from media and from our pervasive culture as we have, and they had all the material things they needed. Clutching our pearls and wondering in bemusement how such criminals and terrible crimes could occur is the easy way out, a safe way to avoid hard questions about our own behavior–or at least our participation in a social behavior–which may have (at least) set the stage for a spree killing. Worse is to use these events to forward a philosophical or socio-political agenda, like the opposing crusades of the NRA (which seems to want to arm all teachers) and those who advocate total gun control. It’s ludicrous to think that arming teachers or taking away all guns would somehow solve the problem. The problem isn’t the weapons or lack thereof, it’s that young men decide to spree kill and then do it. They can do it with sticks, steak knives, home-made explosives, or bows and arrows. The problem is that they do it, and it’s our problem because in important ways the perpetrators are similar to us.

At this point I’m sure many readers have rejected this train of thought. They may angrily proclaim that bad people exist, and that bad people will always exist, and that there’s absolutely no similarity between the sickos who spree kill in schools and the rest of us law-abiding Americans. They may angrily point out that only young men have ever committed spree killings, and so it’s not a problem for women in our society. They may passionately argue that if nobody had access to guns, nobody would be able to kill so randomly. Or they may simply bristle at the suggestion that they are anything like the monsters that kill, and decide they don’t really want to discuss it any further. But if so, these readers are taking the easy way out. They are disassociating. They are saying that the problem of spree killing is not their problem, because spree killers are wholly alien. They would rather be right, ultimately, than make the sacrifice of compassion to see if there is any way such killers could be reduced.

That nearly every recent spree killer has come from the same demographic makes a mockery of coincidence. Nearly every spree killer has come from, and targeted, the influential middle class. Nearly every spree killer has evinced rage, most notably the Santa Barbara killer, who (horrifyingly) seemed to actually believe that the mere fact of others having sexual relationships was a violation of his rights. And nearly every spree killer seems to want attention–they choose schools and movie theaters and prominent universities as their tableau, knowing that they will earn headlines and time on “The Situation Room” and endless panels of talking heads like the one I saw last night.

That, actually, may hold the key to the problem. Attention. Why do spree killers want attention? Attributing it to their generation, as many do, is doubtful–otherwise more entitled millennials (in full disclosure, I’m a millennial too) would turn to violence. No, I would guess that spree killers want attention for the same reason that normal people develop a need for attention: some kind of fundamental, developmental neglect.

Now before people break out the mocking tears and sneer about mommies and daddies not loving their children enough, consider: first, numerous studies have shown that young girls without a close relationship to their parents are statistically more likely to engage in promiscuity, drug use, and other risky behaviors; and second, studies into gang membership/affiliation (male and female) cite lack of dedicated parents as a prime causal factor. It’s not about whining on a daytime talk show; it has been studied and shown that neglected children have a higher propensity towards clinically anti-social behavior. And I have unfortunately met too many middle-class or wealthy parents who are more interested in the next vacation destination, or the new episodes of Mad Men, or in their own jobs, than in their children. Though it looks like stay-at-home parenting is on the rise, the teenagers and young adults of today are perhaps the generation most commonly dumped into daycare so that parents could have satisfying careers and social lives.

Where young men come into all of this is in a sort of generalized neglect. Wait, hear me out. I know that across the board, women make less than men for similar work. I know that there exists an insidious “motherhood” penalty in the workplace. I think that as the gap between the wealthy and the rest of us has grown, life across that gap on the wealthy side has preserved and protected the old male-dominated social architecture. But back here, in real life, important changes are taking place: compared to men, women collectively get better grades in school, participate in more extracurricular activities (including sports), attend college at higher rates, and in many cases are more readily hired. These are all very good things, and hopefully a harbinger of true equality in the workplace.

Some investigative journalism indicates, however, that laudable attempts to push women to higher social achievements have unintentionally marginalized men. “Socially acceptable” extracurriculars in high school have shrunk to a few high-profile sports in order to spend equally on women’s teams. Universities faced with a majority of female students have invested money in programs of study and student life infrastructure which cater specifically to women. Companies hoping to achieve a certain diversity actively pursue female employees. And I wonder if maybe developmental authority figures like teachers have become mostly female, and less interested (understandably) in focusing on traditionally male interests like war. None of this is to blame the system, but rather to suggest that the intersection of parental neglect and social neglect may be a place frighteningly devoid of normal social obstacles to psychopathy, narcissism, and spree killing.

Obviously not all neglected children turn to violence. And women almost never turn to violence, perhaps because they usually have less aggression due to lower testosterone (though there are exceptions, of course). But I think it no accident that most spree killers commit their deed(s) after puberty, and they all seem to be seeking attention and revenge. Attention, maybe because they never got it; revenge, likely against those who refused to pay attention to them (or suitable surrogates). And I also think it telling that spree killers are usually characterized as loners, and notably lack the comfort and restraint of a social group–a family or a team–to draw them towards good social relationships. Maybe they aren’t born loners, but are made loners by their development. I wonder if the anger and hatred that many women sense, in catcalls (check out #NotJustHello on Twitter) and sexual dominance (#YesAllWomen), isn’t rooted in this cauldron of socially marginalized young men. And I wonder whether a parent, a mentor, a teacher, a friend who cared about [insert name of spree killer] might not have made the difference.

I don’t advocate sympathy for any spree killer. It is for the good of society that they be charged and punished to the full extent of the law. I also don’t advocate some kind of large-scale enterprise or campaign to remedy social wrongs. I suspect that by the time spree killers start exhibiting the signs (posting YouTube rants, rage-filled blogs, and so on) it’s too late for intervention and time for police involvement. But I invite us all to not wring our hands, spit out righteous rhetoric, and go about our daily business, comfortably believing these events have nothing to do with us. I invite us to take the hard road and try to see the killers with compassion, and hopefully to see a way that we can, in the future, make a difference.

Memorial Day Remembrance, 2014

I wrote this speech to deliver to the Village of Kohler, Wisconsin, as part of their 2014 Memorial Day parade and ceremony.

Memorial Day is dear to Americans because it isn’t about us. Simply put, if we are here to celebrate it, then it isn’t about us — because we are alive to remember. It honors the achievement and sacrifice of our countrymen and women whose service required their very lives.

To me, as a Marine, the stories of my forebears who gave their lives in service are legendary. Nearly any Marine can tell you the story of Lieutenant Bobo. Quoting from his Medal of Honor citation: “When an exploding enemy mortar round severed Second Lieutenant Bobo’s right leg below the knee, he refused to be evacuated and insisted upon being placed in a firing position to cover the movement of the command group to a better location. With a web belt around his leg serving as a tourniquet and with his leg jammed into the dirt to curtail the bleeding, he remained in this position and delivered devastating fire into the ranks of the enemy attempting to overrun the Marines.” That occurred in Viet Nam in 1967.

A more recent example is Corporal Dunham. His Medal of Honor citation relates, “…[A]n insurgent leaped out and attacked Corporal Dunham. Corporal Dunham wrestled the insurgent to the ground and in the ensuing struggle saw the insurgent release a grenade. Corporal Dunham immediately alerted his fellow Marines to the threat. Aware of the imminent danger and without hesitation, Corporal Dunham covered the grenade with his helmet and body, bearing the brunt of the explosion and shielding his Marines from the blast.” This occurred in Iraq in 2004.

These young Marines, and their sacrifice, live on in the institutional memory of the service. I first encountered Lieutenant Bobo’s name in 2003, when I underwent Officer Candidate School in Quantico, Virginia. It was the name of our Chow Hall, a place of great importance to us candidates, and our Drill Instructors never wasted an opportunity to tell us the story of the hall’s namesake (usually as part of a larger diatribe regarding our worthlessness and general incapacity to become Marines. Ah, the sweet nurturing environment of Basic Training!). Enlisted Marines also learn about Lieutenant Bobo in their Boot Camp. I know that in time, buildings and roads on bases throughout the Marine Corps will bear the name of Corporal Dunham, and newer generations of Marines will learn about — and be inspired by — his heroic deeds as well.

These two stories from different wars show us that the decision to give what President Lincoln called “the last full measure of devotion” at Gettysburg (arguably the first Memorial Day celebrated by this nation) is not made in the moment of stress. Lieutenant Bobo would not have had the fortitude to resist evacuation and direct the fight after losing his leg unless he had already decided, in some deep unconscious center of his soul, that he would give his all for his country. Corporal Dunham could not have jumped on that grenade “without hesitation” and within the five-second fuse of such weapons, had he not already chosen — in the months and years of training and operations prior to that moment — that the success and integrity of his mission and his team were more important than his own life.

This day is set aside to celebrate our nation’s fallen, but not only their final heroic deed of service. It celebrates also their lives, for each of them had the character and courage to dedicate themselves wholly to the rest of us long before we collectively asked them to sacrifice themselves. They represent the best of these United States, the ones who have made our existence and prosperity possible: the Minutemen who faced British cannon and muskets in 1775; the 2nd, 6th, and 7th Wisconsin Volunteer Regiments who, as part of the famed Iron Brigade, defended the high ground west of Gettysburg on the first day of that battle, enabling the rest of the Union Army to emplace and finally score a victory which led to the preservation of our nation whole; the Soldiers and Marines who faced the unprecedented peril of amphibious landings at Normandy and throughout the Pacific; the heroes of Viet Nam and recent conflicts in the Middle East.

Today I remember the Marines I knew personally who died in service. Some, like Lieutenant Blue, died in Battle. He was an outstanding officer who routinely aced physical and tactical tests at The Basic School, where we were classmates. He was also known as a “good dude” (in our lingo), which meant he was the kind of guy who would give up weekends to help his fellow students master testable skills, like marksmanship and compass navigation. He already had what the rest of us recent college graduates were struggling to develop: outstanding character. In training, he had all the talent and drive to graduate as the number one student, but chose instead to use his gifts to help his fellow students (and even so he graduated in the top 10% of our class). Our success was more important to him than his own. If anyone understood the importance of character and service at the tender age of 25, when he was killed by a roadside bomb in Iraq (2007), it was Lieutenant Blue. Word of his death spread quickly among his classmates, even to those like me who had limited interaction with him during our short time in school together. I believe he was the first of our class to die in the conflict, and he proved the old adage “the good die young.”

I also remember Marines who died in Training. A fellow fighter jock of mine, Reid Nannen, died this year [2014] when his F/A-18 Hornet crashed into the mountains of Nevada, where he was training at the Naval Fighter Weapons School (otherwise known as “Top Gun”). His callsign, or nickname, was “Eyore” because he was always comically pessimistic, but beneath that pessimism lay a solemn, unwavering dedication to the craft of aerial combat and aviation ground support, which had earned him the rare and coveted spot at Top Gun in the first place. He was also known for his dedication to his family, and was survived by his pregnant wife and three children. He was only training, but it’s easy to forget that our service members assume serious risk, beyond what most non-military folks ever encounter, just in training for combat. And it’s important to note that his family served our country in a way as well, suffering his absence when the country needed him to get ready for war as well as execute it, as he did in Afghanistan, and suffering his loss in the deepest way. Memorial Day is for them, too.

We celebrate the men and women who have died for us because we recognize that the highest and best use of freedom is in the service of others. Some wars we fought to carve out and preserve a spot of freedom on the earth to call home, these United States, and some wars we fought to bring freedom to others. But the men and women who died in our wars swore their lives to protect that freedom, firstly for us, but also for others less fortunate. I ask you all, as I would ask any of our countrymen, to enjoy this day as Americans — enjoy our freedom, our happiness, and our prosperity at the dawn of summer. Enjoy barbecues, enjoy some pick-up basketball games, and enjoy this time with your families. Enjoying our blessings is how I believe fallen service members want us to remember them.

But while enjoying this Memorial Day holiday, I will also honor the fallen with a quiet personal toast of my beer. I invite all of you to do the same.

Faith, Reason, and Debating the Existential “Big Questions”

I’m past college, and with those years has passed the incidence of earnest debate about things like religion and the meaning of life. That I attended a Catholic university and majored in a “Great Books” program meant that I fielded my share of challenges from those who believed something different than I did, and one of the most pressing questions that came up at that time was why.

Why do you believe?

There is something fantastic and mythological, certainly, about the story of a God coming to earth in order to offer Himself up as a perfect, spotless sacrifice in order to atone for every human sin, past and future, and reconcile the human race to Himself as God. The particulars of the story are indeed quaint and uncomfortably sentimental: a sweet young woman chosen to miraculously conceive God’s child; archetypal authority figures hatching dastardly plots and darkly scheming to stop this bright young hero; a set of bumbling accomplices; an impossibly evil death; and the most mythical and unbelievable thing of all: that he was killed and then came back to life.

To my friends, well-educated and mostly liberal humanists, the tale of Christ bears too many similarities to the quaint myths of many other cultures, and is only the biggest myth in a child-like narrative of the world, with a stylized creation story and a lot of horrible barbarities. Compared to the sophisticated promise of modern disciplines like sociology, psychology, and the specialized sciences, a primitive culture’s myth seems plainly archaic. How could anyone believe this, much less someone college-educated?

The challenge in answering this question is that it is ideological rather than academic. Those who ask it have a certain perspective which I don’t understand, but which seems to preclude the idea of the supernatural. Some profess to be humanists, who believe that continued enlightenment in the sciences will eventually conquer our social and personal afflictions. Others profess to be rationalists, believing only in those things that science has proved or theorized.

Such alternative belief systems are not, in and of themselves, ideological. They fall more truly into the existential category, defining who we are and why we exist. But they seem to come with a lot of ideological baggage these days. After all, elements of our society today are unabashed and even aggressive apologists for faith (professing the Christian doctrine of sola scriptura) and many of them speak in terms of condemnation, specifically condemnation of those who disagree with them, to hell. They often stand for uncomfortably traditional values as well, like maintaining traditional gender and socio-economic roles. Now all of a sudden we aren’t talking about a different moral and existential perspective, we’re talking about an ideological opponent. And, to be fair, there are fundamentalist Christians who are offensive and judgmental in proselytizing their beliefs.

But to turn the tables, many so-called rationalists and/or humanists can be just as aggressive, and I am skeptical that their explanations of the world are actually more ‘rational’ than a faith-based one. It’s easy to talk about gravity or astronomical relations and say that we can “prove” real science empirically, but I doubt that many of us have empirically viewed the behavior of a virus, or the release of certain brain hormones causing affection or depression. We accept that viruses and brain hormones work a certain way because we have studied the effects of those things and measured them in actual humans, so we know they exist and that they affect, somehow, our health or mental state. We also believe people called “scientists” when those people tell us about viruses and brain hormones (and the behavior of chemical elements, and many other things), because we have faith that their education and certification make them intrinsically trustworthy on certain issues.

Whether or not you trust a scientist or a theologian (or a priest) is really the question. An Op-Ed in the Washington Post recently pointed out very thoroughly that the two sides are not mutually exclusive. I have little to add to the writer’s argument because I agree with him — I believe in the story of the Christ and yet also pursue understanding of scientific matters, because I want to know more about us and this world we inhabit. He ends with a marvelous paragraph worth quoting in full:

The problem comes when materialism, claiming the authority of science, denies the possibility of all other types of knowledge — reducing human beings to a bag of chemicals and all their hopes and loves to the firing of neurons. Or when religion exceeds its bounds and declares the Earth to be 6,000 years old. In both cases, the besetting sin is the same: the arrogant exclusive claim to know reality.

The answer to the question of why I believe the entirety of the Christian story, with its quaint mythological narratives about paradisiacal gardens and apples of knowledge of good and evil and floods and prophets and whales and the Son of God, is that I find it more plausible than any of the alternatives. It really makes more sense to me. Not necessarily in the physical particulars (“do you really believe that some prophet actually parted water to create a passage?”), but in the tale it tells of how humanity became prone to doing bad things and how God then came Himself to redeem humanity from its sinful nature.

The Christian tale is plausible to me mostly because of my own experiences of sin and redemption. The vast majority of these experiences are with my own sins and redemptions in my life so far, and a few of them are observations of other peoples’ sins and redemptions. On a precious few occasions I recall witnessing a miracle, or experiencing a beatific presence I attribute to the Christian God. These things are open to interpretation in an academic sense, of course. Rationalists might argue that my experiences of good and bad in myself and others are filtered through a strongly inculcated Catholic belief system. They might doubt that I, in fact, saw or experienced so-called “supernatural” things, and point to the demonstrated tendency of humans to manufacture memories that suit their subconscious perspectives. And as far as that goes, they may be right. I can’t transmit my experiences to others, so therefore I can’t expect anyone else to believe my conclusions. And yet I can no more forget them than an astronaut could forget his view of a round earth from space, or an astronomer could forget the sightings and calculations showing that the earth and its neighboring bodies revolve around the sun in elliptical trajectories.

My point here is not to convince anyone of my beliefs. I don’t think that’s possible — neither a rationalist nor a faith-based belief system can be truly transmitted via dialectic. Any belief system has to be experienced to be believed, personally and deeply experienced. And for a human, that means engaging both the intellect and whatever part of the brain controls belief.

Someone who believes that human emotions like love and depression are a combination of neuron activity and chemical activity in the brain has probably actively engaged the subject: he or she likely wondered why people experience love and other emotions, and pursued the answer until finding an explanation. That’s the activity of his or her intellect. He or she also had to exclude other explanations for emotions (presuming any were found), such as the activity of a metaphysical soul, or instinctual behavior bred in by evolution–which is primarily a decision of faith. Does he or she trust the neurologists who measure neuron activity and brain chemicals? The priests, philosophers, and/or wise men and women who have reached a supernatural explanation through their long experience in considering and/or observing human behavior? What about the sociologists and/or biologists who study behavioral patterns and instinct?

Personally, I don’t believe that a scientist is intrinsically a better person than a priest or a philosopher. All three are human, which means they are subject to the same ideological myopia and vices, as well as the same inspiration and virtue, as the rest of us. No single person knows everything, and experience teaches that even if a person did, he or she would forget part of it, or hide part of it, or even use it to his/her advantage. Positing that it’s possible to know everything, and use that knowledge correctly, is coming dangerously close to positing God. Whether we follow to that conclusion, or stop short — and who/what we decide to trust and therefore believe — well, that’s just our obligation as rational beings. We each must individually decide what to believe.

It’s natural that each of us would seek like-minded friends in the world, and so it’s easy to see how we would gravitate towards those who believe the same things. So begins ideology, or the pursuit of actualizing an ideal, which carried to the extreme ends up forgetting that ideas are not more important than people — or so I argue as a Christian: that individuals have the highest intrinsic value; ideas may be valuable but they’re not worth more than life itself.

I plead that we don’t let this social instinct push us into prejudice. I and many people I know believe in the teachings of Christianity and yet also follow the progress of scientific knowledge. Many of these people are scientists or doctors themselves. And likewise, I know people who believe religious faith (Christian or other) is irrational and yet do not reduce the human experience to the peculiar behavior of a peculiar animal, enslaved to instinct and evolutionary imperative.

So let’s not discuss these existential issues of faith, science, reason, and belief with a desire to win, especially to win by painting other belief systems in pejorative colors. Rather let’s do it to better understand ourselves and each other.

On the Harry Potter Books

Whenever I ask one of my peers if they have read the Harry Potter books, I often hear derision in their response. “That’s not my kind of book,” they say, or else: “I think all the attention is silly;” “It’s stupid that people are so obsessed about it;” and “I’m not really into fantasy or children’s books.” To distort things further, many Harry Potter apologists defend the series by proclaiming how “dark” the later books became, as if an element of darkness in the story suddenly makes it better or more worthwhile. The high public visibility of the series has reduced its effect to trends–the fantasy trend, the popularity trend (viewed as good or bad), the “dark storytelling” trend, and so on. This is a disservice to J.K. Rowling’s fine books.

(The following discussion contains “spoilers.”)

The story of Harry Potter, told in seven separate books, is essentially a fairy tale. Like most fairy tales, there is a wondrous or magical element to the setting, though (also like most fairy tales) the setting is familiar to us readers. The story isn’t particularly dark, either: it can be frightening and sad, but the goodness of the main characters is evident throughout as they struggle to do the right thing in each book (and mostly succeed). There is no question that their situation becomes more dire from book to book, as the evil they’re fighting gets stronger in proportion to their increasing maturity. But rather than “dark,” the books become more adult in theme and content–though never “adult” in a sexual or pornographic sense. The much-talked-about deaths of several “main characters” along the way merely add a dimension of tragedy, a reminder that Harry and his friends are struggling against forces that are in fact very dangerous and cruel.

The series is a tour de force as an extended novel and a bildungsroman. J.K. Rowling did a great job of tying up loose ends from the main plot and all the sub-plots. Her theme of Love comes to full fruition in the final book with Harry’s willingness to die for his loved ones (much like his mother’s similar willingness seventeen years before–which is arguably the causal factor of the entire storyline). Though it was terribly sad that characters like Tonks and Lupin and Dobby and Fred died–I was especially stricken when Colin Creevey, the youngster who irritatingly worshipped Harry in The Chamber of Secrets, died after staying to fight, even though he was underage–the tragedy was balanced by the redemption of certain other characters, such as Severus Snape.

In a literal sense, the series ended as all fairy tales do with a “happily ever after.” To be sure, I wanted more detail about Harry and Ginny, Ron and Hermione, and their children…but really, just knowing that they were still friends, still happy together, and moving on in their lives was enough to satisfy me that the hurts of Voldemort had been healed. And that was the point, wasn’t it? Harry was always just looking to be normal and happy.

Morally, the stories are very clear. The children–especially the three main characters–constantly try to do right in the face of obstacles, which take the form of temptations to selfishness, direct threats on their lives, and cruelty from other adolescent students. Though their efforts seem at times pointless or futile, in each book they (to some degree) succeed. More importantly, Rowling avoids the literary cliche of the “chosen one” by explaining very carefully at the end of the sixth book that it is Harry’s choice to face up to Voldemort, rather than some pre-ordained destiny. The choice is forced on him (rather unfortunately) by Voldemort’s own obsession and misunderstanding–it is the effect of an evil person rather than a supernatural event that drives events in Harry’s life. It’s worth noting that for his part, Harry consistently chooses on his own to face the enemy. He never, in the end, avoids the confrontation or gives up. The vehicle of success, redemption, and goodness throughout the series lies not in magic or destiny but in moral choices.

I mentioned earlier that the theme of the series is Love. It runs through all these larger plot points and events as the source of all good relationships, and it drives those events through the extraordinary friendship Harry shares with his friends. Their continual success against Voldemort is directly attributable to their combined efforts, which spring from their care for one another rather than merely shared purpose. And, in the end, it is only Harry’s Loving decision to give his life for his friends that enables him to finally defeat Voldemort. Literally and figuratively, Love conquers the death, fear, and despair that Harry and his friends must face throughout the series.

Rowling also deals heavily in the theme of redemption, which surfaces quietly in the early books–think how Sirius redeems his dark, evil family through his service and friendship to Harry–and becomes inescapable in the last. With the exception of Voldemort himself (and his particularly evil henchmen), every “bad” character to some measure redeems himself. Malfoy, a bully with a particular hatred of Harry, has the grace in the end to turn his back (however halfheartedly) on Voldemort, and to quietly allow Harry to save his life. Percy Weasley, who disowned his family to serve his own ambition, apologizes and returns to their side in the final battle. We learn that Dumbledore, perhaps the most staunchly good character of the entire series, was in fact tempted by Dark Magic early in his life, though he obviously repented early enough to discover Voldemort and set up his demise. But it is in Professor Snape’s story that we see the most redemption: the touching and powerful tale of a man who loved Lily so much that he could protect and aid her son even though that son looked like his father, the man Snape (perhaps) hated most in the world.

Although this is not revealed until later in the series, it becomes clear from Snape’s interactions with Dumbledore (seen in the memories he gave Harry immediately prior to his death) that much of his cruelty at Hogwarts was an act to lend verisimilitude to his allegiance with the Death Eaters–he shows his real colors when he rebukes one of his portrait-henchmen at Hogwarts for using the equivalent of a racist epithet: “don’t use that word [‘mudblood’]!” He also aids Harry throughout the series: early on by attempting to foil the curse of an unseen enemy during a Quidditch game; then by trying to teach Harry the difficult art of Occlumency; and especially in the last book by sending his patronus into the woods to lead Harry to Gryffindor’s sword. Snape was a bitter, lonely young man, desperate to fit in and be liked, a tremendously competent wizard, and for all of these reasons sorely tempted by Dark Magic–a near-perfect prospect for the Death Eaters. Yet he was redeemed by his love of Harry’s mother, to the point of fighting thanklessly throughout the entire series to protect her son and defeat Voldemort.

The Harry Potter books are ultimately ennobling. They teach, entertainingly, that doing the right thing, sticking together with friends, and confronting evil when necessary will lead to a "happily ever after." But the stories are much richer than a simple tale of good triumphing over evil. Each one is a cleverly constructed mystery novel, in which the mundane details of the characters' lives (intriguing because we come to love the characters so much) conceal vital clues to the overarching problem of the book, and many characters are not what they seem–the case of Sirius Black in The Prisoner of Azkaban is one example. Each book is also, like the series as a whole, a bildungsroman, in which Harry, Ron, and Hermione (and to a lesser extent Neville, Ginny, and Luna) grow up and become more complete persons. Indeed, part of their attraction for us as characters is their endearing and familiar adolescent struggle to like themselves, to gain friends, to fit in, and to succeed. Finally, by embedding the magical world in our own, Rowling has added witty, amusing, and sometimes devastating satire.

Great literature addresses the great questions of humanity: why we exist, what we should do, and how we can be happy. Rowling has offered a compelling answer to these questions through the Harry Potter books. Along the way, she has crafted seven exciting stories that are introspective, funny, tragic, affirming, and ennobling. Her books, though perhaps not as profound, stand comparison with The Chronicles of Narnia and The Lord of the Rings. They are a valuable addition to the canon of English books, and they deserve better than reduction to "children's literature," "young adult literature," "fantasy literature," "popular literature," or any other sub-category. They are simply Literature (with a capital "L").

On Freedom and Predestination

When Christians talk of freedom, they often phrase it as freedom from sin or death–sometimes more poetically as freedom from the slavery of sin or death. This is not an open freedom; it implies no license. In other words, Christians are not, in fact, free to do as they wish. St. Paul cautions, “Christ set us free; so stand firm and do not submit again to the yoke of slavery…do not use this freedom as an opportunity for the flesh…you may not do what you want” (Gal 5). Christians are offered personal freedom only in the sense of making a choice between a “yoke of slavery” to “the flesh,” and something else. That “something else” is a release. It is the freedom Christians believe Christ won for humanity: the freedom from death and their own sinfulness. It is the freedom to be, each individually, as God created us.

Because the freedom we are used to talking about — the freedom to do as we wish — is much broader, it is perhaps difficult to understand why Christianity would narrow the possible choices of action down to a simple duality: Christ (and the freedom He offers), or death. But in this distinction Christianity is consistent, because it teaches that God created us in His image and likeness to be His free lovers and servants. To do anything else is to reject God. There are only two choices — God or not. Every action we commit is either by the Grace and to the Glory of God (that is, selfless, loving, and joyful) or else selfish and destructive.

Understanding such a stark choice inescapably raises the issue of predestination. Of course we are destined for God; He created us for Himself. His plan for us from the very beginning is that we find our way to Him of our own free will. It would then be correct to say of a man who goes to heaven, "he was predestined for it." All humanity is. But the criterion for getting there in the first place is the exercise of our free will–we are each responsible for choosing God ourselves. C.S. Lewis captures this idea very well in his book Perelandra, whose protagonist, Dr. Ransom, has decided to do "the right thing" at a critical moment despite his fearful (and selfish) protests:

“You might say, if you liked, that the power of choice had been simply set aside and an inflexible destiny substituted for it. On the other hand, you might say that he had [been] delivered from the rhetoric of his passions and had emerged into unassailable freedom. Ransom could not, for the life of him, see any difference between these two statements. Predestination and freedom were apparently identical.”

I believe that we cannot be predestined to hell; that would infringe on our freedom of choice. It is, rather, our path to heaven that is predestined. When we do what is right–defined, perhaps, as what is both good and necessary, according to our best intention and reflection–we are doing no more than that which God predestined us to do when He "called us by name" (to quote Isaiah). Though we may choose "not God" by doing something selfish, easy, or hurtful — an inherited tendency of ours explained in the narrative of the expulsion from the Garden of Eden — God has made for each of us only one path to Him, for He created us. One person's calling is not another's, and though two people may be guilty of the same sins, their redemptions will be as individual as they are. Perhaps this is what scripture refers to when it speaks of "the Elect": those who succumbed to their destiny, or exercised their freedom to choose God (take your pick). Those who don't are exiled from heaven.

A clue to what they have lost is found in Lewis' pregnant phrase, "the rhetoric of [Ransom's] passions." The word "rhetoric" here suggests manufactured nobility or grandeur; the classical art of Rhetoric was taught to politicians so they could inspire others to their cause. We all have a tendency to think of ourselves with 'nobility' and 'grandeur,' imagining our needs, wants, and opinions to be so important that we forget to love and enjoy what is around us. This is the sin of Adam: wanting to elevate himself, and doing so by seeking what was proper to God–the knowledge of good and evil. The serpent deceived Eve in just this way, using rhetoric to persuade her that she could be like God if she ate the fruit of the tree of the knowledge of good and evil (incidentally, St. Augustine was once a teacher of Rhetoric, and his Confessions are filled with contempt for that art which teaches men to seduce others with good-seeming words). With this phrase Lewis alludes to the human tendency to let our passions run away with us in ways that are actually harmful–a consequence of Original Sin. For example, it is natural to find a member of the opposite sex attractive, but following that passion into adultery is clearly wrong.

The faculty by which we regulate our passions is our reason. We have the ability to decide rationally whether any given passion is Good or Bad–whether it is bringing us closer to God (love for a family member, perhaps, or charity toward a stranger) or separating us from Him (excessive ambition, or a desire to hurt another). The ancient definition of Man, from Aristotle, was "rational animal": a creature subject to physical instincts and passions yet endowed with reason and free will. The essence of humanity, then–what separates us from other physical creatures–is our reason, and our unique place in God's creation as creatures made in His image. To abdicate reason in favor of the passions is to reject God's call, and therefore one's humanity. Lewis speculates on this again through the thoughts of Ransom:

“Up till that moment, whenever he had thought of Hell, he had pictured the lost souls as being still human; now, as the frightful abyss which parts ghosthood from manhood yawned before him, pity was almost swallowed up in horror–in the unconquerable revulsion of the life within him from positive and self-consuming Death… The forces which had begun, perhaps years ago, to eat away his [enemy’s] humanity had now completed their work… Only a ghost was left–an everlasting unrest, a crumbling, a ruin, an odour of decay.”

Understanding this relationship between the passions and reason sheds light on the Christian definition of freedom as freedom from the slavery of sin and death. To be free is to choose God's path, as best our reason allows. Following one's passions into anything else leads to error and sin.

Our only hope for everlasting life is to assume the mantle of full humanity: not an indulgent understanding of “human weakness,” not a claim to an unrestricted lifestyle, but a responsibility to choose God–and His specific and individual destiny for us–over every other option, and thereby be free.