Photo: Jorgen Schyberg

I was badly injured in a bike accident on April 13. Eyewitness reports differ, but it’s likely that a car ran over my head, and it’s more than likely that the helmet clasped around my head saved my life.
The temporal bone on the right side of my head is fractured, with associated hearing damage. The right side of my face is temporarily palsied, leaving me with half a smile and a right eye that cannot wink or close by itself. Several notches in my spine are fractured, but the cord itself is intact. My left shoulder blade is broken.
As for the me that’s inside of all this, I feel both more afraid and more alive than I did before the accident. More afraid because the memory of the accident is so violent in the near rearview, and I can’t help but wonder if violent memories will shadow me as I try to move forward. By doctor’s orders, I must avoid any activity in the next six months that would risk more head trauma. After that? I imagine I’ll be ready to engage enthusiastically with the world — to go outside and do all the things I used to do, freely and unshadowed. It’s what my heart longs for today. But what my heart fears is another violent episode that injures my optimism and openness to life and its possibilities. I feel fragile.
But I also feel more alive. I can’t help but think that the alternate-universe Will Craven who rode home that evening unscathed knows less than the me in the here and now. He’s still flying blind out there, on cruise control and dulled by assumptions.
I arrived at that notion with help: While in the hospital, I read the fantastic April 2011 Atlantic cover story about an annual contest in which engineers compete to design robots so “human” in their ability to converse that a panel of judges will be convinced that they are chatting online with a real human — an entity chatty, whimsical, opinionated, and rhetorically free enough to twist with the winds of conversation.
Author Brian Christian is initially taken with one computer program’s uncanny ability to send sarcastic retorts to anything its human chat partner sends its way. But with a little investigation, Christian learns that of the many human styles of communication, the jaded verbal volley is one of the shallowest, and one of the easiest guises for robots to replicate. That is, in a contest to explore what is human, the flippant cynic does not set a high bar for the species:
Once again, the question of what types of human behavior computers can imitate shines light on how we conduct our own, human lives. Verbal abuse is simply less complex than other forms of conversation. In fact, since reading the papers on MGonz [the computer program], and transcripts of its conversations, I find myself much more able to constructively manage heated conversations. Aware of the stateless, knee-jerk character of the terse remark I want to blurt out, I recognize that that remark has far more to do with a reflex reaction to the very last sentence of the conversation than with either the issue at hand or the person I’m talking to. All of a sudden, the absurdity and ridiculousness of this kind of escalation become quantitatively clear, and, contemptuously unwilling to act like a bot, I steer myself toward a more “stateful” response: better living through science.
It’s in this way that I think some of my less human tendencies were on cruise control prior to the accident. To take things for granted is to surrender to your more automatic tendencies, your more robotic tendencies. In the article, Christian writes about how he ends up as a part of the contest when he’s invited to serve as one of the true humans that the robots must compete against. Though the contest organizers tell him there’s little that he can do to prepare, and that as a human he can simply act naturally, he disagrees. He decides that being human is something we can work on, and do better. I agree.
Finally, I know now that the clichés are true: Following these types of accidents, you realize what’s actually important, and what isn’t. You understand what’s truly scary, and what you only thought was scary. I work in the communications department of ForestEthics. And like a lot of Americans, I have much of my self-esteem tied up in my work. I’ve feared failure. What we often describe as stressful aspects of our jobs might also be described as fearful parts of our jobs, a bundle of longings and anxieties tumbling downhill against the clock.
ForestEthics Executive Director Todd Paglia and I began the new year by recording a conversation about the organization’s history and our work. I asked him about what I perceived to be the considerable pressure and anxiety-inducing risk inherent in his job of managing the ambitions, publicly confrontational nature, and finite funding of a small organization:
It’s not real risk. Real risk is you’re poor, you have nothing, you might not be able to feed yourselves. That’s real risk. I feel like a lot of the stuff that other people feel is risk, like taking on a bunch of the companies that we’ve taken on — you know, you have butterflies in your stomach, but I’m pretty sure they’re not going to kill me. So it doesn’t feel like risk to me. It feels like, that’s fun. That means I am being who I’m supposed to be.
I get that now. My accident was an example of frightening risk up close. ForestEthics’ chief concern — the destruction of the world’s forests and its impact on the quality of life on Earth — is a serious present and future risk. Life will get progressively shittier if we allow our forests to be destroyed for junk mail, virgin forest paper, and tar sands oil.
I love my work. It’s joyous work, no matter how challenging the circumstances. And I feel more alive, more human, now that I’ve seen real risk up close, and can tell the difference between the things that genuinely threaten life, and things that are merely part of its rich tapestry. I’m just happy to be alive. I may be more afraid now in a certain respect, but overall, I now have less to fear.
P.S. Wear a helmet.