Tuesday, November 29, 2011

5: Better the Devil You Know (Part I)

“How does it feel to be wrong?” asks Kathryn Schulz of her audience at the TED2011 conference. The responses she gets are “dreadful”, “thumbs down”, “embarrassing”. Then she reveals the trick: as she expected, her audience unwittingly gave her answers to a different question: how does it feel to realize you’re wrong? (Schulz)

As Schulz explains it, being wrong, in itself, doesn’t feel like anything (or rather, it feels the same as being right). We don’t have any innate mechanisms built into us to tell us that we’re wrong (Schulz). That’s why it can be devastating when we do finally realize it—we’ve already carried that undetected mistake a mile out into the world before the world lets us know.

Schulz argues that, faced with the painful possibility of discovering too late that we are wrong, we shore up our defences by becoming perfectionists and over-achievers who are ostensibly infallible, and then we just insist that we’re right all the time (Schulz). To me, this is the most glaring deficiency in Schulz’s talk. In making this assertion, she fails to acknowledge that we all avoid being wrong far more than we persist in being right. The problem is not that the prospect of being wrong makes us try too hard; it’s that, most of the time, it makes us scared of trying at all.

To be continued…

Sunday, November 27, 2011

4: What it Feels Like

“There are certain things in a man’s past which he does not divulge to everybody but, perhaps, only to his friends. Again there are certain things he will not divulge even to his friends; he will divulge them perhaps only to himself, and that, too, as a secret. But, finally, there are things which he is afraid to divulge even to himself, and every decent man has quite an accumulation of such things in his mind.” – Fyodor Dostoevsky, Notes from Underground (Dostoevsky 144)




Works Cited

Dostoevsky, Fyodor. The Best Stories of Fyodor Dostoevsky. Trans. David Magarshack. Toronto: Random House of Canada, 2005. Print.

Friday, November 25, 2011

3: Recurring Themes

Know thyself.
– Ancient Greek aphorism

Si fallor, sum (If I err, I exist)
– St. Augustine, City of God (qtd. in Potter 18)




Works Cited

Potter, Vincent G. On Understanding Understanding: A Philosophy of Knowledge. New York: Fordham University Press, 1994. Google Books. Web. 25 Nov. 2011. <http://books.google.ca/books?id=SnO1FKnJui4C&dq=city+of+god+st+augustine+fallor+ergo+sum&source=gbs_navlinks_s>.

2: Dropping the Bomb on Wrong

My research on being wrong is partly inspired by Kathryn Schulz’s presentation (also entitled “On being wrong”), which she gave at the TED2011 conference. As a self-proclaimed “wrongologist”, Schulz has spent years thinking about fallibility. Her talk, which serves as a rough introduction to her discipline, is one that I will be returning to as a springboard for introducing a variety of concerns that she has already identified.

Before I get too far ahead, however, it seems like a good time to address why such a topic is important. Because, for all the work she does convincing us that we are all “trapped in this little bubble of feeling very right about everything,” Schulz fails to give anything beyond vague optimism for why we should step out of that bubble, and what the world would look like if we did.

One online viewer took issue with the way Schulz glossed over how mistake-making manifests in reality. “George Bush thought he was going to invade Iraq, find a bunch of weapons of mass destruction, liberate the people and bring democracy to the Middle East, and something else happened instead,” Schulz cites, with a shrug, as one example (Schulz). The viewer rebukes: “Well, yes we need the mistakes to learn. But what of it? What happens when there is no accountability? Do we just need to forgive and forget? What does that imply for our society?” (Liu). He likely understands that this is not Schulz’s intention, but says so to illustrate that Schulz tends to overemphasize the admissibility of our errors at the expense of suggesting practical methods for dealing with them.

***

Let us, in fact, extrapolate Schulz’s ideas in the case of the Iraq War. This was the topic of a recent interview between television host Jon Stewart and former United States Secretary of State Condoleezza Rice. In the interview, Rice makes a strong case that Saddam Hussein had, by 2003, proven to be an imminent international security threat that needed to be removed. Yet Stewart is unsatisfied with how the administration chose to convey that threat to the public. “I think the complaint people have,” explains Stewart, “is that the administration was very efficient at selling us on the idea of the danger [Hussein] presented, with very specific thoughts of weapons of mass destruction, which turned out to be not the case” (Rice).

Stewart doesn’t believe that the administration simply made a mistake, but rather that the “conversation they had with the American people was not necessarily an honest one” (Rice). His argument is that if we can, for the moment, take Rice’s statements in the interview at face value—that the administration had a real, justifiable case for taking action against Iraq which they could’ve presented to the public—then why did they construct a rhetoric based on unconfirmed WMDs instead?

The administration construed the WMDs as a material certainty; this certainty would’ve been impossible to establish with any subjective measurement of Hussein’s Iraq as a “threat”. That the Bush administration used a strategy favouring the assurance of certainty is telling: it suggests that political discourse is the way it is—distorted, disingenuous, and dishonest—because the speaker assumes that the listener is incapable of making judgments based on complex information. In essence, the speaker projects a false certainty. In turn, a public that is continually fed these neatly packaged statements of certainty loses its faculty for dealing with complexity, thus perpetuating a cycle that leads to a breakdown of meaningful discourse.

This is really about our expectations of rightness and wrongness. We expect the politician to tell us definitively, “we need to attack Iraq because they have the bombs to attack us,” rather than to say that an invasion of Iraq is a desirable strategy, given a weighing of the advantages and disadvantages using the information available. I am trying to imagine that, if we lived in a culture where we acknowledged fallibility as a norm, the latter statement is something that the politician could’ve presented and that the public could’ve processed. Deception on the one hand, and non-confidence on the other, could both have been avoided.

In the end, the outcome of the war might’ve been the same—except with a much better approval rating. “I’m very regretful about the intelligence [report on WMDs] and the fact that it was not correct,” says Rice. “I am nonetheless very glad that Saddam Hussein is gone, because you would not have the Middle East that you are seeing unfold now” (Rice). She is expressing that a good can result from a wrong decision—and that is something that no one can fully predict. If this is really the case, then why not dismantle the false pretence that we are in possession of perfect information that allows us to make perfect choices, and instead focus on how we can work with what we have, under the circumstances, to the best of our abilities?

***

I think this is what Schulz is getting at. Of course, with only eighteen minutes to speak at the podium, she is understandably brief about it. Her first priority is to stress to the audience that fallibility is real, and she has me convinced. In fact, I’m starting to see it everywhere, and it becomes more amusing all the time. The television interview, for example, closes with Stewart appearing to, quite aptly, poke fun at the polarizing effect of wrongness:

Stewart: And you are with me in thinking that I’m also right?

Rice: Jon, both sides can’t be right… (laughs)

Stewart: Alright, well I really appreciate you coming by and talking, and I do feel like I have a better sense of why we did make that huge mistake.




Works Cited

Liu, Jeff. Comment on “On being wrong.” TED: Ideas worth spreading. TED Conferences. 16 May 2011. Web. 25 Nov. 2011. <http://www.ted.com/talks/lang/en/kathryn_schulz_on_being_wrong.html>.

Rice, Condoleezza. Interview by Jon Stewart. The Daily Show. The Comedy Network. 1 Nov. 2011. Web. 23 Nov. 2011. <http://watch.thecomedynetwork.ca/#clip560829>.

Schulz, Kathryn. “On being wrong.” TED: Ideas worth spreading. TED Conferences. Apr. 2011. Web. 25 Nov. 2011. <http://www.ted.com/talks/lang/en/kathryn_schulz_on_being_wrong.html>.

Wednesday, November 23, 2011

1: They Say...

During a peer discussion of our projects, one classmate was quick to object to my proposed research. “Are we actually wrong,” he challenged, “or are we just not completely right?” What he wanted was a distinction between absolute and functional levels of correctness, and in turn, the concession that we can be “right” in a practical sense.

I find his comments interesting, and relevant, for several reasons:

i.) They point out that the terminology I am using is vague and may be dangerously contentious. At first, I was annoyed with this student’s insistence on debating semantics; only afterwards did I realize that I was operating with my own working definitions in mind, and that they weren’t available to the reader.

When I contend that an individual is “wrong”, I am trying to say that the individual has taken up a position that is still in a state of flux, and that cannot answer definitively to all challenges. Such properties do not apply to a true mathematical equation, such as 2 + 2 = 4, which remains constant and unequivocal. It would, however, be fair to say that an individual who holds first a pro-life position, then later a pro-choice one, was wrong in the former perspective on his own terms, because his latter perspective repudiates it. Furthermore, either stance is also wrong on collective terms, because each carries with it unresolved problems.

Most of our arguments, even if we do not make 180-degree turnarounds, are subject to continuous amendment. As such, they are rarely complete, infallible statements. This level of uncertainty, which is all that we have to stand on, is what I have chosen to call “wrong”. It is problematic terminology, but I must stress that I am more concerned with the qualitative state of mind that this “wrongness” occupies than with any clinical definition. “Wrong” and “right” will have to do for now, at least. Perhaps the research that follows will present a word that better fits my definition.

ii.) This is an instant example of how questions can so quickly snowball and become the kind of overwhelmingly huge topic that I alluded to in the proposal. The above clarification is a desperate attempt to curtail such hazards; just barely suppressed by my ad hoc definition are grand notions like morality and universal truth. How can I speak of being wrong when I am ignorant of what is true?! They beg to be addressed; they mock me and write off my research as amateurish at best, and hardly airtight in the face of philosophical logic.

My response: stick to the plan…

iii.) The student’s reaction tells me that my research touches a nerve in our collective consciousness. We tend not to want to concede that we’re living, breathing bundles of erroneous thought—we want credit for all the things we’re educated on and knowledgeable and wise about. The contention that we may not be so knowledgeable and wise after all, even about the things we’re best at, may be a bit offensive. This cultural reaction is exactly what I’m proposing we critique (put on the chopping block?)—so I’m glad things are off to a good start.

Tuesday, November 22, 2011

0: Proposal

I have spent a great deal of this semester wrestling with the notion of fallibility—that is, our human tendency to be wrong, and what we, individually and collectively, should be doing about it.

I am made increasingly aware of my own capacity for being incorrect with every successive school assignment. Whatever the class, it’s usually the same drill: take a topic and, with a two-week turnaround, produce five to eight pages of lucidly articulated arguments. That’s two weeks to pick out the question, absorb the literature, boil it all down, and finally, choose a side and speak for it.

What ensues is a tumble down the rabbit hole into the chaotic realm of critical research. Given any starting point, one answer leads to another question; one perspective is invalidated by the next; one isolated concern points to a variety of complex causes. I am so often overwhelmed by the realization that whatever I can absorb is only a sliver of the pie. What if I’m missing something important in the equation? What if I’m misinformed? What if I’m wrong?

Yet it would be false idealism to assume that, free from the constraints of the school assignment, I could somehow procure every piece of information I need to come up with the correct conclusion. A research thesis is, after all, merely a microcosm of the decisions we must continually make in real-time. When word broke recently of the Howard Windsor appointment and the NSCAD union negotiations, for example, students, faculty and the public alike scrambled to understand the situation, sorting through a torrent of partisan announcements, media reports, hearsay and rumours. Each of these channels of information was biased, incomplete, imperfect. With only such resources to draw from, it follows that our own perspectives must be equally imperfect.

It would be impossible to put the world on pause while we analyze all the pieces in some clinical fashion that arms us for a perfect discourse. Most problems are volatile and evolving—they evade our best attempts at pinning them down. And, even if we could, would it really allow us to make infallible decisions?

The proposed body of research will examine the problem of fallibility and its place in our society. More than just the notion that we are often wrong, I am interested in our culture’s tendency to deny or avoid such a perspective. This tendency, as a result, shapes the ways we present ourselves and engage with others. My research will attempt to uncover where this ideology comes from, how it affects us, and how we can solve the difficulties it poses.

Clearly, my attempt at answering these questions is in itself a struggle against being wrong about them. Underlying this project are the difficulties I face in confronting my own fallibility, and my desire to find the means to overcome it.

My methodology for writing this report is informed by my very resistance to committing to words out of a fear of being wrong. In blog form, I will be publishing bite-sized portions of research and analysis as I proceed. Each entry will respond to only one or two research sources, and thereby necessarily present in-progress arguments, made without the luxury of the full picture, subject to continuous amendment. I see this as a mechanism that forces me not so much to negotiate the perils of imperfect information as to accept them. It is, at the same time, a mechanism that grants me permission to be wrong—the potential for mistakes is laid bare by the process. After all, this methodology already exists in our minds—it is only out of our anxiety for correctness in the public sphere that the formative process of arriving at our ideas is usually banished to obscurity.

It is my hope that this project, through both content and process, achieves the dismantling of the ideology of wrong that binds us to an overriding concern for being right; and in its place, presents new approaches for confronting the questions that we encounter in our lives.