Things That Go Bump in the Mind

Look for a new post every Sunday morning.

Sunday, September 30, 2012

Return to Plato's Cave

    I went to a formal dinner this week for my wife's work, and so I was sitting with some very nice people whom I was meeting for the first time, and naturally, one of the first topics of conversation that came up (as typically happens with strangers) was “what do you do?” I admitted to being an English teacher – which can be risky, because for a lot of people, when they hear I am an English teacher, they immediately start to picture me in a grammar nazi uniform. As with most other professions, there are a few bad apples who give the rest of us a bellyache, but the vast majority of English teachers I know are smart enough not to correct other people's English in social situations. Not only is the practice of correcting other people's English impolite, it demonstrates a serious misunderstanding regarding the existence of “Standard English.”
     Truth be told, there is no single, monolithic “Standard English” but a wide range of Standard English traditions and conventions that need to be adapted depending on where you are, who you are with, and the purpose of your conversation (or writing). The ancient Greeks had an excellent term for this: kairos, which (like most words) had a variety of meanings, but the one I am referring to meant “a supreme awareness of time and place.” In other words, kairos is the awareness of where you are, who you are with, and what is going on. Teachers who correct other people's English outside of the classroom seem to have an underdeveloped sense of kairos; they are missing an important connection between what they know and how it applies to the conversation at hand. One of the alternative meanings for kairos was “weather” and this, of course, makes a lot of sense; anyone who is intelligent enough to know how to dress for the weather should be smart enough to adapt their language for their social climates as well.
     After I confessed to being a writing teacher, a medical doctor who was sitting at the table asked about my Ph.D. and my specialization within the field of English. I replied by saying that I wrote my dissertation on the topic of rhetorical theory, and that as an academic, I consider myself a rhetorical theorist. For a brief moment I imagined I could hear the wheels spinning in other people's heads. It was like the sound of milk hitting the Rice Krispies.
     When the medical doctor stated he was a surgeon, everyone at the table had an immediate sense of what he does – he uses scalpels to fix people. When I said I specialized in rhetoric, I think the people at the table didn't know what to think. This happens often. Whenever I say I specialize in rhetoric, I get the feeling that many people wonder if I am making it up as I go along. The simple reason for this is that lots of intelligent and educated people have never heard the word “rhetoric” or have no memory of hearing the word. Furthermore, for perhaps the majority of the people who do have some mental association with the word, the only time they have ever heard “rhetoric” used has been from politicians and newscasters using the term as an insult (“You cannot trust anything my opponent says in this election; everything he says is just the rhetoric of empty promises.”). In other words, many people associate the term rhetoric with “whatever people say to get what they want when they really don't mean it.”
     No wonder, then, that when people hear I am a rhetorical theorist they wonder if I'm either a charlatan or someone who specializes in teaching others to be charlatans. But, seriously, folks, I am not a charlatan (I can almost hear Richard Nixon's voice in my head saying “I am not a crook” when I write “I am not a charlatan”). In fact, what drew me into the field in the first place is the discipline's fundamental concern with connecting truth with advocacy. If there is anything reverential regarding the study of rhetoric, it is the understanding that truth itself is sacred. And here lies the paradox: rhetoricians hold truth to be both sacred and suspect. Perhaps the most essential problem for rhetorical theorists is explaining how truth can be both something we need to revere and something we need to challenge.
     Rhetoricians in the time of Plato were known as “Sophists.” Sophists were traveling teachers who helped people understand how to make better arguments because in those days people could not just hire a lawyer to argue on their behalf; they had to do it themselves. Plato, who as a philosopher had little regard for the Sophists, accused them of “cookery.” By this, Plato meant that if an argument were a stew, then the Sophists were more concerned with how they flavored their ideas in order to make them appealing to others than they were with serving ideas that were wholesome and sound to begin with. In other words, Plato believed that truth did not need to be sugar-coated in order to be accepted.
     The problem with Plato's perspective, according to the Sophists, is that just as there are different types of ingredients that you may want to put into or leave out of a stew, there are different types of truths as well. For Plato, the only truths that mattered were the ones that would be just as true a thousand years from now as they were a thousand years ago (you could call them “perennial” or “eternal” truths). While the Sophists understood Plato's commitment to his type of “philosophical” truths, they also valued the type of “circumstantial” truths that would help people decide whether to invest in a higher wall around the city or spend the money instead on better weapons for the soldiers. Truth, we learned from the Sophists, is almost always contextual, and while people can certainly be shamelessly deceptive by leaving important details out of their arguments with others, the impossibility of including every relevant detail precludes the belief in ever getting to the “whole truth.”
     One of Plato's most important lessons comes from his dialogue “The Republic.” Here, Plato (through the character he created from his own teacher, Socrates) tells a parable of people who grew up in a cave, chained to chairs, and forced to watch shadows on the cave's wall. For these people, Plato argues, reality is the shadows they see. Reality then is nothing more than a conflation of illusions we have perceived with our senses. One day in the cave, an extraordinary person is able to free himself from the chair and make his way from the cave. Outside for the first time, the escapee sees the earth for what it actually is. The light of the sun reveals a reality unknown to anyone raised to believe in mere shadows. Plato, of course, meant for the escapee to be a metaphor for the philosopher who can see beyond the illusions of ordinary thinking to witness the true reality that lies beyond. In Plato's story, eventually the escapee decides he has a moral duty to help his fellow citizens escape the cave as well and returns to them to show them how to free themselves from their chains and see beyond the shadows they consider to be reality. The story ends with the citizens killing the escapee because he upset them too much with his talk of an alternative and better reality.
     Here is, then, my explanation for what “rhetoricians” or “rhetorical theorists” actually do. While I am willing to go along with Plato that perhaps it takes a “philosopher” to escape the cave, it takes a “sophist” to convince others to leave the cave as well. If you understand the heightened reality outside of the cave, then you had better also understand the kairos of your audience inside the cave. This understanding is so important that it is sometimes literally a matter of life and death. In other words, it is not enough to know more than others; you must also understand what it takes to explain it to them. If not, then you might as well keep your funny ideas to yourself.
     Keep thinking rhetorically, readers, and I'll see you again next week.

Sunday, September 23, 2012

Ouija Boards: "Yes Yes" or "No No" ?

(The headstone of Elijah Bond)

    Edgar Allan Poe, arguably one of the most famous writers in American history, only wrote one novel, The Narrative of Arthur Gordon Pym of Nantucket. Published in 1838, this novel relates the story of the sea adventures of a young man, Pym, who survives several close calls with death including a mutiny, a shipwreck, a stint adrift in a lifeboat, and a tribe of treacherous natives. Although the book was presumably hugely influential on the writing of Herman Melville's Moby Dick and Jules Verne's Twenty Thousand Leagues Under the Sea, the novel is not read much today and is often not even mentioned in English classes studying Poe's short stories. Poe, himself, did not care for the novel and even referred to it later in his life as “a very silly book.”
     In one episode of the book, Pym is left stranded with three other people after their ship capsizes in a storm. After many days awaiting rescue without sufficient food or water, Pym and his shipmates agree to cannibalize a cabin boy named Richard Parker. In 1884, forty-six years after the publication of Poe's only novel, a real ship capsized in a storm; left without enough to eat or drink, the four survivors chose to eat one of their own in order to survive, a cabin boy named Richard Parker.
     Although the coincidence between the story Poe wrote in his novel and the story of the real life cabin boy with the same name seems pretty far-fetched, it serves to underscore the fact that strange occurrences do happen, and they happen more often than we probably imagine. Given that billions of people have inhabited this planet throughout history and that billions of people continue to inhabit this planet today, even when the odds of something occurring are extremely remote, say the odds are only one in a million, the chances are pretty good that some random person will be there to take notice and tell others about it. And this, of course, leads to a very interesting rhetorical paradox: we understand on an intellectual level that from time to time very rare and random events are going to happen, but the mere acknowledgement that unlikely flukes happen is not enough to convince us to believe in the nearly impossible when someone tells us it has happened to them.
     The problem, of course, is that whenever people tell us that they have witnessed firsthand something with only a remote probability of being true, we have to weigh the odds of the occurrence being true against the likelihood of the storyteller having some motive to be lying. The credibility we afford others sometimes comes at the price of our own credulity. Not every far-fetched story is the truth; not every seeming whopper is a lie. The aptitude for discerning the difference between a trustworthy person telling us something that sounds incredible and a liar exploiting our desire to believe in something interesting and remote is a verbal skill that has challenged rhetoricians for more than 2,000 years.
     Although Plato taught his students (including Aristotle) that rhetoricians were more interested in winning arguments than discerning “true belief,” Aristotle diverged from his mentor and taught his own students that rhetoric had a legitimate role in helping people make up their minds when dealing with understandings that lie beyond what could be proved through scientific demonstration or philosophical fiat. Aristotle divided the ways people convince others into three basic modes: the use of logic (logos), the use of emotions (pathos), and the use of authority (ethos). When arguing through either logic or emotion, the credibility of the speaker is not the primary engine that powers the persuasion; for Aristotle, the issue of trustworthiness emerges from the integrity the speaker develops while talking with his audience. Since Aristotle's time, rhetorical theorists who have studied and reflected upon the power of ethos have learned to take the writer's (or speaker's) previous reputation into account when considering how credibility is molded by what different audiences think of them.
     When people argue something from a position of authority, ethos typically works as a shortcut around the logical. This is to say, that if you trust in someone's authority, then you do not need their evidence or reasoning to believe what they are telling you. When you go to the doctor, for example, you might ask for an explanation for how she came to her diagnosis, but you are just as likely to assume that given your trust in your doctor's experience, you do not need to know how she came to her conclusions about your condition.
     Unfortunately, in English (and many other languages for that matter), we do not have a good vocabulary for distinguishing between the type of authority that arises from expertise (such as a medical doctor) and the type of authority that arises from power (such as an employer who has the prerogative to fire you if you do not go along with what they are telling you). Power and expertise are not, of course, mutually exclusive and frequently people have an odd combination of both (such as a judge who is both an expert in the law and who holds the power to put you in jail).
     As mentioned before in this blog, there is an important difference between having a reason to believe something and a motive to believe something. If, for example, your employer asks you to do something at work that you suspect is illegal, you have a motive (to keep your job) to believe your boss if she insists that what she's asking you to do is legal, but you may not have a good reason to believe her (based upon your own judgment about what is and is not against the law). Just as we all recognize that someone telling us something that sounds far-fetched could be offering a complete fabrication, we also need to recognize that sometimes people with power use that power to benefit themselves, and some authorities (especially those whose influence originates in power rather than expertise) have no more regard for the truth than the storyteller who relies upon our gullibility to go along with a shaggy dog story.
     Whom we choose to believe must come from a persistent consciousness of the factors that make other people credible. Whenever someone is telling us something that goes against what we think we already know (“Wow, that sounds pretty unlikely” says the voice in our head), then we need to consider both what we already know about that person (not just, perhaps, what merely makes them popular, wealthy, or famous but the aspects of their character that would strengthen our perception of their integrity) and what we might suspect their motives are for telling us the information that runs against our own experience or common sense.
     In returning to my weekly theme of considering the nature and reality of the paranormal or the supernatural, let's see how this type of rhetorical thinking works with Ouija boards. Today Ouija boards are a trademarked product of Parker Brothers (a subsidiary of Hasbro), which is the same toy and game company that makes Monopoly, Clue, and Sorry!. Ouija boards, however, have a long and complicated history dating back to the late 19th century, and about the most we can say definitively about their origins is that the first person to patent the Ouija board was an attorney named Elijah Bond. How much Bond actually had to do with the creation of the board is a matter of wide speculation since other “talking boards” or “seance boards” had been around for at least twenty years before Bond secured his patent in 1891. Although other companies had much greater success marketing Ouija boards than Bond's own company, it is amusing to note that his company was driven out of business by the unfortunate associations that became attached to its logo decades after its founding in 1907; Bond's company was called The Swastika Novelty Company and, yes, its logo was a swastika. If there were any real prophetic powers to a Ouija board, you would think that Bond and his coworkers would have seen that coming.
     Today's Ouija boards glow in the dark, an innovation that in my humble opinion both adds to the spookiness of the “game” by allowing participants to sit in even darker rooms while still being able to read the messages and subtracts from the traditional charm of the aesthetics of the plain wooden board covered in letters, numbers, and simple “yes” or “no” answers. It is interesting to me that even the manufacturers of Ouija boards (whose name is supposedly derived from a combination of the French and German words for “yes”) find it difficult to describe playing with the occult board and planchette as a “game.” What other games can we think of that have no scoring, competition, or even defined rules about what constitutes the end of the game? Presumably, participants know a “game” is over when the board tells them it's had enough, but that kind of activity is, at least for me, difficult to classify as a “game.”
     Here, finally, is the rhetorical point I wanted to make about ethos and Ouija boards. Putting aside for a moment the issue of whether the boards are actually channelling spirits from the Great Beyond or are more likely the product of the participants' subconscious intentions to freak each other out, let us – for the sake of argument – assume we somehow are indeed “talking” with invisible beings through these devices – why should we believe anything they tell us? Would you walk up to any random stranger (say at a McDonald's or a Walmart) and expect reliable information from them without knowing anything about them? Before we take to heart anything others have to tell us (whether they are “speaking” through a floating plastic disk on a game board, across a cash register, or via a platform at a political rally), don't you think it's kind of important to know what motivates their answers before you start letting them answer your questions?
     Keep thinking rhetorically, folks, and I'll be back next week. If somehow I get run over by a bus in the next few days, you still know how to reach me, but do me a small favor and ask for some ID, okay?

Sunday, September 16, 2012

Staying up with the Monsters of Midnight

      When I was a kid growing up in the 1960's, my parents (like most of the parents of the time, I suspect) had a non-negotiable bedtime policy. While I was in elementary school, if I got out of bed after nine o'clock on a school night, I had better be in need of medical attention requiring an emergency room visit or I suspected my parents would supply cause for such a need. Friday and Saturday nights were different, however. As long as we (and by “we” I mean some combination of brothers, cousins, or the ever-changing rotation of neighborhood friends who could get permission to sleep over) were not loud enough to wake the dead, we had permission to stay up late and watch the wonderfully terrible movies that were shown on late night television in those days.

     We had only three TV channels (and by “we” I mean everyone in the country in those pre-cable years), and the local station managers seemed to know full well their only audience after the news went off at 11:30 on Friday and Saturday nights were pre-adolescents who had a ravenous appetite for monster movies. It was almost like a nationwide psychological experiment with school-age children: get youngsters to stay up late on weekends to watch movies that would have been only marginally scary in the light of day, but which somehow turned into major traumatic episodes once the parents were asleep – for the sole purpose of finding out if there actually is anything scarier on earth than waking up mom and dad. The question of which is scarier, the prospect of blood-sucking aliens or sleep-deprived parents, is not hypothetical to anyone of my generation; while atomically-mutated vampires could potentially take one's soul, parents could make you go to bed. On Monday mornings, anyone on the playground who could not share details of the previous weekend's monsterfest risked being outed as someone who couldn't handle his Chiller Theater, and that, my friend, left you in the Elementary School Circle of Shame a meager one degree above bed-wetters and booger-eaters.

     Back in those days, it took anywhere from five to ten years for a movie to trickle into a late-night, weekend thriller spot. Horror movies of that era (of the late 1950's and early 60's) did not bludgeon their viewers with gore. Our parents went to sleep knowing we would not see anything inappropriate because they actually edited movies for TV in those days. We did not need to see ultra-realistic depictions of decapitations, mutilations, or cannibalizations to be frightened out of our wits; we knew full well the scariest moment in the evening was when, after the movie was over and the TV was off, we would have to turn the lights off and make a run for it to the bedroom. We did not have to see what was waiting for us in the dark; we had already imagined it.

     It is probably difficult today for younger people to relate to our fascination with these monster movies and horror films. But in those pre-Star Wars days, kids often spent their paper-route money on issues of a magazine called Famous Monsters of Filmland, and I knew more than a few kids in my neighborhood who used their birthday money to assemble, glue, and paint plastic models of Dracula, Frankenstein's Monster, The Wolfman, and The Mummy (all of which featured parts that “glow in the dark!”).

     Without realizing it at the time, I know now that those late night thrillers offered people of my generation an alternative education about how adults think and behave. Through these movies, we got to see (alone and on our own time) that adults not only screwed up from time to time (especially when it came to allowing all manner of animals from insects, to frogs, to even rabbits to mutate into humongous killing machines in the wake of accidental exposure to radiation following the testing of nuclear weapons), but that few adults knew how to survive in the face of supernatural danger they did not understand (“I can't believe he's actually going to go down into that basement,” we would say to each other; we were kids and we knew not to go into the basement even if there was only a one in a million chance there was a dangerous monster down there). Sometimes those films taught us rather strange notions of a causal link between attractiveness and evil; I remember one film in which the only way to tell the good creature from the bad creature was that the bad creature had crooked teeth (apparently in the theaters, the creatures had been different colors, but on my parents' black and white television, they were identical shades of gray).

     Undoubtedly one of the messages that somehow got passed on to us in movie after movie was the idea that while science was cool and could produce mind-boggling results, there is always a line you really do not want to cross if you are a scientist because too much science will drive a person insane. Suppose you're a scientist and you have just figured out how to make a ray that will turn you into a crazy 60-foot bald guy with one eye. What are you going to do? If you are like every scientist I ever saw on those Saturday nights in my childhood, you are going to wait until all of your coworkers (who, by the way, have warned you not to try out that ray) have punched out for the night, and then you are going to fire the ray up and turn it on yourself. So what if you have a beautiful, B-movie girlfriend at home who can barely contain her ample breasts in the tight sweaters she wears? This is science, man, and you owe it to the rest of humanity to find out what is going to happen when you turn the ray on yourself. Besides, there just happens to be an army base a few miles down the road, and those guys need some target practice with their useless surface-to-air missiles. (Don't worry: just because you are impervious to heavy artillery does not mean you won't be stopped by a virus or a concentrated poison your girlfriend will trick you into drinking.)

     As an adult all these years later, I am still both fascinated and horrified at the prospect of scientific breakthroughs. When I compare the handheld calculators that first became widely available during my freshman year of high school to the portable computer that is my current iPhone, it is easy to get seduced into thinking that eventually scientists who are much smarter than I am are going to get around to solving the real horrors the human race actually faces (such as feeding an overpopulated world or surviving the ongoing catastrophes of rapid environmental change). But, as much as I want to believe there are going to be solutions, I cannot escape the lessons of my early childhood which taught me to believe that with every innovation there are also going to be new problems, and sometimes those new problems can be more dangerous and more deadly than anything we have already faced.

     As I mentioned last week in discussing the relationship between genius and genies, science is terrific at dealing with physical realities it can measure and manipulate. But when it comes to making decisions about the appropriate ethical, moral, or political usage of its results, science is always going to come up with rays, and there's always going to be someone who will want to stand in front of them just to see what's going to happen next. Rhetorically, one of the common methods of persuasion is the “ethos” argument, which relies upon the credibility of authority to convince others rather than on logic (“logos”) or passion (“pathos”). The lesson here is to consider whether an authority's ability to convince is based in expertise or power because neither may be a good reason to believe someone if (a) the expertise being called upon is not within the domain of their experience (such as a scientist who is arguing politics) or (b) the power to compel a behavior is not relevant to the argument for a belief (a threat, for instance, offers a great motive to do something but gives a terrible reason to believe something). I've run out of room for my weekly post so I'll go into more depth on this aspect of rhetorical theory next week. (And for the people of my generation, that's “same bat-time, same bat-channel.”)

Sunday, September 9, 2012

Genius and Genies

     Very early in the 20th Century after handling the routine duties of his job, an obscure clerk working for the Swiss patent office sat at his desk one afternoon, and (as was his habit when he had time on his hands) he caught himself wondering once again about the speed of light. The idea that the speed of light was a constant 186,000 miles a second bothered the clerk in the same way a grain of sand will bother a pearl-producing mollusk. The clerk wondered what would happen if he were traveling at the speed of light and tried to look at his reflection in a mirror. Would the light traveling from his face have to travel faster than the speed of light in order for him to see himself? If he were holding the mirror in front of his face prior to reaching the speed of light, what would happen when he hit the speed of light? Would his face freeze in the mirror, disappear, or continue to look exactly the same? Nearly a century later, comedian Steven Wright incorporated this same idea into his standup comedy routine by asking, "If you are in a spaceship that is traveling at the speed of light, and you turn on the headlights, does anything happen?"
     Working in the Swiss patent office gave Albert Einstein time to think and time to write about what he was thinking. In 1921, less than twenty years after sitting at his government-issued desk and pondering the nature of the universe, the power of his musings provided Einstein with a Nobel Prize in physics. In 1905, while working at an ordinary desk without any special laboratory equipment or machinery, Einstein was able to think his way through some of the thorniest mental challenges of theoretical physics and publish four academic papers that both revolutionized scientific understanding of light, matter, and energy, and established his reputation as one of the greatest scientific minds in human history. Today, more than 50 years after his death, Einstein's name remains a synonym for genius.
     As a reader, writer, and scholar, I admire Albert Einstein for a variety of reasons. Although this may sound counterintuitive, perhaps the quality I like most about his thinking is how much of it is over my head. I cannot even begin to estimate how much of Einstein's theoretical explanations for the motion of small particles or the relative nature of time lies beyond the capacity of my limited intelligence to understand. I like science, especially physics and astronomy, and I have read a small bookshelf of texts written to explain quantum mechanics, string theory, and special relativity to curious (but mathematically challenged) science groupies such as myself. And this leads me to the rhetorical topic I'd like to broach in this week's post: the issue of expertise.
     If Einstein says that the speed of light is constant, but spans of time and lengths are not, then I just have to take his word on that. From what I understand, physics at the atomic scale just does not play by the same rules Newton came up with to explain the physics of our everyday, apple-dropping planet. When I read Einstein, I can get a fuzzy gist of his arguments, but the details are as indecipherable to me as Chinese astrology. Of course, the good news is this: if I can trust in Einstein's intellect, then I don't necessarily need to understand it to accept it. I can take Einstein's explanations for how photons behave because all the other people who do understand the math can attest that his calculations verify the theory.
     Is such an acceptance of Einstein's reality a matter of faith in science or magic? Is Einstein as much a genie as he is a genius? How wide is the line between understanding space and time as dualities on a single mathematical continuum and understanding the interconnectedness of disparate lifeforms through transcendent spiritual connections? The line between the magical and the mundane may be thinner than you think.
     Once upon a time, shortly before humans gained the ability to write things down, brutish local kings began the practice of demanding a portion of their subjects' agricultural output as payment for keeping other brutish thugs from killing the farmers and pillaging everything they worked so hard to grow. As territories were established through tribal warfare, indigenous shamans appeared and began convincing these local potentates that every region had its own set of invisible deities who could be coaxed into either helping or hindering the efforts to make war with their nearby rivals. For a small share of the royal gleanings from the food producers, the shamans offered to mollify the nearby spirits and cajole them into assisting their king's military endeavors. The kings – who were looking for any advantage they could find in annihilating their enemies – accepted the shamans' magical assistance, and eventually they decided the priesthood under their employment as spirit-handlers could serve another important function beyond merely keeping the neighboring gods happy; they could also make themselves useful as trusty tax collectors. Here's why: first, sending the king's own warriors to collect the food tithing meant being several warriors short if they were suddenly invaded by their enemies, and second, the kings wondered whether it might be a mistake to trust their warriors not to cut separate deals with the food producers who could then turn around and use the payments they collected to finance their own rival regimes.
     Ironically, when shamanic priests began working as tax collectors in these early civilizations, their need to keep annual records of how much food was delivered by individual farmers actually served to increase their reputation as mediums of the supernatural. To you, for whom literacy may seem as ordinary as breathing, it may not be obvious that writing would be regarded as extraordinarily magical, but imagine how mind-boggling it would have been to an ancient peasant to have some stranger know exactly how much grain you gave the king the year before and the year before that. At first, the lines etched on clay tablets (which would later be replaced with paper made from papyrus) were nothing more than marks that indicated a one-to-one correspondence with the measure of grain that had been collected; however, in one of the greatest intellectual leaps the human race ever made, some priest/tax collector had the brilliant idea that if a small mark could represent a number, a different small mark could represent what type of grain had been collected, and the whole concept of writing as a symbolic act of representation became the priesthood's greatest secret. When asked, “How can you possibly know what we gave to the king last year?”, the answer “It's a magic known only to our initiates” only served to solidify and enhance the priesthood's reputation as sorcerers and necromancers.
     Arthur C. Clarke, an influential British science-fiction writer, once postulated, “Any sufficiently advanced technology is indistinguishable from magic.” By this he meant that if someone has no idea how a technology could work, it might as well be considered magic. Imagine, for instance, if you could travel back in time a mere 100 years and demonstrate the capabilities of your smartphone to an obscure clerk working in a Swiss patent office. Even the smartest person on the planet, shown a small device the size of a bar of soap that could record video and sound or answer any random question put to it, would have to wonder whether an actual genie was confined within. Travel even further back to the earliest stages of human history, and you would see just as much amazement from people by merely demonstrating how you could record their thoughts on paper.
     The point of the story is this: given the human inclination to accept magical explanations for natural phenomena, we need to be cognizant when one type of expertise is conflated with another type of expertise. Science, for instance, can give us tremendous insights into the workings of everything from electrons to star systems, but the expertise of science is limited by its dependence on a methodology that can only make pronouncements regarding physical, measurable phenomena. The cultural spheres of ethics, laws, morality, and art are entirely beyond the measure of science because they are manifestly not physical phenomena. This means that while Albert Einstein may have been the world's smartest person when it came to theorizing the energy that could be unleashed by splitting atoms, it does not mean he would be any smarter than any of the rest of us when it comes to making the political decision to actually use a nuclear weapon to achieve a vital social objective. By the time Einstein, late in his life, argued the necessity of doing away with nuclear weapons, that genie had already escaped its bottle once and for all.

Sunday, September 2, 2012

Do You Believe in Magic (Part 2)

I want to reach out and grab ya -- Steve Miller

Last week, I began a discussion of the relationship between magic and reality by introducing the rhetorical concepts of ontology, epistemology, and doxa.  To briefly recap: Ontology refers to the philosophy of “what is real?” so an ontological question regarding magic is “Does magic exist?”  Epistemology is concerned with understanding the nature of knowledge, and thus, an epistemic question would be, “What can we learn about magic?”  The study of doxa considers the intersection between reality and knowledge and, thus, questions the general assumptions that typically go unchallenged because popular opinion would not think to dispute them (doxa sometimes operates under the pseudonym “common sense”).  A good example of a doxastic question regarding magic would be, “If superstitions are the foolish convictions of a bygone era, are there any contemporary magical beliefs that we could differentiate from ordinary, traditional folklore?” Doxa might also ask, “If one begins with the assumption that science can ‘disprove’ all things magical, how can magical ideas survive the razor of scientific scrutiny?”

The purpose of this blog is not to answer these questions, but to survey the rhetorical landscapes from which other people attempt to answer these questions.  One of the most basic assumptions rhetoricians make is the understanding that where questions get asked and who gets asked to answer them have significant and inevitable consequences for the answers that are produced.  This is to say, for example, if you ask about the propriety of drinking alcohol, not only do we need to pay attention to whether we are asking a doctor, a lawyer, or a priest, we may also notice a conspicuous difference between answers originating in Rome, New York, or Salt Lake City.

In coming months, as the topics jump from one “out there” or “peculiar” belief to another (the existence of Bigfoot, ghosts, space aliens, and the Bermuda Triangle, to name a few), perhaps the most fundamental question this blog will explore is, “Do people have a right to maintain odd beliefs, especially if the vast majority of society considers these beliefs to be eccentric, antiquated, or irrational?”  My simple answer is “Yes, people have a right to maintain strange or far-fetched ideas.”  However, the other side of the coin remains just as viable: if people maintain some inalienable right to strange ideas, then others also have a right to challenge the veracity of those strange beliefs.   Sometimes it is in society’s best interest to challenge fringe ideas when they threaten the welfare of others who could be harmed by those beliefs.  If, for example, parents belong to a religious sect that fundamentally believes it is immoral to seek medical care for their children, I would argue the state’s concern for the welfare of these children supersedes parental rights when lives are on the line.  I would say the state absolutely has the right to intervene when a six-year-old is in dire need of an emergency appendectomy.

Returning, then, to such questions as “Is magic real?”, “What can we know about magic?”, and “Are magical beliefs the foolish hangover of medieval thinking?”, it is important to pause to consider “Who are we asking?” and “What do we think makes someone qualified to answer these questions?” before being ready to accept “authoritative” answers.  When dealing with ideas that most people might consider improbable or dubious, issues of credibility become paramount.  

Frequently in contemporary society, popular opinion is shaped by the political ideology of famous people who may have nothing but their fame to support their perspectives.  Fame, in and of itself, does not confer expertise, and thus, celebrity brings nothing extra to support the opinions of people who just happen to find themselves famous enough to be put in front of microphones.  Furthermore, many celebrity pundits who develop massive followings on pseudo-news channels admit in private that they do not actually subscribe to the ideas they loudly and vehemently promote on their television or radio shows; when challenged, they sometimes defend their distortions by claiming that they are not really “news-people” but are “entertainers” and that, somehow, this gives them license to say whatever they believe will give them their highest ratings, regardless of whether their propaganda has any relationship at all to any known “facts” or “truth.”

Unfortunately, the popularity of news channels that are thinly disguised vehicles for propagating corporate disinformation has eroded many otherwise intelligent people’s ability to question rhetorically the correctness of what they are being told.  For many people, truth has become so dislodged from any ontological expectations of reality that these viewers have been left enfeebled by the cynical calculations of whatever corporate hucksters think they can get away with.  When McDonald's, for example, advertises that it is “the official restaurant of the Olympics,” how many of its patrons actually believe the world’s top athletes train for years to reach the peak of their physical abilities only to consume a hearty meal of Big Macs and fries before competing on the world stage?

When it comes to the ontological, epistemic, or doxastic truth of magic, it is not my place to say what is (or is not) real, what can (or cannot) be known, or what should (or should not) be popularly accepted as true.  It is my place, instead, to point out where misleading rhetorical intentions can divert people from finding their own answers.  Science, for example, can tell us plenty about the natural world, but it is ill-equipped to evaluate the truth of phenomena it cannot determine how to measure.  The window of truth that science offers us affords a breathtaking view of reality, but it would be a substantial mistake to believe that any other perspectives (from any other windows) only serve to distort our ability to know “what is out there.”  I will have much more to say about this, of course, in the weeks to come.