Whether you are a student currently studying the moral arguments that pertain to controversial issues, a lifelong learner with an insatiable thirst for knowledge, or, especially, someone wading into the intimidating realm of passionately polarized issues for the first time, here is a little background information regarding the subject of applied ethics that you may find helpful.
• Applied Ethics Terminology and Background Information
Before you delve into the various controversial topics that throw our country into debate, give birth to catchy news headlines, and catalyze idealistic uproars, it is necessary for us to establish the specific, important nomenclature that is often needed in order to discuss and debate these topics properly. Think of this as putting on your armor before you go into battle, or as an intellectual stretching session before you run a mental marathon.
The discussions surrounding contemporary ethical issues are often composed of a series of terminological dichotomies. First and foremost are the schools of thought employed by those who have stopped to consider which types of human behavior can be considered right and which wrong.
The word philosophy comes from the Greek “philia” (love) and “sophia” (wisdom), which together mean “love of wisdom.” Just like medicine, philosophy has several branches. When you meet a doctor, you would probably ask, “What kind of doctor are you?” And she might answer, “I’m a pediatrician… or a neurologist… or a cardiac surgeon…” or whatever. Likewise, when you meet a philosopher, it is necessary for clarification purposes to ask, “What kind of philosopher are you?” (For the record, if he dodges the question, or doesn’t know what you’re talking about, that means he’s just unemployed or going through a midlife crisis.) There are many types of philosophical studies and pursuits. They include, but are not limited to…
-Metaphysics-
This branch of philosophy deals with questions that lie “beyond the realm of physics” (from the Greek root “meta,” meaning “beyond”), such as, “What is the nature of substance?” and “What is the meaning of life?”
-Epistemology-
From the Greek “episteme,” one of the many Greek words for “knowledge,” this branch concerns exactly that: knowledge. How do we know what we know? What is truth? Is knowledge more valuable than belief? Can my senses be trusted?
-Aesthetics-
Derived from the Greek word “aisthetikos,” meaning “sense perception,” aesthetics is concerned with the nature, categorization, and identity of art and beauty. Many art, film, and food critics and columnists (the educated, good ones, anyway) studied aesthetics. They want to know: What is art? What is good art? What makes a thing valuable? Is there a definition for beauty?
-Ontology (the Philosophy of Religion)-
Concerned with the nature and existence of God, ontologists attempt to grapple with the validity of an omnipotent, intangible existence. They want to know which qualities all religions share, which qualities are necessarily bound up with God, and all things theistically related. For the record, religious positions fall into one of three categories…
A.) Theists believe in the existence of a higher power or powers. Monotheists believe in only one god; polytheists believe in more than one.
B.) The word “atheist” stems from the Greek prefix “a-,” meaning “without,” and “theos,” meaning “god.” Therefore the word “atheist” literally means “no god,” and atheists believe there is no God, that God does not exist.
C.) Agnostics suspend judgment and choose to withhold assertions regarding God’s existence until death befalls them. Since agnostics recognize that there is no proof either way, they simply refuse to weigh in on the god question at all. The word “agnosticism” is quite a fitting title because, as with “atheism,” “a-” means “without,” and “gnosis” refers to knowledge. Agnostics assert that they possess no knowledge of God’s existence or nonexistence. Some agnostics even claim that no mortal can possibly know whether God exists, and so they may believe that theists and atheists alike are deluding themselves with mistaken assertions of knowledge.
For more on agnosticism and faith classifications in general, read “True Believer” and “An Unlikely Friendship,” respectively, on morellaty.com.
-Political Philosophy-
Just as the name implies, political philosophers are concerned with politics and law. Their pursuits include questions regarding the “best” form of government, the proper way to coexist with one another, and the hierarchy of decision-makers.
-Ethics-
Ethicists are fascinated by human behavior. They are an unlikely cross between psychologists, anthropologists, philosophers, and mediators. They are perpetually concerned with the “right thing to do,” and with wondering why we can’t “just all get along.” There are many kinds of ethicists, including bioethicists, applied ethicists, rights theorists, ethical historians, and theoretical ethicists. Ethical study involves intense observation and being a slave to logic, reason, argument construction, and truth.
The word “morality” often refers to a personal set of values or ideals that one employs to live one’s life. “Ethics” is the academic pursuit, or study, of how human beings ought to behave; in other words, “ethics” refers to whether a human action can be seen as right or wrong. The difference between ethics and morality is a subtle distinction, and the two terms are often used interchangeably; there’s nothing wrong with that. If something is moral, as opposed to immoral, then it is also ethical, rather than unethical.
The philosophical branch of ethics is composed of different tactical approaches to normative study, and each type constitutes a specific belief system.
Relativism refers to a system of thought corresponding to the notion that doing what is normal is synonymous with doing what is right, and that what is abnormal is wrong. In other words, “When in Rome, do as the Romans do.” “Cultural relativism” specifically applies to the idea that if a given behavior is normal in a certain culture, then one ought to conform to that particular cultural norm. This school of thought arose out of a time in history in which anthropology was a popular form of entertainment as well as a scholastic endeavor. Most college freshmen tend to be relativists, and this fits in nicely with their indoctrinated grade-school experience. After all, if high school teaches us anything, it’s how to at least appear to be normal… or else suffer the consequences of ostracism and loneliness!

Despite the ample objections against it, relativism has its place among serious ethics. When it comes to etiquette, relativism can be quite useful in worldly travel. In Japan, one should abide by the cultural norm of bowing lower than one’s “superior” in order to show respect. In the Middle East, make sure you shake hands with people using your right hand and never your left.

However, when it comes to more pressing issues, relativism loses its logical foothold, especially in regard to human-rights-violating atrocities. For example, if you were to take your young daughter on a vacation to Egypt or Saudi Arabia to visit your hypothetical homeland, you probably wouldn’t be happy about the idea of submitting her to the custom and cultural idiosyncrasy of female genital mutilation! And if, for economic reasons, you made your Middle East excursion more permanent, then I also doubt that you would feel the need to submit your daughter to the cultural norm of corporal punishment in response to her “crime” of texting a male classmate. For these reasons and more, relativism seems to apply to some circumstances but not others.
“Objectivism” is quite the opposite of relativism. Objectivism refers to the idea that there is a universal system of morals and ethical guidelines that applies to everyone, everywhere, all the time (hence its universality). The difficult-to-pull-off “trick” is discovering and implementing a universal set of moral principles that everyone can follow. While most people mentally “default” to the morality dictated to them by religious scripture or law, true objectivists find truth in logical reasoning. For them, what a person ought to do is synonymous with whichever moral mandate is the most logical and has the fewest devastating objections or counterexamples against it. Logic is just like math, except that instead of utilizing numbers, logic uses letters to symbolize arguments. For example, in his logical works (collected by later editors as the Organon), Aristotle professed the beauty, simplicity, and innate truth of valid and sound logical “syllogisms.” One classic form, the hypothetical syllogism, runs: “If A then B; if B then C; therefore, if A then C.” If ethical arguments can be made this way, or in a more complex but no less valid way, then objectivism is not as unreachable as some might think. Relativists often shy away from objectivism because they don’t want to seem ethnocentric, or guilty of professing that their own culture is right and therefore the best. For more on the difference between relativism and objectivism, and the case for objectivism as the better of the two ethical philosophies, you can read “So You Think You Can Dance With Me” on morellaty.com.
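The claim that a syllogism’s form is valid can actually be checked mechanically. Here is a minimal sketch (in Python, purely for illustration) that enumerates every possible truth assignment and confirms that “If A then B; if B then C; therefore if A then C” can never have true premises and a false conclusion:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# A form is logically valid when the premises entail the conclusion under
# EVERY possible truth assignment, so we simply enumerate all eight cases.
valid = all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a, b, c in product([True, False], repeat=3)
)
print(valid)  # True: the hypothetical syllogism holds in every possible case
```

This is exactly the “logic is just like math” point: validity is a property of the argument’s shape, not of what A, B, and C happen to stand for.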
“Subjectivism” is the school of thought that everyone has his or her own moral principles, and therefore everyone ought to conform to his or her own sense of what is right and wrong. Since subjectivists perceive morality not as universal but as subject to personal whims, they have a lot in common with moral nihilists.
“Nihilism” is the belief that ethics and morality do not exist; they are illusions akin to the Loch Ness monster or unicorns. Nihilists are not even skeptics, for even skeptics harbor some hope that an answer is out there waiting to be discovered, and that criticizing firmly established and widely believed propositions is the way to discovering that truth. (Socrates was a skeptic.) Nihilists don’t even see the point of talking about ethics since, for them, morals do not exist at all. And yet most nihilists are not sociopaths. They tend to operate within their own ethical parameters, which suggests that an ethical system exists for them, and, since they do not suffer from the delusion that they are morally unique, that such systems must exist for other people too. If you are a moral nihilist, then this class is most definitely not for you… unless you are that rare nihilist with an open mind, simply waiting to be convinced of objectivism’s truth.
Along with the different ethical systems discussed above are the two schools of thought that categorize the way human beings accumulate knowledge and label propositions as either true or false.
“Empiricism” is the idea that, in the pursuit of truth, a proposition is true only if it can be verified through one or more of the five senses. Empiricism is a very science-laden school of thought, and it conjures up mental images of experiments and the scientific method. Empiricists vehemently deny that one can uncover truths through logical introspection alone.
“Rationalism” is the belief that, while truth can most definitely be uncovered through the senses, there is another way to reveal it as well. Rationalists believe that truth can be known through logical introspection alone. This is why rationalists are often dubbed “armchair philosophers”: picture them sitting in an armchair, wrestling over the meaning of life. Saint Anselm of Canterbury was a famous rationalist who believed he could prove the existence of God simply by thinking about God and God’s logically necessary qualities. In summary, Anselm contended…
ANSELM’S ONTOLOGICAL PROOF FOR THE EXISTENCE OF GOD
Premise 1: God is the greatest conceivable being. There is no being I can think of who would be greater than God. In other words, God is perfect.
Premise 2: It is greater to exist than to not exist. A nonexistent entity would not be as perfect as an existent entity.
Conclusion: God must therefore exist.
(As you might have noticed, Anselm’s argument can be read in the spirit of the syllogistic form above: “A->B, B->C, therefore A->C.”)
(However, David Hume, perhaps the most famous empiricist, objected to Anselm’s argument. There are several more objections against Anselm’s rational proof, but Hume’s, presented in his Dialogues Concerning Natural Religion, is perhaps the most devastating. Hume wrote the following…)
HUME’S PROBLEM OF EVIL
P1: Perfection necessarily entails the possession of certain qualities.
P2: These qualities include omniscience, omnipotence, omnibenevolence, and omnipresence.
(In other words, God is all-knowing, God is all-powerful, God is all good, and God is everywhere all the time.)
P3: Evil exists.
Conclusion 1: So God either doesn’t know about the evil, can’t stop the evil, doesn’t care about the evil, or isn’t there when the evil happens.
Conclusion 2: If conclusion one is correct then God cannot be perfect, and therefore does not exist (since Anselm’s conclusion that God exists rests on the premise that God is perfect).
Hume’s argument is based on his sensory observation that evil exists in the world. He stipulated that evil comes in two flavors: natural evil would include circumstances such as erupting volcanoes, earthquakes, and hurricanes; man-made evil would be things like rape, murder, and theft.

Now, Hume’s contentions are open to objection as well. For instance, Hume’s idea that God is all good may not square with religious scripture, which stipulates that God is simply all just (He doles out justice, not goodness). Furthermore, man-made evil may be a necessary evil, in that it allows for the existence of free will. Perhaps God realized that a world in which evil existed was a better world than one in which humans, unable to choose between right and wrong, were akin to mindless, programmed robots.

However, this does not deal with the problem of natural evil. Two-thirds of the victims of the 2004 Asian tsunami were children. About the only retort theists can come up with to explain a god who would allow that many children to die from a natural evil is, “God works in mysterious ways.” Perhaps this ambiguous explanation, not all that comforting to those who lost their children, was not good enough for Hume! Feel free to flex those philosophical brain muscles and “discuss” the empiricist-versus-rationalist arguments for God’s existence. If you would like to read more about this subject, feel free to explore the articles “The Incompatibility of God’s Omniscience with the Notion of Free Will” and “Are You There God? It’s Me, Narcissus” on morellaty.com.
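Hume’s argument, like Anselm’s, can be put into a form a machine can check. The sketch below is an illustrative simplification that collapses the four “omni” qualities into a single proposition; it verifies that whenever all three premises hold, “God is perfect” must be false:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Simplified premises of the problem of evil:
#  P1: If God is perfect, God has all four "omni" qualities.
#  P2: If God has all four qualities, no evil exists.
#  P3: Evil exists.
# We confirm there is NO truth assignment where all premises hold and God is perfect.
entailed = True
for perfect, omnis, evil in product([True, False], repeat=3):
    p1 = implies(perfect, omnis)
    p2 = implies(omnis, not evil)
    p3 = evil
    if p1 and p2 and p3 and perfect:
        entailed = False
print(entailed)  # True: the premises jointly entail that God is not perfect
```

Of course, the philosophical action is in whether the premises themselves are true (the objections above attack P2, not the logic), which the machine cannot settle.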
Along with the varying theories regarding truth perception are the two divergent ethical systems that categorize different objectivism-based philosophies. After all, if we are going to try to discover a system of ethics that everyone in the world should abide by, we have to start somewhere!
-Teleological Ethical Systems-
(also known as “Consequentialism”)
“Teleology” is the subject heading under which all consequentialist ethical systems fall: systems that analyze the possible outcomes of a given action in order to determine whether the action is a moral obligation. In other words, when you are trying to figure out whether “action x” is the right thing to do, you should first attempt to predict what sorts of consequences “action x” might cause. If the possible repercussions are favorable, then you should engage in the action. If the consequences are devastating or unwanted, then you should refrain.
“Egoism” is a consequentialist ethical mandate, made famous by Ayn Rand and others, which stipulates that a person ought only to do the actions that are in that person’s best interest. There are in fact two types of egoism. “Psychological egoism” is the theory that people cannot help but do whatever is in their own best interest; it is an observation based on human behavior and the (arguably “soft”) science of psychology. “Ethical egoism” is the theory that people ought to do whatever is in their own best interest: given the truth of psychological egoism, since we cannot help but act in our own best interest, we ought to conform to that uncontrollable human behavior. Objections against this theory include, but are not limited to, the logical quandary that one can be an ethical egoist only as the sole ethical egoist in a large peer group. If I am an egoist, and you are an egoist, and I need you to do what is in my best interest but not yours, you will not do it, and in fact, by egoism’s own lights, you should not.

(Please note: Ayn Rand, author of The Fountainhead and Atlas Shrugged, offers a theory of “Objectivism” that varies greatly from the one above. Objectivism was firmly established in the philosophical world before Rand “borrowed” the term to correlate with her theories. Why she didn’t simply choose a term that hadn’t already been claimed, or coin a new word altogether, is beyond my scope of understanding. However, this fact represents a good reason why you should never use Wikipedia as a study guide: Wikipedia usually defines the most popular version of a technical or academic term, and it may therefore fail to provide the particular definition you are looking for.)
Rooted in ancient practice, including the worship of the Greek god Dionysus, and later formalized by the philosopher Jeremy Bentham, hedonism maintains that what is right is whatever brings you and others the most pleasure, and what is wrong is whatever brings you and others pain. So our life’s goal should be pursuing pleasure and avoiding pain.
John Stuart Mill, a student and contemporary of Bentham, tweaked Bentham’s theory based on several objections and problems he had with it. First, he maintained that hedonism was something of a “pig philosophy.” After all, any animal with a nervous system can experience pleasure. A dog scratching his hindquarters feels pleasure when he does it, but a dog scratching himself is not doing anything morally righteous. Mill claimed that, under Bentham’s theory, it would be better to be a pleasure-loving pig than a discontented human being. More to the point, human beings seem to be exclusively capable of experiencing a higher form of pleasure: happiness. Happiness, according to Mill, is an intrinsic good, whereas pleasure is merely an instrumental good. Instrumental goods are goods that help us achieve the goal of reaching an intrinsic good; intrinsic goods are good in and of themselves, innately good, or good for their own sake. For example, some people view education as an instrumental good, something you use so that you can one day land a good job. But others view education as an intrinsic good: even if you never use what you’ve learned, you are a “better” and more enlightened person for learning it.

Mill conceived of utilitarianism as a way to produce the greatest amount of happiness for the greatest number of people. When fretting over possibly moral versus immoral actions, one should first determine which action will produce the most happiness for all involved. For example, imagine that a group of hikers is trapped in a cave with a narrow entrance, in which a fat man has become wedged. In order to save their own lives, they must weigh the repercussions of blowing him up, by strapping him with explosives, against allowing themselves all to succumb to starvation and death.
By employing the merits of utilitarianism, the hikers reason that more people will achieve happiness if they do indeed choose to blow up the unfortunately stuck fat man. Yes, the obese man’s family, and perhaps his local Taco Bell and Burger King restaurants, will be notably upset by his demise, but much more unhappiness would be produced if the large group of trapped hikers died by withholding the fat man’s explosive exit from the world. The one glaring problem with utilitarianism is that it necessarily entails fallible human beings attempting to predict the future. After all, no one is psychic. Perhaps after a day or two, the fat man would have lost enough weight to pull himself free from his rocky confines, freeing all of the hikers to join him for a celebratory feast at Burger King. In all the years that I have used this example to explain utilitarianism to my students, not one of them thought of the fat man’s eventual weight loss as a solution. Again, this is because human beings are limited by fallibility and by an inability to think creatively, especially in circumstances susceptible to panic-induced thinking and frantic desperation. (However, the lack of creative problem-solving could also be due to the fact that I have explained the “fat man stuck in a cave” scenario in many different ways; I think one of them involved everyone in the cave drowning rather than starving.)
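The hikers’ reasoning is, at bottom, a simple maximization over summed consequences. Here is a toy sketch of that calculus; the option names and happiness numbers are pure inventions for illustration, since utilitarianism only requires comparing aggregate totals:

```python
# A toy "felicific calculus" for the cave scenario. Each option maps the
# parties affected to an invented happiness score (positive) or unhappiness
# score (negative); the utilitarian choice is whichever total is highest.
def best_action(options):
    """Return the option whose summed happiness across everyone affected is greatest."""
    return max(options, key=lambda name: sum(options[name].values()))

options = {
    "blow up the stuck man": {"hikers": +30, "stuck man": -10, "his family": -8},
    "wait and starve":       {"hikers": -30, "stuck man": -10, "his family": -2},
}
print(best_action(options))  # prints "blow up the stuck man"
```

Notice that the calculation is only as good as the predicted scores, which is exactly the objection raised above: fallible humans must guess the future numbers.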
-Deontology and one of its subscribers, Kantian Morality-
An ethical theory that arguably qualifies as a “deontological system,” one that is rule-based and relies on fulfilling one’s moral duty rather than predicting consequences, Kantian morality was conceived by Immanuel Kant. He reasoned that logic, via an almost “mathematical” procedure, could show us which moral mandates to follow. Kant claimed that by “universalizing the maxim,” that is, by applying the questioned moral rule to everyone, everywhere, all the time, a person could discover whether he or she should act on the maxim, so long as doing so produced neither chaos nor contradiction. For example, if I want to know whether I should help a little old lady cross the street, I must reason: what if everyone always helped little old ladies across the street when they needed help? Would there be chaos? Would I be left with a contradiction in terms? Since the answer to both questions is “no,” I should help the little old lady cross the street. Conversely, if I promise to help someone move and then wonder whether I should keep that promise, I must universalize the maxim of promise-breaking. If everyone, everywhere broke promises whenever they saw fit, there might be chaos, but more to the point, there would be a contradiction in the word “promise.” The word would cease to hold any meaning, since the only reason “promise” carries any weight is that it holds power over a person to keep his or her word. Therefore, I ought not break my promise, and I should help my friend move. So, in summary: universalize the maxim. No chaos and no contradiction? Do it. Chaos or contradiction? Don’t.
Put into the public vernacular by Thomas Hobbes, and the second and final deontological system we will discuss, contractarianism is the idea that citizens trade freedoms for protections in order to ensure their own safety and that of the public, and that when creating moral mandates or, more specifically, laws that will affect people, the citizens of a given populace are entrusted to be “blind to their own traits” (a device that John Rawls would later formalize as the “veil of ignorance”). Hobbes dubbed the time long ago, before moral mandates and laws existed, “the state of nature,” in which he claimed life was “nasty, brutish, and short.” While ultimate freedom sounds like a good thing, the freedom to kill whomever you want and steal whatever you want without reprisal is actually very scary. Out of fear that nothing was keeping people from exercising their ultimate freedoms, citizens willingly exchanged those freedoms for the protection of knowing that they could not be robbed or murdered without some sort of egregious harm befalling the robber or the murderer. This guarantee was a good way of deterring those who would happily use their ultimate freedoms for personal gain. But in order to ensure that laws and moral mandates are applied fairly, one must be “blind” to one’s own traits, so that if something unfortunate happens, the individual citizen will be protected. For example, when making a law to protect my life, I must pretend that I do not know I am white, educated, young, able-bodied, a woman, and so on. That way, if something unfortunate disrupts my current state of being, I will be protected nonetheless. If I fall off a building tomorrow and become disabled, I have already set up a law that protects me and guarantees my future survival.
Now, granted, chances are pretty good that I will not spontaneously “change races” tomorrow; however, it is important that I protect the rights of those of a different race than mine, since I have to interact with people of different races on a daily basis. As the civil rights movement and the suffrage movement showed, races and sexes that are not treated equally and fairly can make life very difficult for those who are not marginalized by society. So, according to contractarianism, the moral mandates we ought to follow are those in which freedoms are exchanged for protections while being blind to one’s own traits.
And that about does it for the immediately necessary terminology you will need to know, and perhaps utilize later, when you learn about the different controversial issues that plague contemporary society. (Other terms will follow.) It might behoove you to reflect upon which objectivist system you personally operate by; then you can use that system to analyze the moral merits of each ethical issue as we come to it. Also remember, your first exam will include questions on these terms, so it is a good idea to simplify the list in your mind (dumb everything down, like mental CliffsNotes) and commit it to memory.
• Ethical Equations….
Those who chase after an objective, normative ethical system often make the mistake of finding common ground where there is none. In other words, pursuing an ethical endeavor for the first time tends to lead to faulty intellectual connections. This happens because it is much easier to replace an incomprehensible concept with one you already understand than to actually take the time to learn the definition of the unknown term.
Imagine you are on a date in a fancy foreign restaurant. You ask your date if he or she wouldn’t mind summarizing a delicious-sounding, but unpronounceable dish, and your request is met with the answer, “It’s kind of like an Italian chicken dish.” The truth is, your date truly believes he or she is correct, that he or she has simply replaced the confusing entree name with a simpler one, but when your meal comes, you discover that what lies before you is far from “an Italian chicken dish.” While the reality of this metaphor would cause nothing more than an unexpected dining experience and perhaps a thinner wallet, the intellectualized application of this concept produces stymied social and moral progress.
Many people are guilty of equating “is” with “ought,” running afoul of what the famous, previously mentioned philosopher David Hume called the “is/ought distinction.” In response to inquiries regarding human motivations for unethical and violent behavior, my students have in the past held a bit too much reverence for the Darwinian concept of “survival of the fittest” (a phrase actually coined by Herbert Spencer). In other words, many people seem to believe that if ‘Person x’ CAN do ‘Action y,’ then ‘Person x’ SHOULD do ‘Action y.’ But as Hume distinguished, just because something IS the case doesn’t mean it OUGHT to be the case. So even though many men (or women) on this planet are in a physical, professional, or political position powerful enough to hold sway over smaller, weaker, or marginalized individuals, that does not give them the right to use Darwin’s observations to justify bullying or intimidating those individuals for their own self-interest. After all, every night, while my 190-pound, 6′2″ husband sleeps, I am in a position to kill him. And no, this is not something I fantasize about… often. No matter how big and strong a person is, no one is capable of self-defense while unconscious. But simply because I CAN quickly and easily (and admittedly, more efficiently than would be considered normal for a wife) murder a sleeping person does not mean I SHOULD.
Furthermore, it is important to point out that the law is not always synonymous with what is right. Legality is not necessarily equal to morality. As a matter of fact, the law is always playing “catch-up” with morality. Whenever a given law is established, it is because a person, or group of people, realized the moral merit of a given concept FIRST, and then implemented a rule to apply the concept, with attached reprisals in case of nonconformity. Case in point: slavery was always immoral, but many generations went by before anti-slavery laws were put into effect. Likewise, women have always morally possessed the right to express their political ideas, but it took well over 100 years before this right was legally recognized (in the U.S.). Currently, the federal government legally defines marriage as being between a man and a woman (DOMA), but if this ever changes, I highly doubt people will claim, “Okay, NOW gay marriage is morally acceptable, because it’s legal. But it certainly wasn’t before.”
The law acts as an authoritative source, one we are conditioned to obey without question and to accept as operating with good, sound, and justified reason. However, this blind acceptance and obedience can be harmful. In ancient Greece, Plato wrote about the dialogues of Socrates. In one such account (the Euthyphro), Socrates discusses the theoretical morality of the gods with a young interlocutor. He asks the young man whether certain actions are pious (moral) because the gods say they are pious, or whether certain actions are innately pious, and this is why the gods want us to perform them. If the former is true, then the gods are arbitrary, randomly picking and choosing various actions at their leisure, with no reason for dubbing each action pious or impious. But if the latter is correct, then the gods are unnecessary. After all, if the gods simply want us to act piously through the performance of good acts, then we don’t really need them to tell us which actions are good and which are bad. We can simply find the common trait that all good actions share (i.e., discover the quality that makes them good), and this will allow us to categorize actions as good or bad, as actions we should either do or avoid. Either way, we have no use for the gods (or even a single, omnipotent God), since an arbitrary god is one we should neither follow nor worship (an arbitrary nature being unjust, fickle, and unwise), and an unnecessary god is, well… unnecessary. Socrates’ dialogue describes a very astute truism as it pertains to the law and morality. Just as morality and legality do not necessarily go hand in hand, morality and religious mandates are not always synonymous either. While it is certainly possible for a religious command to be morally righteous, the mere fact that it is part of “holy” scripture does not automatically make it morally obligatory. For proof of this, check out the Bible’s book of Deuteronomy. It is just chock-full of surprising authorizations that contemporary society would never agree are morally right.
This difference between morality and authority-derived declarations harkens back to most of our teenage experiences, in which we were too young to be fully self-sufficient, yet old enough to listen to and understand reason. If you were like me, then your parents’ demand that you abide by an unreasonably early curfew stretched far beyond the boundaries of childhood. I remember being 18, 19, even 20 and 21, and my mother still forced me to be home by midnight whenever I came home from college for the summer. I remember asking her why, now that I was an adult, I should have to abide by this curfew. To which she replied, as mothers often do, “Because I’m your mother, and I say so.” However, as an intelligent now-adult, I felt that I deserved more of an explanation than that. The automatic acceptance of the much-loathed “because-I-say-so” response is equivalent to obeying an authority-derived mandate. Upon my pressing her, my mother finally admitted that she wanted me home by midnight because she would stay up worrying until I was home safe in bed, and she wanted to go to sleep by midnight instead of two or three in the morning. (Of course, I told her to stop worrying and just go to sleep, dammit.) The transition from abiding by an authoritative command to demanding a reasonable justification is equivalent to the educative growth process that all students must go through in order to become rational, reasonable, and logical human beings.
People often start out deferential, accustomed to doing what they’re told by people in superior positions, but then, when they are introduced to the merits of reasonable justification, they begin to see and act in the world very differently. They begin to question and criticize and become skeptical of the world around them. And this is a good thing. A very good thing. “Doing what’s normal” and accepting the world as it is involves an “everything-is-fine-so-why-change-it?” attitude, and it can actually be quite dangerous. It promotes complacency and ignorance, and it leaves no room for change, growth, innovation, invention, improvement, evolution, development, progress, or equality. Martin Luther King, Jr. questioned his world, and the end result was an improvement in the archaic status accorded to African-American citizens. Albert Einstein questioned the realm of physics, and it resulted in a massive paradigm shift with regard to the way the world viewed the universe. Jessica Valenti questioned the value of female virginity. Gloria Steinem questioned the limited designated roles of women. Eleanor Roosevelt questioned the role of the First Lady. Barack Obama questioned the morality of the American healthcare system. Richard Dawkins questioned religion. Peter Singer questioned our treatment of animals and our reluctance to contribute to charity. Gene Robinson questioned the “link” between the church and homophobia. And so on and so forth. With little room for disagreement, all of these questions led to good things. As you begin to read the articles on this site, hopefully you will question the world around you, too.
You should be skeptical of many things that Americans are encouraged to swallow without question. For example, comparative statistics that are, on the whole, widely accepted should be first in line for your critical scrutiny. These statistics are often used to change human actions and perceptions. To discuss just a few of them, the supposedly valid statistic that you are more likely to die in a car accident than in a plane crash is one of my particular favorites. This comparison, taken by itself, should not make you more comfortable when traveling by plane. One reason more people die in car crashes than in plane crashes is simply that people drive in their cars far more often than they fly in planes. There’s also the little “gem” which encourages us not to fear sharks, since far more people die from bee stings than from shark attacks. The reason we ought to be critical of this statistic is that human beings spend the vast majority of their time on land and not in the ocean, and bees live on the land while sharks live in the ocean. If human beings lived in the ocean, this faulty statistic would indicate the opposite. The final statistic up for discussion, and one that particularly cracks me up, comes from a commercial for the drug Valtrex. The commercial’s narrator states that two-thirds of all patients with herpes contracted the disease from partners who showed no signs or symptoms of an outbreak. If you stop to think about it, you must ask, “Where did doctors get this information?” The only way physicians could have known whether or not a herpes patient contracted the disease from a partner who showed no signs or symptoms would be from the patients themselves. And who in their right mind is going to confess to their doctor that the person they slept with was covered in herpes sores but they decided, “What the heck? Let’s have sex anyway”? So this statistic is most likely based on patients lying to their physicians in order to save face and avoid embarrassment.
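The car-versus-plane point above is, at bottom, about exposure-adjusted rates: raw death counts mean little until you divide by how much each activity is actually done. Here is a minimal sketch of that correction. All the numbers are invented purely for illustration; they are not real transportation statistics.

```python
# Hypothetical sketch: why raw death counts mislead without exposure data.
# Every figure below is made up for illustration only.

def deaths_per_billion_miles(deaths: float, miles_billions: float) -> float:
    """Exposure-adjusted fatality rate: deaths per billion miles traveled."""
    return deaths / miles_billions

# Invented annual figures for a fictional country:
car_deaths, car_miles = 30_000, 3_000      # 3,000 billion vehicle-miles driven
plane_deaths, plane_miles = 300, 600       # 600 billion passenger-miles flown

car_rate = deaths_per_billion_miles(car_deaths, car_miles)       # 10.0
plane_rate = deaths_per_billion_miles(plane_deaths, plane_miles)  # 0.5

# The raw counts say cars kill 100x more people, but that alone tells you
# nothing about relative danger; only the per-mile rates do.
print(f"cars: {car_rate} deaths per billion miles")
print(f"planes: {plane_rate} deaths per billion miles")
```

The same normalization exposes the bee-versus-shark comparison: divide deaths by hours spent near bees versus hours spent in shark-inhabited water, and the "comforting" raw counts stop doing the rhetorical work.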
It is particularly important to question professed facts and figures during political campaigns. One must ask: where did the candidate get this information, and whom were they asking? For example, Mitt Romney’s assertion that 47% of Americans do not pay taxes created a flurry of fact-checkers attempting to verify his statement. According to New York Times economists, the assertion is, narrowly speaking, true of federal income taxes. The problem is that it does nothing to validate Romney’s larger point that half of Americans are lazy and do not pay taxes because they think they can get away with a free ride on the backs of tax-compliant citizens. The reason that 47% of Americans do not pay federal income taxes is that they are too poor, are retired, or are simply taking advantage of the tax cuts and breaks set up by a previous Republican administration. In fact, a very small percentage of Americans neglect to pay taxes out of a desire to “stick it” to the government. (If you would like to read more about the other morally significant problems associated with Romney’s Mother Jones-recorded statement, you can read “Nitwit Mitt,” on morellaty.com.)
Hopefully this reading has lit a fire under you, and you will become a vigilant observer and critic of your own particular paradigmatic situation. But most of all, I hope it encourages you to label actions as “right or wrong” based on logical introspection and rational reflection, and not because it is the law, your religion tells you so, or because you “read it somewhere on the Internet.” And this leads me to my final point for this section of reading. Not everything you read is true… especially if you read it on the Internet… or on Wikipedia. I implore you to learn the difference between credible research and publications put out by vanity presses or posted on what I like to call “vending machine” websites. Technology is both a blessing and a curse. We are now able to easily “Google” any curiosity, and out pop thousands of answers. This can be extremely informative, if you know where to look for the right answer instead of just some guy’s opinion. If you don’t believe me, try typing the word “holocaust” into any search engine. You will find plenty of valid, factually sound research on the subject, but you will also inevitably find many, many websites that assert the Holocaust never actually happened. So if it is your goal to validate the opinion that the Holocaust never occurred and was a fabricated illusion constructed by Jews to elicit sympathy, you will find an assortment of “publications” that support your opinion, but none of them will be true. In other words, you stick in your dollar, and you get what you want.
Learning to tell the difference between valid, sound research and fabricated data is a common weakness among new college students. One helpful hint is to look for research conducted by prestigious, vetted organizations rather than by a small group of people, a group with a specific agenda, or one single person. Also, if you’re looking for research pertaining to a medical phenomenon, you should try to find it in a reputable publication like the New England Journal of Medicine or Scientific American, or on the American Medical Association’s website. Likewise, if you need research pertaining to psychology, you should stick with a website hosted by the American Psychiatric Association or the online version of the DSM-IV. (Or your country’s equivalent of these, of course.) For example, when we study the ethics of abortion, in order to determine when a fetus is capable of feeling pain, the proper research would come from the American Medical Association or perhaps the Royal College of Obstetricians and Gynaecologists… and not (read: never!!) from a website hosted by a pro-life organization that plasters up cute pictures of newborn babies, or even from a pro-choice organization. This should just make logical sense; however, it is information of which many college students are wholly ignorant. At any rate, I hope it helps you with any future research, and I hope this section of reading has been useful toward our ultimate philosophic pursuit of objective truth as well.
• “Proving a Negative….”
Sometimes, when debating those who disagree with you (or you yourself may find you’re guilty of this logical faux pas), you may find yourself in the awkward predicament of being asked to “prove a negative.” This happens in a variety of ways. You may be asked, or you may ask someone else, the following…
-”Well, you can’t prove God DOESN’T exist either!”
-”You can say abortion doesn’t cause infertility, but can you prove it’s safe?”
-”How do you know suicide/euthanasia isn’t a sin?”
When this happens, simply alert the offender that he or she is guilty of demanding that you “prove a negative,” a fallacious shifting of the burden of proof. And then show him or her the ridiculousness of the request by asking the offender to “prove” the following “negatives.”
-”Can you prove that Saturn isn’t occasionally orbited by a microscopic bowl of macaroni and cheese?”
-”Can you prove that 75 million years ago Lord Xenu didn’t come to Earth to release human souls into volcanoes?” (FYI: This is what Scientologists actually believe! Try watching a Tom Cruise movie now! I totally can’t take him seriously any more… and keep picturing him in a padded room while he’s kicking ass on screen. Plus, he’s five-foot-six, and I could kick his ass, thus greatly diminishing his credibility as a spy. While we’re on the subject, I can’t watch the movie “300” either. Think about it. Back then, the average man was like five feet tall. (So the man-to-elephant ratio in that movie was actually surprisingly accurate.) It’s tough to take the brutality seriously when you picture such little people doing it. If one of them attacked you, you could just extend your arm and hold their forehead as they fruitlessly swiped at the air.) But I digress… You could also, equivalently, ask someone to prove the negative of…
-”Can you prove unicorns, dragons, and leprechauns DON’T exist?”
In other words, anyone who believes in the merit of “proving a negative” must also logically and consistently believe unicorns are real. Ah…. Mental ass-kicking is satisfying. Take that, Tom Cruise. He-ya!
There are many, many other logical fallacies in existence. Perhaps this is a testament to our lack of intelligence as a species, or to the fact that the ability to argue well, or even “be logical,” is a rare talent. More accurate, however, is the observation that, as a whole, human beings are not as “inherently logical” as we claim to be. Maybe that’s why the dude who invented the show “Star Trek” (Yes, I am glad I am not nerdy enough to know his name… but am plenty nerdy enough in other ways) had to invent an entire non-human race of people (the Vulcans) who are logical to their very core. For a complete list of logical fallacies, follow the link==> http://www.philosophicalsociety.com/logical%20fallacies.htm.
• Rights and Relationships…..
As you will quickly discover while on your journey to intellectual betterment, there are many words in the English language that we use without truly being aware of their meaning. The word “right” is one of those words: not in the sense of “right versus wrong,” but in the sense of “the right to free speech,” as in, “When one has a right, what does that mean?”
A right is something you possess inherently, usually by virtue of certain characteristics. It cannot be “bestowed” or taken away, ever, no matter what you do. (Yes, this means that, if life is a right, even if you kill someone in self-defense, you are violating their rights! However, this could also be argued as a case of moral hierarchy: violating one right (or obligation) to preserve another, like if I lie to a spouse-abuser to save his wife’s life. Read on for clarification.) Rights are as innate to us as our own unique DNA, though we do not attain them through our genetic makeup. A right is like an entitlement. It is something that we are owed by another. Even though others can physically, as opposed to morally, infringe upon our rights, when they do so, they are inherently doing something wrong, hence the term “violating our rights.” Having a right necessarily entails that someone else has an obligation. It’s a crucial relationship.
An obligation is like a debt that you owe someone else. We are obliged to fulfill or abide by our obligations to others. For example, if you have the right to live, then I am bound by the obligation not to kill you against your will. We have various legal and moral obligations. Sometimes they coincide, and sometimes they don’t. For example, it may very well be the case that very wealthy people are morally obligated to give to charity; if they do not, they aren’t doing anything illegal, but it may simply make them rotten people.
Sometimes we think we have a right, but what we actually have is a privilege. A privilege is sort of like a tentative right, or a right that is contingent upon behavior… Of course, then, this would not be a right at all, but hopefully you get the point. Privileges must be earned first, and then continually maintained in order to keep them. For example, driving is not a right. No one is obliged to allow you to drive a car. However, if you pass the driver’s exam and abide by the rules of the road, then you may drive, and others should let you drive based on the fact that you met the specified qualifications for driving. (What you do have a right to is equal treatment and protection from arbitrary or biased whims. In other words, if you drive just as well as, if not better than, another licensed driver, then you are entitled to receive your license too, not because you have a right to drive, but because you have the right to be treated fairly. This right is particularly frustrating for victims of sexist or racist constructions who, just because they are women or have a different skin color, are forbidden from receiving the same treatment as others. This will become particularly evident when we broach the subject of sexist military practices.) But if you, say, drive drunk, then your privilege to drive may be revoked, because you failed to maintain your privilege-granted “tentative right.” For more information on whether an assumed right, the right to live, is actually a privilege, please read “Not A Right at All,” on morellaty.com.
Options are a little like privileges, except that you do not have to earn and maintain them in order to have them. An option is an action or state of being that you may choose to exercise, but you certainly do not have to. For example, giving to charity is an action that, while certainly morally commendable, you are not legally “forced” or obliged to complete. Giving to charity is therefore a legal option for all, and a moral option for most.
Duties are a lot like obligations, except the moral necessity to follow them is not as strong. Duties are often related to occupation. They can be self-imposed and are often idealistic in nature. Sometimes our duties can conflict with our obligations. For example, my duty to be a good mother to my daughter may conflict with my obligations towards others. If a psychopath forced me to choose between the ongoing care of my child and the life of someone else, I would probably violate my obligation, and thus another’s right, in favor of my preferred duty. Soldiers violate this obligation all the time, choosing their duty to be good, obedient soldiers over the obligation not to harm others. Hence, duties are often conceptually idealistic and, unfortunately, too often take precedence over our more important, morally necessary obligations.
It may very well be the case that understanding and interpreting these concepts is all that separates us from non-human animals. Please refer to the article, “What Separates Us,” on morellaty.com in order to explore this idea more fully. Now that you understand the specific definitions that correspond to each term, you will begin to discover that people who really ought to know the difference between these terms misuse them all the time!
Politicians are especially good at misusing the word “right” when what they really mean is “privilege,” and vice versa. Furthermore, due to each term’s specific, unwavering definition, many illogical ideals commonly, and even unquestioningly, coexist. For example, it is logically impossible to be a proponent of the death penalty and at the same time maintain that life is a right that we all inherently have. The only way the death penalty can exist without being egregiously unethical is if life is not a right at all, but a privilege. Unfortunately, the same pro-death penalty individuals, who are logically and necessarily bound to the idea that life is a privilege, also often believe that the right to live begins at conception. This is an incompatible worldview, in desperate need of philosophical introspection and of public examination, criticism, and perhaps even outrage. After all, there is almost nothing more frustrating than a person in power, like an elected official, with so much to be grateful for and holding so much sway over others’ lives, “talking out of his ass.” If politicians should be well-schooled in one thing, and one thing only, it’s rights theory! There is something ominously terrifying and dangerous about a Senator, governor, or judge who possesses contradictory conceptions of human rights.
So if you happen to be an individual who believes that life is, at the same time, a privilege in some circumstances but a right in others, you may need to come to terms with the fact that educated people will see you as being irrational and illogical. And how important is it to you to be viewed as a logical person instead of an obtuse, recalcitrant, and ignorant individual? (At this point you may be reflecting on the people in your life who meet these qualifications. God love you for being patient with them! You are a better human being than I!)