The Logic Book: Critical Thinking

Chapter 7: Informal Logical Fallacies[1]

I. Logical Fallacies: Formal and Informal

Generally and crudely speaking, a logical fallacy is just a bad argument. Bad, that is, in the logical sense of being incorrect—not bad in the sense of being ineffective or unpersuasive. Alas, many fallacies are quite effective in persuading people; that is why they’re so common. Often, they’re not used mistakenly, but intentionally—to fool people, to get them to believe things that maybe they shouldn’t. The goal of this chapter is to develop the ability to recognize these bad arguments for what they are so as not to be persuaded by them.

There are formal and informal logical fallacies. The formal fallacies are simple: they’re just invalid deductive arguments. Consider the following:

If the Democrats retake Congress, then taxes will go up.

But the Democrats won’t retake Congress.

Therefore, taxes won’t go up.

This argument is invalid. It’s got an invalid form: If A then B; not A; therefore, not B. Any argument of this form is fallacious, an instance of “Denying the Antecedent.” We can leave it as an exercise for the reader to fill in propositions for A and B to get true premises and a false conclusion. Intuitively, it’s possible for that to happen: maybe a Republican Congress will raise taxes.
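We can also check the form mechanically rather than hunting for propositions by hand. The following sketch (in Python, purely for illustration; the names are ours, not the book's) brute-forces the truth table for Denying the Antecedent and prints any row where both premises are true but the conclusion is false:

```python
from itertools import product

def implies(a, b):
    # The conditional "if a then b" is false only when a is true and b is false.
    return (not a) or b

# Form: If A then B; not A; therefore, not B.
for A, B in product([True, False], repeat=2):
    premises_true = implies(A, B) and (not A)
    conclusion = not B
    if premises_true and not conclusion:
        print(f"Counterexample: A={A}, B={B} (premises true, conclusion false)")
```

The row it finds (A false, B true) is exactly the intuitive case above: the Democrats don’t retake Congress, yet taxes go up anyway.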

Our concern in this chapter is not with formal fallacies—deductive arguments that are bad because they have a bad form—but with informal fallacies. These arguments are bad, roughly, because of their content, their context, and/or their mode of delivery. Since we can’t judge inductive arguments based on their form, we need to learn a variety of ways in which inductive arguments fail. In other words, any time an inductive argument is weak instead of strong, that argument has committed a fallacy; fallacies are names given to common types of problematic reasoning.

There are a lot of different ways of defining and characterizing informal fallacies, and a lot of different ways of organizing them into groups. Since Aristotle first did it in his Sophistical Refutations, authors of logic books have been defining and classifying the informal fallacies in various ways. These remarks are offered as a kind of disclaimer: the reader is warned that the particular presentation of the fallacies in this chapter will be unique and will disagree in various ways with other presentations, reflecting as it must the author’s own idiosyncratic interests, understanding, and estimation of what is important. This is as it should be and always is. The interested reader is encouraged to consult alternative sources for further edification.

II. Fallacies of Relevance

We will discuss six informal fallacies under this heading. What they all have in common is that they involve arguing in such a way that the issue that’s supposed to be under discussion is somehow sidestepped, avoided, or ignored. These fallacies are called “Fallacies of Relevance” because they involve arguments that are bad insofar as the reasons given are logically irrelevant to the issue at hand. People who use these techniques with malicious intent are attempting to distract their audience from the central questions they’re supposed to be addressing, allowing them to appear to win an argument that they haven’t really engaged in.

Appeal to Emotion Fallacies

There are as many different appeal to emotion fallacies as there are human emotions. Fear is perhaps the emotion most commonly exploited by politicians. Political ads inevitably try to suggest to voters that one’s opponent will take away medical care or leave us vulnerable to terrorists, or some other scary outcome—usually without a whole lot in the way of substantive proof that these fears are at all reasonable. Commercial advertisers do it, too. Think of all the ads with sexy models shilling for cars or beers or whatever. What does sexiness have to do with how good a beer tastes? Nothing. The ads are trying to engage your emotions to get you thinking positively about their product.

Any time an arguer is trying to get you to accept their conclusion (buy their product, vote for their candidate) based on an emotion rather than on logic, an appeal to emotion fallacy has been committed. We usually name them according to which emotion the arguer is trying to stir up in the audience. We’ll cover three appeals to emotion here.

Appeal to Pity

Suppose you’re one of those sleazy personal injury lawyers—an “ambulance chaser.” You’ve got a client who was grocery shopping at Wal-Mart, and in the produce aisle she slipped on a grape that had fallen on the floor and injured herself. On the day of the trial, what do you do? How do you coach your client? Tell her to wear her nicest outfit, to look her best? Of course not! You wheel her into the courtroom in a wheelchair (whether she needs it or not); you put one of those foam neck braces on her, maybe give her an eye patch for good measure. You tell her to periodically emit moans of pain. When you’re summing up your case before the jury, you spend most of your time talking about the horrible suffering your client has undergone since the incident in the produce aisle: the hospital stays, the grueling physical therapy, the addiction to pain medications, etc., etc.

All of this is a classic fallacious appeal to emotion—specifically, in this case, pity. The people you’re trying to convince are the jurors. The conclusion you have to convince them of, presumably, is that Wal-Mart was negligent and hence legally liable in the matter of the grape on the floor. The details don’t matter, but there are specific conditions that have to be met—established by a preponderance of the evidence—in order for the jury to find Wal-Mart liable. But you’re not addressing those (probably because you can’t). Instead, you’re trying to distract the jury from the real issue by playing to their emotions. You’re trying to get them feeling sorry for your client, in the hopes that those emotions will cause them to bring in the verdict you want.

All appeal to pity fallacies work in the same way. Instead of presenting logical reasons to accept the conclusion, appeals to pity try to stir up your emotions of sympathy, feeling sorry for someone. Arguers using this technique are hoping you’ll make a decision with your heart, rather than your head.

Appeal to Force

Perhaps the least subtle of the fallacies is the appeal to force, in which you attempt to convince your interlocutor to believe something by threatening him. Threats pretty clearly distract one from the business of dispassionately appraising premises’ support for conclusions. It’s an appeal to emotion fallacy, as the arguer is trying to generate a sense of fear in the audience, with the hope that they’ll act out of fear. Like the appeal to pity, the appeal to force fallacy is often used when there is no good logical evidence for the conclusion, so the arguer tries to force the audience to accept the conclusion anyway.

There are many examples of this technique throughout history. In totalitarian regimes, there are often severe consequences for those who don’t toe the party line (see George Orwell’s 1984 for a vivid, though fictional, depiction of the phenomenon). The Catholic Church used this technique during the infamous Spanish Inquisition: the goal was to get non-believers to accept Christianity; the method was to torture them until they did.

An example from more recent history: in 2001, after 9/11, President George W. Bush gave a televised speech to Congress. In it, he addressed both the American people and nations around the world—in part to convince them to join the war against terror. He said, “Every nation, in every region, now has a decision to make. Either you are with us, or you are with the terrorists. From this day forward, any nation that continues to harbor or support terrorism will be regarded by the United States as a hostile regime.” Either you help us, or you will become our enemy.

The appeal to force is not usually subtle. But there is a very common, very effective debating technique that belongs under this heading, one that is a bit less overt than explicitly threatening someone who fails to share your opinions. It involves the sub-conscious, rather than conscious, perception of a threat.

Here’s what you do: during the course of a debate, make yourself physically imposing; sit up in your chair, move closer to your opponent, use hand gestures, like pointing right in their face; cut them off in the middle of a sentence, shout them down, be angry and combative. If you do these things, you’re likely to make your opponent very uncomfortable—physically and emotionally. They might start sweating a bit; their heart may beat a little faster. They’ll get flustered and maybe trip over their words. They may lose their train of thought; winning points they may have made in the debate will come out wrong or not at all. You’ll look like the more effective debater, and the audience’s perception will be that you made the better argument.

But you didn’t. You came off better because your opponent was uncomfortable. The discomfort was not caused by an actual threat of violence; on a conscious level, they never believed you were going to attack them physically. But you behaved in a way that triggered, at the sub-conscious level, the types of physical/emotional reactions that occur in the presence of an actual physical threat. This is the more subtle version of the appeal to force. It’s very effective and quite common (watch cable news talk shows and you’ll see it).

Argumentum ad Populum[2]

The Latin name of this fallacy literally means “argument to the people.” The emotion ad populum fallacies stir up is the human desire to be liked, loved, respected, admired—in short, to be popular. The suggestion is that there’s a group out there, and you’re not in it; you’re being left out.

Sometimes this group is “everyone,” and this variety of ad populum is called the bandwagon appeal. Bandwagon arguments are an extremely common technique, especially for advertisers. They appeal to people’s underlying desire to fit in, to do what everybody else is doing, not to miss out. The advertisement assures us that a certain television show is #1 in the ratings—with the tacit conclusion being that we should be watching, too. But this is a fallacy. We’ve all known it’s a fallacy since we were little kids, the first time we did something wrong because all of our friends were doing it, too, and our moms asked us, “If all of your friends jumped off a bridge, would you do that too?”

Bandwagon appeals want you to join the crowd; but sometimes, an ad populum instead wants you to join a special, “elite” group of people. These arguments commit the appeal to snobbery variety of ad populum. It’s not that everyone is doing it, whatever “it” is, but it’s only done by people who are somehow better than everyone else. It’s appealing to your vanity, by convincing you that if you buy this product, or watch this television show, you too can be better than the crowd.

For example, consider a commercial for Grey Poupon mustard[3] from the 1980s. It shows a rich man in the back of a car eating an elaborate meal while his chauffeur drives him around. The chauffeur reaches into the glove box and pulls out a jar of this mustard. Another fancy car pulls up next to them, and the man being driven around in that car asks, “Pardon me, would you have any Grey Poupon?” “But of course” is the answer. Not everyone is using this mustard, only the elite.

Counterargument fallacies

There are three types of fallacies in this chapter that we’ll call “counterargument fallacies.” A counterargument is an attempt to respond to or refute someone else’s argument. There are perfectly logical ways to approach someone’s argument when you disagree, and of course, there are perfectly illogical ways as well. It is important to note that any fallacy can appear in a counterargument. These three fallacies, however, only appear in counterarguments. It is thus often relevant to examine the original argument before dissecting the counterargument to see why it fails. Our three counterargument fallacies are called the “straw man,” “ad hominem,” and “red herring.”

Straw Man

This fallacy involves the misrepresentation of an opponent’s viewpoint—an exaggeration or distortion of it that renders it indefensible, something nobody thinking logically would agree with. You make it look as though your opponent is arguing for something absurd, then declare that you don’t agree with his position—except it isn’t really his position, it’s one that you made up. You create your own version of their view, and then defeat the new creation instead of what your opponent actually said. Thus, you merely appear to defeat your opponent: your real opponent doesn’t hold the view you imputed to him; instead, you’ve defeated a distorted version of it, one of your own making, one that is easily dispatched. Instead of taking on the real man, you construct one out of straw, thrash it, and pretend to have achieved victory. It works if your audience doesn’t realize what you’ve done, if they believe that your opponent really holds the crazy view.

Politicians are frequently victims (and practitioners) of this tactic. After his 2005 State of the Union Address, President George W. Bush’s proposals were characterized thus:

George W. Bush's State of the Union Address, masked in talk of “freedom" and “democracy,” was an outline of a brutal agenda of endless war, global empire, and the destruction of what remains of basic social services.[4]

Well, who’s not against “endless war” and “destruction of basic social services”? But of course this characterization is a gross exaggeration of what was actually said in the speech, in which Bush declared that we must “confront regimes that continue to harbor terrorists and pursue weapons of mass murder” and rolled out his proposal for privatization of Social Security accounts. Whatever you think of those actual policies, you need to do more to undermine them than to mis-characterize them as “endless war” and “destruction of social services.” That’s distracting your audience from the real substance of the issues.

In 2009, during the (interminable) debate over President Obama’s healthcare reform bill—the Patient Protection and Affordable Care Act—former vice-presidential candidate Sarah Palin took to Facebook to denounce the bill thus:

The America I know and love is not one in which my parents or my baby with Down Syndrome will have to stand in front of Obama's “death panel” so his bureaucrats can decide, based on a subjective judgment of their “level of productivity in society,” whether they are worthy of health care. Such a system is downright evil.

Yikes! That sounds like the evilest bill in the history of evil! Bureaucrats euthanizing Down Syndrome babies and their grandparents? Holy Cow. ‘Death panel’ and ‘level of productivity in society’ are even in quotes. Did she pull those phrases from the text of the bill?

Of course she didn’t. This is a complete distortion of what’s actually in the bill (the kernel of truth behind the “death panels” thing seems to be a provision in the Act calling for Medicare to fund doctor-patient conversations about end-of-life care); the non-partisan fact-checking outfit Politifact named it their “Lie of the Year” in 2009. Palin is not taking on the bill or the president themselves; she’s confronting a made-up version, defeating it (which is easy, because the made-up bill is evil as heck), and pretending to have won the debate. But this distraction only works if her audience believes her straw man is the real thing. Alas, many did, and the provision funding end-of-life care was taken out of the bill. This is why these fallacies are used so frequently: they often work.

Argumentum ad Hominem

Like ad populum fallacies, everybody always uses the Latin for this one—usually shortened to just ‘ad hominem’, which means ‘at the person.’ You commit this fallacy when, instead of attacking your opponent’s views, you attack your opponent himself.

This fallacy comes in a lot of different forms; there are a lot of different ways to attack a person while ignoring (or downplaying) their actual arguments. To organize things a bit, we’ll divide the various ad hominem attacks into three groups: Abusive, Circumstantial, and Tu Quoque.

1. Abusive

Abusive ad hominem is the more straightforward of the two. The simplest version is simply calling your opponent names instead of debating him. During the 2016 Republican presidential primary, Donald Trump came up with catchy little nicknames for his opponents, which he used just about every time he referred to them. If you pepper your descriptions of your opponent with tendentious, unflattering, politically charged language, you can get a rhetorical leg-up.

Mind you, simply insulting someone is not a logical fallacy. It becomes a fallacy when you try to dismiss their argument altogether, or get other people to dismiss their arguments, based on the insults you’re hurling. Trump used the nicknames in order to try to discredit his opponents so their arguments would have less of an effect.

Another abusive ad hominem attack is “guilt by association.” Here, instead of directly insulting your opponent, you tarnish your opponent by associating them or their views with someone or something that your audience despises. Consider the following:

Former Vice President Dick Cheney was an advocate of a strong version of the so-called Unitary Executive interpretation of the Constitution, according to which the president’s control over the executive branch of government is quite firm and far-reaching. The effect of this is to concentrate a tremendous amount of power in the Chief Executive, such that those powers arguably eclipse those of the supposedly co-equal Legislative and Judicial branches of government. You know who else was in favor of a very strong, powerful Chief Executive? That’s right, Hitler.

We just compared Dick Cheney to Hitler. Ouch. Nobody likes Hitler, so. Not every comparison like this is fallacious, of course. But in this case, where the connection is particularly flimsy, we’re clearly pulling a fast one.[5]

2. Circumstantial

The circumstantial ad hominem fallacy is not as blunt an instrument as its abusive counterpart. It also involves attacking one’s opponent, focusing on some aspect of their person—their circumstances—as the core of the criticism. Specifically, you’re trying to dismiss their argument by claiming they have something to gain. You’re questioning their motives for making the argument in the first place.

To see what we’re talking about, consider this argument:

A recent study from scientists at the University of Minnesota claims to show that glyphosate—the main active ingredient in the widely used herbicide Roundup—is safe for humans to use. But guess whose business school just got a huge donation from Monsanto, the company that produces Roundup? That’s right, the University of Minnesota. Ever hear of conflict of interest? This study is junk, just like the product it’s defending.

This is a fallacy. It doesn’t follow from the fact that the University received a grant from Monsanto that scientists working at that school faked the results of a study. The fact of the grant does raise a red flag; there may be some conflict of interest at play. But raising the possibility of a conflict is not enough, on its own, to show that the study in question can be dismissed out of hand. It may be appropriate to subject it to heightened scrutiny, but we cannot shirk our duty to assess its arguments on their merits.

Most people argue for conclusions they like and that might benefit them in some way; whatever benefit they personally stand to gain does not determine how good—or how bad—their actual argument is.

3. Tu quoque

This type of ad hominem involves pointing out one’s opponent’s hypocrisy. Its Latin name, “tu quoque,” translates roughly as “you, too.” This is the “I know you are but what am I?” and “look who’s talking” fallacy. It’s a technique used in very specific circumstances: your opponent accuses you of doing or advocating something that’s wrong, and, instead of making an argument to defend the rightness of your actions, you simply throw the accusation back in your opponent’s face—they did it too. However, just because they’re being hypocritical does not make the action in question right, and it does not mean the argument they’re making is bad.

An example. In February 2016, Supreme Court Justice Antonin Scalia died unexpectedly. President Obama, as is his constitutional duty, nominated a successor. The Senate is supposed to ‘advise and consent’ (or not consent) to such nominations, but instead of holding hearings on the nominee (Merrick Garland), the Republican leaders of the Senate declared that they wouldn’t even consider the nomination. Since the presidential primary season had already begun, they reasoned, they should wait until the voters had spoken and allow the new president to make a nomination. Democrats objected strenuously, arguing that the Republicans were shirking their constitutional duty. The response was classic tu quoque. A conservative writer asked, “Does any sentient human being believe that if the Democrats had the Senate majority in the final year of a conservative president’s second term—and Justice [Ruth Bader] Ginsburg’s seat came open—they would approve any nominee from that president?”[6] Senate Majority Leader Mitch McConnell said that he was merely following the “Biden Rule,” a principle advocated by Vice President Joe Biden when he was a Senator, back in the election year of 1992, that then-President Bush should wait until after the election season was over before appointing a new Justice (the rule was hypothetical; there was no Supreme Court vacancy at the time).

This is a fallacious argument. Whether or not Democrats would do the same thing if the circumstances were reversed is irrelevant to determining whether that’s the right, constitutional thing to do.

Red Herring

One final fallacy of relevance, the red herring. Red herring fallacies aren’t trying to get you to accept their conclusion—they instead attempt to distract from the issue altogether.

This fallacy gets its name from the actual fish, though there’s some debate about exactly how. Here’s one story that’s told: when herring are smoked, they turn red and are quite pungent. Stinky things can be used to distract hunting dogs, who of course follow the trail of their quarry by scent; if you pass over that trail with a stinky fish and run off in a different direction, the hound may be distracted and follow the wrong trail. Whether or not this practice was ever used to train hunting dogs, as some suppose, the connection to logic and argumentation is clear. One commits the red herring fallacy when one attempts to distract one’s audience from the main thread of an argument, taking things off in a different direction. The diversion is often subtle, with the detour starting on a topic closely related to the original—but gradually wandering off into unrelated territory. The tactic is often (but not always) intentional: one commits the red herring fallacy because one is not comfortable arguing about a particular topic on the merits, often because one’s case is weak; so instead, the arguer changes the subject to an issue about which they feel more confident, making strong points on the new topic, and pretending to have won the original argument.[7]

A fictional example can illustrate the technique. Consider Frank, who, after a hard day at work, heads to the tavern to unwind. He has far too much to drink, and, unwisely, decides to drive home. Well, he’s swerving all over the road, and he gets pulled over by the police. Let’s suppose that Frank has been pulled over in a posh suburb where there’s not a lot of crime. When the police officer tells him he’s going to be arrested for drunk driving, Frank becomes belligerent:

Where do you get off? You’re barely even real cops out here in the ’burbs. All you do is sit around all day and pull people over for speeding and stuff. Why don’t you go investigate some real crimes? There’s probably some unsolved murders in the inner city they could use some help with. Why do you have to bother a hard-working citizen like me who just wants to go home and go to bed?

Frank is committing the red herring fallacy (and not very subtly). The issue at hand is whether or not he deserves to be arrested for driving drunk. He clearly does. Frank is not comfortable arguing against that position on the merits. So he changes the subject—to one about which he feels like he can score some debating points. He talks about the police out here in the suburbs, who, not having much serious crime to deal with, spend most of their time issuing traffic violations. Yes, maybe that’s not as taxing a job as policing in the city. Sure, there are lots of serious crimes in other jurisdictions that go unsolved. But that’s beside the point! It’s a distraction from the real issue of whether Frank should get a DUI.

Politicians use the red herring fallacy all the time. Consider a debate about Social Security—a retirement stipend paid to all workers, financed by a dedicated payroll tax. Suppose a politician makes the following argument:

We need to cut Social Security benefits, raise the retirement age, or both. As the baby boom generation reaches retirement age, the amount of money set aside for their benefits will not be enough to cover them while ensuring the same standard of living for future generations when they retire. The status quo will put enormous strains on the federal budget going forward, and we are already dealing with large, economically dangerous budget deficits now. We must reform Social Security.

Now imagine an opponent of the proposed reforms offering the following reply:

Social Security is a sacred trust, instituted during the Great Depression by FDR to ensure that no hard-working American would have to spend their retirement years in poverty. I stand by that principle. Every citizen deserves a dignified retirement. Social Security is a more important part of that than ever these days, since the downturn in the stock market has left many retirees with very little investment income to supplement government support.

The second speaker makes some good points, but notice that they do not speak to the assertion made by the first: Social Security is economically unsustainable in its current form. It’s possible to address that point head on, either by making the case that in fact the economic problems are exaggerated or non-existent, or by making the case that a tax increase could fix the problems. The respondent does neither of those things, though; he changes the subject, and talks about the importance of dignity in retirement. I’m sure he’s more comfortable talking about that subject than the economic questions raised by the first speaker, but it’s a distraction from that issue—a red herring.

Perhaps the most blatant kind of red herring is evasive: used especially by politicians, this is the refusal to answer a direct question by changing the subject. Examples are almost too numerous to cite; to some degree, no politician ever answers difficult questions straightforwardly (there’s an old axiom in politics, put nicely by Robert McNamara: “Never answer the question that is asked of you. Answer the question that you wish had been asked of you.”).

A particularly egregious example of this occurred in 2009 on CNN’s Larry King Live. Michele Bachmann, Republican Congresswoman from Minnesota, was the guest. The topic was “birtherism,” the (false) belief among some that Barack Obama was not in fact born in America and was therefore not constitutionally eligible for the presidency. After playing a clip of Senator Lindsey Graham (R, South Carolina) denouncing the myth and those who spread it, King asked Bachmann whether she agreed with Senator Graham. She responded thus:

You know, it's so interesting, this whole birther issue hasn't even been one that's ever been brought up to me by my constituents. They continually ask me, where's the jobs? That's what they want to know, where are the jobs?

Bachmann doesn’t want to respond directly to the question. If she outright declares that the “birthers” are right, she’s endorsing a clearly false belief. But if she denounces them, she alienates a lot of her potential voters who believe the falsehood. Tough bind. So, she blatantly, and rather desperately, tries to change the subject. Jobs! Let’s talk about those instead. Please?

III. Fallacies of Weak Induction

As their name suggests, what these fallacies have in common is that they are bad—that is, weak—inductive arguments. Recall, inductive arguments attempt to provide premises that make their conclusions more probable. We evaluate them according to how probable their conclusions are in light of their premises: the more probable the conclusion (given the premises), the stronger the argument; the less probable, the weaker. The fallacies of weak induction are arguments whose premises do not make their conclusions very probable—but that are nevertheless often successful in convincing people of their conclusions. We will discuss six informal fallacies that fall under this heading.

Argument from Ignorance

In essence, an argument from ignorance is an inference from premises that directly or implicitly state there’s a lack of knowledge about some topic, to a definite conclusion about that topic. We don’t know; therefore, we know!

Of course, put that baldly, it’s plainly absurd; actual instances are more subtle. The fallacy comes in a variety of closely related forms. It will be helpful to state them in bald/absurd schematic fashion first, then elucidate with more subtle real-life examples.

The first form can be put like this:

Nobody knows how to explain phenomenon X.

Therefore, my theory about X is true.

That sounds silly, but consider an example: those “documentary” programs on cable TV about aliens. You know, the ones where they suggest that extraterrestrials built the pyramids or something (there are books and websites, too). How do they get you to believe that crazy theory? By creating mystery! By pointing to facts that nobody can explain. The Great Pyramid at Giza is aligned (almost) exactly with the magnetic north pole! On the day of the summer solstice, the sun sets exactly between two of the pyramids! The height of the Great Pyramid is (almost) exactly one one-millionth the distance from the Earth to the Sun! How could the ancient Egyptians have such sophisticated astronomical and geometrical knowledge? Why did the Egyptians, careful record-keepers in (most) other respects, (apparently) not keep detailed records of the construction of the pyramids? Nobody knows. Conclusion: aliens built the pyramids.

In other words, there are all sorts of (sort of) surprising facts about the pyramids, and nobody knows how to explain them. From these premises, which establish only our ignorance, we’re encouraged to conclude that we know something: aliens built the pyramids. That’s quite a leap—too much of a leap.

Another form this fallacy takes can be put crudely thus:

Nobody can PROVE that I’m wrong.

Therefore, I’m right.

The word ‘prove’ is in all-caps because stressing it is the key to this fallacious argument: the standard of proof is set impossibly high, so that almost no amount of evidence would constitute a refutation of the conclusion.

An example will help. There are lots of people who claim that evolutionary biology is a lie: there’s no such thing as evolution by natural selection, and it’s especially false to claim that humans evolved from earlier species, that we share a common ancestor with apes. Rather, the story goes, the Bible is literally true: the Earth is only about 6,000 years old, and humans were created as-is by God just as the Book of Genesis describes. The Argument from Ignorance is one of the favored techniques of proponents of this view. They are especially fond of pointing to “gaps” in the fossil record—the so-called “missing link” between humans and a pre-human, ape-like species—and claim that the incompleteness of the fossil record vindicates their position.

But this argument is an instance of the fallacy. The standard of proof—a complete fossil record without any gaps—is impossibly high. Evolution has been going on for a LONG time (the Earth is actually about 4.5 billion years old, and living things have been around for at least 3.5 billion years). So many species have appeared and disappeared over time that it’s absurd to think that we could even come close to collecting fossilized remains of anything but the tiniest fraction of them. It’s hard to become a fossil, after all: a creature has to die under special circumstances to even have a chance for its remains to do anything other than turn into compost. And we haven’t been searching for fossils in a systematic way for very long (only since the mid-1800s or so). It’s no surprise that there are gaps in the fossil record, then. What’s surprising, in fact, is that we have as rich a fossil record as we do. Many, many transitional species have been discovered, both between humans and their ape-like ancestors, and between other modern species and their distant forebears (whales used to be land-based creatures, for example; we know this (in part) from fossils of successive proto-whale species whose rear hip and leg bones grow progressively smaller).

We will never have a fossil record complete enough to satisfy skeptics of evolution. But their standard is unreasonably high, so their argument is fallacious. Sometimes they put it even more simply: nobody was around to witness evolution in action; therefore, it didn’t happen. This is patently absurd, but it follows the same pattern: an unreasonable standard of proof (witnesses to evolution in action; impossible, since it takes place over such a long period of time),[8] followed by the leap to the unwarranted conclusion.

One final note on this fallacy: it’s common for people to mislabel certain bad arguments as arguments from ignorance; namely, arguments made by people who obviously don’t know what they’re talking about. People who are confused or ignorant about the subject on which they’re offering an opinion are liable to make bad arguments, but the fact of their ignorance is not enough to label those arguments as instances of the fallacy. We reserve that designation for arguments that take the forms canvassed above: those that rely on ignorance—and not just that of the arguer, but of the audience as well—as a premise to support the conclusion.

Appeal to Unqualified Authority

One way of making an inductive argument—of lending more credence to your conclusion—is to point to the fact that some relevant authority figure agrees with you. In law, for example, this kind of argument is indispensable: appeal to precedent (Supreme Court rulings, etc.) is the attorney’s bread and butter. And in other contexts, this kind of move can make for a strong inductive argument. If I’m trying to convince you that fluoridated drinking water is safe and beneficial, I can point to the Centers for Disease Control and Prevention, where a wealth of information supporting that claim can be found.[9] Those people are scientists and doctors who study this stuff for a living; they know what they’re talking about.

One commits the fallacy when one points to the testimony of someone who’s not a reliable authority on the issue at hand. There are several things that disqualify someone from being a reliable authority; the most blatant form of this is when the person being relied upon is not an expert in that subject at all. This is a favorite technique of advertisers. We’ve all seen celebrity endorsements of various products. In the 2000s, Tiger Woods was in commercials selling Buicks. Tiger Woods is an expert at golf, but not Buicks.

Usually, the inappropriateness of the authority being appealed to is obvious, but sometimes it isn’t. A particularly subtle example is AstraZeneca’s hiring of Dr. Phil McGraw in 2016 as a spokesperson for their diabetes outreach campaign. AstraZeneca is a drug manufacturing company. They make a diabetes drug called Bydureon. The aim of the outreach campaign, ostensibly, is to increase awareness among the public about diabetes; but of course the real aim is to sell more Bydureon. A celebrity like Dr. Phil can help. Is he an appropriate authority? That’s a hard question to answer. It’s true that Dr. Phil has suffered from diabetes himself for 25 years, and that he personally takes the medication. So that’s a mark in his favor, authority-wise. But is that enough? We’ll talk about how feeble Phil’s sort of anecdotal evidence is in supporting general claims (in this case, about a drug’s effectiveness) when we discuss the hasty generalization fallacy; suffice it to say, one person’s positive experience doesn’t prove that the drug is effective. But, Dr. Phil isn’t just a person who suffers from diabetes; he’s a doctor! It’s right there in his name (everybody always simply refers to him as ‘Dr. Phil’). Surely that makes him an appropriate authority on the question of drug effectiveness. Or maybe not. Phil McGraw is not a medical doctor; he’s a PhD. He has a doctorate in Psychology. He’s not a licensed psychologist; he cannot legally prescribe medication. He has no relevant professional expertise about drugs and their effectiveness. He is not a qualified medical authority in this case. He looks like one, though, which makes this a very sneaky, but effective, advertising campaign.

Of course, even a qualified expert in the relevant subject should be doubted if they appear in an ad. Not being an expert disqualifies someone, but having reasons to distort the truth also disqualifies someone from being a reliable expert. Humans lie and distort the truth for many reasons—to protect themselves or someone they love, to get a job or promotion or get out of trouble, and, of course, money is a huge motivation to distort the truth. Michael Jordan, a very talented basketball player, has spent years selling athletic shoes. Even though he’s not a scientist comparing the objective qualities of one shoe versus another, he is a man who has spent decades relying on a good pair of shoes to get him from one side of the court to the other. This qualifies him to at least have an opinion we should listen to. However, the fact that he’s making a great deal of money from selling a particular brand of shoes disqualifies him from being a reliable expert.

A third thing disqualifies a person from being a reliable expert, and that’s if there’s reason to believe they somehow can’t perceive the facts clearly. In court, eyewitnesses to a crime are considered experts in what happened; it’s up to the lawyers to determine whether they’re reliable or not. We see this in the movie Twelve Angry Men, a famous courtroom drama from 1957. In the movie, a young man is accused of killing his father, and there are two witnesses who testify: one who heard the crime from the apartment below, and one who saw the crime from across the street through the windows of a passing train. Eleven jurors vote the young man guilty, but one votes not guilty, because there is reasonable doubt. A passing train would make it difficult to hear the crime from another apartment, and the woman who saw the crime not only did so through the windows of the train instead of getting a clear view but also, as it turns out, was not wearing her glasses. The inability to clearly perceive the crime disqualifies both witnesses from being reliable authorities. Eleven jurors committed the appeal to unqualified authority fallacy.

A final note on this fallacy. Remember that ad hominem fallacies are when you turn your attention to the arguer, rather than their argument, and try to dismiss the argument based on some personal fact about them. If you are evaluating someone’s argument, you shouldn’t pay attention to the arguer’s personal circumstances, perceived failings, whether they have something to gain, etc. However, if you’re trusting someone as an expert, instead of evaluating their argument, you do need to turn your attention to them personally, to judge whether they’re trustworthy. So if you accuse the lawyer of arguing that the young man is guilty just because she’s being paid to make that argument, you’re committing an ad hominem circumstantial fallacy. Of course lawyers get paid to do their jobs; it’s the jury’s job to figure out whether the argument they gave was good or not. On the other hand, if you accuse the witness of saying the young man is guilty just because he’s being paid to say that, this isn’t an ad hominem fallacy. The witness is not giving an argument to evaluate; he’s stating what he supposedly witnessed, and if someone is bribing him to say that, he’s disqualified from being a reliable authority.

False Cause Fallacies

False cause fallacies are mistakes humans make in causal reasoning. There are many, but I want to introduce three: post hoc ergo propter hoc, non causa pro causa, and oversimplified cause. These are the fallacies we’re trying to avoid by using Mill’s Methods to provide evidence for (or against) our causal assumptions. The first two completely misunderstand a causal link; the third one is partially correct about a causal link, but in a way that is often misleading.

1. Post Hoc Ergo Propter Hoc

This is Latin for “after this, therefore because of this.” It is often shortened to just “post hoc.” The post hoc fallacy is where we see causation where there is none, because one thing happened right after another one. Usually, there are two significant events that caught our attention, and our brain forms a causal link between them. (If two events are insignificant, we usually don’t leap to a causal connection. This morning I brushed my teeth and then a little later, I put my shoes on. I had to stop and think what I did after I brushed my teeth; neither of these events is notable, so I don’t even think about them. I certainly don’t think that brushing my teeth caused me to put shoes on.)

Here are a few examples of the post hoc fallacy.

In 2013, Beyoncé played the halftime show during the Super Bowl. It was a spectacular show, with great music, dance routines, and special effects. About twenty minutes into the second half of the game, the power went out in the Superdome. It took about 40 minutes before the power was fully restored. Fortunately, there was enough power that the commentators could talk about how the lights were still off…

Of course the immediate assumption was that Beyoncé’s show caused the power outage. (It did not. Beyoncé’s crew brought their own generators; they didn’t even use the Superdome’s power.)

See the “after this, therefore because of this” structure? Beyoncé played, and then the lights went out. So people (falsely) assumed that her show caused the lights to go out. Here’s another one:

The governor of California gave a speech, and then an earthquake hit Los Angeles. He needs to stop giving speeches; he’s putting people in danger!

He spoke, and then the earthquake hit, so the assumption is he caused the earthquake, maybe by bringing bad luck to the residents of Los Angeles. Superstitions are often reinforced by the post hoc fallacy. If a black cat crosses my path, and then I get in a car wreck, I will assume that the black cat crossing my path caused my bad luck.

2. Non Causa Pro Causa

The Latin here translates to “not the cause for the effect.” Just like with post hoc, I have completely misidentified a causal link. However, this time it’s not because one event happened before another one, it’s because the two things happen at the same time, or in the same place, or I notice some other correlation between them, and use this correlation to assume there’s a causal link. As the saying goes, correlation is not causation. Here are a few examples.

Every time the governor of California gives a speech, a natural disaster happens somewhere in the world. He needs to stop giving speeches; he’s putting people in danger!

Notice how this is different from the example in the last section. The post hoc involved two events that happened one right after the other. The non causa finds a correlation: these two things keep happening at the same time.

Now, correlation can give you a sense that two things might be causally linked. If every venue Beyoncé ever played in suffered a power outage, I’d start to suspect her show did have something to do with it. But simply noticing a correlation is not proof.

Here’s another example:

[Figure: a line graph in which “per capita consumption of cheese (US)” and “number of people who died by becoming tangled in their bedsheets” track each other closely over time.]

This chart is from Tyler Vigen’s web page, Spurious Correlations. If we drew from it the conclusion that eating cheese causes people to die by becoming tangled in their bedsheets (or that people dying in their bedsheets is causing more people to eat cheese), we would be committing the non causa fallacy. And, a famous example:

Every time ice cream sales go up, so do shark attacks! Sharks must really like the taste of ice-cream stuffed humans.

This is an interesting one, because there is actually an indirect causal relationship between ice cream sales and shark attacks—they share a cause. Buying (and presumably eating) ice cream does not cause sharks to attack; shark attacks do not cause a rise in ice cream sales. Rather, hot weather is responsible both for ice cream sales rising and for humans going swimming in shark-infested water.[10]
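A small simulation can make the common-cause pattern vivid. Here is a sketch with made-up numbers, not real data: daily temperature drives both ice cream sales and shark encounters, and the two series end up correlated even though neither causes the other.

```python
import random
import statistics

random.seed(0)

# Hypothetical year of daily temperatures (degrees C): the common cause.
temps = [random.gauss(20, 8) for _ in range(365)]

# Both series depend on temperature plus independent noise;
# neither depends on the other.
ice_cream_sales = [50 + 10 * t + random.gauss(0, 40) for t in temps]
shark_attacks = [max(0.0, 0.05 * t + random.gauss(0, 0.5)) for t in temps]

# A clear positive correlation appears anyway (requires Python 3.10+).
r = statistics.correlation(ice_cream_sales, shark_attacks)
print(f"correlation: {r:.2f}")  # roughly 0.5-0.6 with this seed
```

Controlling for temperature (for example, comparing only days with similar temperatures) would make the correlation largely vanish; that is the signature of a shared cause rather than a direct causal link.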

3. Oversimplified cause

The fallacy of oversimplified cause is where you’re dealing with a complex phenomenon with many moving parts, but you pick a partial cause and pin the full weight of blame (or praise) on that one thing. It doesn’t completely misidentify a causal connection, like post hoc and non causa; the thing you’ve picked out does play a causal role, but you’ve oversimplified by ignoring all of the other parts of the equation. Here’s an example:

The economy of California is thriving. It must be because of the governor’s wise economic policies.

Or

The economy of California is crashing. It must be because of the governor’s idiotic economic policies.

The truth is that the governor’s economic policies do have an effect on the economy of the state. However, one person is never solely responsible for an entire state’s economy. Further, economic policies never have an immediate effect. If government policies do partially cause a change in the economy, we’re unlikely to see that change for several years, usually during the next governor’s term of office. This is the fallacy of oversimplified cause.

Slippery Slope

Like the false cause fallacies, the slippery slope fallacy is a weak inductive argument to a conclusion about causation. This fallacy involves making an insufficiently supported claim that a certain action or event will set off an unstoppable causal chain-reaction—putting us on a slippery slope—leading to some disastrous effect.

This style of argument was a favorite tactic of religious conservatives who opposed gay marriage. They claimed that legalizing same-sex marriage would put the nation on a slippery slope to disaster. Famous Christian leader Pat Robertson, on his television program The 700 Club, put the case nicely. When asked about gay marriage, he responded with this:

We haven’t taken this to its ultimate conclusion. You’ve got polygamy out there. How can we rule that polygamy is illegal when you say that homosexual marriage is legal? What is it about polygamy that’s different? Well, polygamy was outlawed because it was considered immoral according to Biblical standards. But if we take Biblical standards away in homosexuality, well what about the other? […] You mark my words, this is just the beginning of a long downward slide in relation to all the things that we consider to be abhorrent.

This is a classic slippery slope fallacy; he even uses the phrase ‘long downward slide’! The claim is that allowing gay marriage will force us to decriminalize polygamy, and ultimately, “all the things that we consider to be abhorrent.” Yikes! That’s a lot of things. Apparently, gay marriage will lead to utter anarchy.

There are genuine unstoppable causal chain-reactions out there—but this isn’t one of them. The mark of the slippery slope fallacy is the assertion that the chain can’t be stopped, with reasons that are insufficient to back up that assertion. In this case, Pat Robertson has given us the abandonment of “Biblical standards” as the lubrication for the slippery slope. This is obviously insufficient. Biblical standards are expressly forbidden, by the “establishment clause” of the First Amendment to the U.S. Constitution, from forming the basis of the legal code. The slope is not slippery. As recent history has shown, the legalization of same sex marriage did not lead to the legalization of polygamy; the argument is fallacious.

Fallacious slippery slope arguments have long been deployed to resist social change. Those opposed to the abolition of slavery warned of economic collapse and social chaos. Those who opposed women’s suffrage asserted that it would lead to the dissolution of the family, rampant sexual promiscuity, and social anarchy. Of course none of these dire predictions came true; the slopes simply weren’t slippery.

Hasty Generalization

Many inductive arguments involve an inference from particular premises to a general conclusion; this is a generalization. For example, if you make a bunch of observations every morning that the sun rises in the east, and conclude on that basis that, in general, the sun always rises in the east, this is a generalization. And it’s a good one! With all those particular sunrise observations as premises, your conclusion that the sun always rises in the east has a lot of support; that’s a strong inductive argument.

The data you rely on when generalizing is called your “sample.” The sample for the above generalization is the collection of all the times you saw the sun rise in the east. One commits the hasty generalization fallacy when one makes this kind of inference based on a sample that is too small, not sufficiently random, or in some way biased. A random sample is one where every member of the group has an equal chance of being selected into your sample; random samples that are large enough have a pretty good chance of being a fair representation of the population.[11] Bias refers to anything which slants your data. Suppose you want to know who will win the next presidential election in the United States, so you take a survey. If you ask two thousand people who they intend to vote for, and all two thousand are from Massachusetts, your sample is not random, and it’s also biased, since Massachusetts is a blue state—you’ve biased your survey in favor of the Democratic candidate.
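Here is a sketch (with invented numbers) of why the Massachusetts-only survey goes wrong: a truly random sample of 2,000 tracks the whole electorate, while a sample drawn entirely from one lopsided region does not.

```python
import random

random.seed(1)

# Hypothetical electorate: 30% of voters live in "blue" regions that vote
# 65% Democratic; the other 70% vote 45% Democratic.
# True national Democratic share: 0.3 * 0.65 + 0.7 * 0.45 = 0.51.
population = (
    [("blue", random.random() < 0.65) for _ in range(300_000)]
    + [("other", random.random() < 0.45) for _ in range(700_000)]
)

def dem_share(sample):
    return sum(votes_dem for _, votes_dem in sample) / len(sample)

random_sample = random.sample(population, 2000)                   # every voter equally likely
biased_sample = [v for v in population if v[0] == "blue"][:2000]  # one region only

print(f"whole population: {dem_share(population):.3f}")     # about 0.51
print(f"random sample:    {dem_share(random_sample):.3f}")  # close to 0.51
print(f"biased sample:    {dem_share(biased_sample):.3f}")  # about 0.65, badly off
```

Note that making the biased sample bigger wouldn’t help: polling another ten thousand Massachusetts voters just gives a more precise estimate of the wrong number.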

People who deny that global warming is a genuine phenomenon often commit the hasty generalization fallacy. In February of 2015, the weather was unusually cold in Washington, DC. Senator James Inhofe of Oklahoma famously took to the Senate floor wielding a snowball. “In case we have forgotten, because we keep hearing that 2014 has been the warmest year on record, I ask the chair, ‘You know what this is?’ It’s a snowball, from outside here. So it’s very, very cold out. Very unseasonable.” He then tossed the snowball at his colleague, Senator Bill Cassidy of Louisiana, who was presiding over the debate, saying, “Catch this.”

Senator Inhofe commits the hasty generalization fallacy. He’s trying to establish a general conclusion—that 2014 wasn’t the warmest year on record, or that global warming isn’t really happening (he’s on the record that he considers it a “hoax”). But the evidence he presents is insufficient to support such a claim. His evidence is an unseasonable coldness in a single place on the planet, on a single day. We can’t derive from that any conclusions about what’s happening, temperature-wise, on the entire planet, over a long period of time. That the earth is warming is not a claim that everywhere, at every time, it will always be warmer than it was; the claim is that, on average, across the globe, temperatures are rising. This is compatible with a couple of cold snaps in the nation’s capital.

Many people are susceptible to hasty generalizations in their everyday lives. When we rely on anecdotal evidence to make decisions, we commit the fallacy. Suppose you’re thinking of buying a new car, and you’re considering a Subaru. Your neighbor has a Subaru. So what do you do? You ask your neighbor how he likes his Subaru. He tells you it runs great, hasn’t given him any trouble. You then, fallaciously, conclude that Subarus must all be terrific cars. But one person’s testimony isn’t enough to justify that conclusion; you’d need to look at many, many more drivers’ experiences to reach such a conclusion (this is why the magazine Consumer Reports is so useful).

A particularly pernicious instantiation of the Hasty Generalization fallacy is the development of negative stereotypes. People often make general claims about religious or racial groups, ethnicities and nationalities, based on very little experience with them. If you once got mugged by a Puerto Rican, that’s not a good reason to think that, in general, Puerto Ricans are crooks. If a waiter at a restaurant in Paris was snooty, that’s no reason to think that French people are stuck up. And yet we see this sort of faulty reasoning all the time.

Weak Analogy

A final fallacy of weak induction is the Weak Analogy fallacy. Arguments by Analogy are a fundamental form of inductive reasoning. An argument by analogy draws a conclusion based on a comparison between two or more things. It’s the form of reasoning that lets us see patterns and extend those patterns to new situations. Many forms of inductive arguments have at least an element of analogy in them. For example, think of the following prediction:

On the last 100 days with weather conditions like today, it rained in the evening. Therefore, it will rain this evening.

This is a logical prediction: using data from the past to draw a conclusion about the future. But there’s an element of analogy here, too. The reason I can feel confident predicting the future here is that I’m looking at days similar to today; I’m comparing today to past days with specific similarities. Because all 101 days (the 100 in the past and today) share similar weather conditions, we predict they will share similar precipitation as well.

Or, consider a generalization:

Of the 2,000 Americans polled, 55% of them said they prefer Pepsi to Coke. Therefore, 55% of Americans prefer Pepsi to Coke.[12]

You learned above that in order to make a good generalization, your sample needs to be large enough, free of bias, and either random or representative. A random sample is one where every single member of the population has an equal chance of being included in the sample. A representative sample is one where the researcher purposefully constructs a sample that has the same proportion of key demographic groups as the population at large. Both random and representative samples are methods of making sure your sample is like the population at large in key ways. The generalization only works if we can draw a firm analogy between the sample and the population.
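The difference between the two methods can be sketched in code (the age groups and proportions below are invented for illustration): a random sample lets chance approximate the population’s proportions, while a representative (stratified) sample enforces them exactly.

```python
import random

random.seed(2)

groups = ["18-34", "35-54", "55+"]
shares = [0.30, 0.40, 0.30]  # assumed population proportions

# Hypothetical population of 100,000 people, each tagged with an age group.
population = random.choices(groups, weights=shares, k=100_000)

# Random sample: every member has an equal chance of selection.
random_sample = random.sample(population, 1000)

# Representative sample: draw from each group in proportion to its share.
representative_sample = []
for group, share in zip(groups, shares):
    members = [p for p in population if p == group]
    representative_sample += random.sample(members, round(1000 * share))

for g in groups:
    print(g, random_sample.count(g), representative_sample.count(g))
# The random sample's counts come out near 300/400/300 by chance;
# the representative sample's counts match those proportions exactly.
```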

The basic argument by analogy compares two things, and based on their similarities, transfers a property from one of those things to the other. For example:

Stacy’s house has 900 square feet and is all one level. Her electric bill is $150 a month. If I buy that 910-square-foot house in her neighborhood, which is also one level, I can expect my electric bill to be about the same.

Her house is similar to the one I’m considering buying: same neighborhood, same number of floors, same square footage. Given these similarities, I have a reasonable chance of having a similar electric bill.

A good argument by analogy must compare things that are not merely similar to each other, but similar in ways that are relevant to the conclusion, and there must be no significant differences between the two cases.

So, the fallacy of weak analogy is one where the similarities between the two things being compared do not warrant the conclusion. Either they’re just not very similar, or their similarities have little to no relevance to the conclusion you’re trying to draw, or you’ve discovered a relevant difference between the two things being compared. Consider the following argument:

Stacy’s house has 900 square feet and is all one level. She has a great kitchen. If I buy that 910-square-foot house in her neighborhood, which is also one level, I can expect it to also have a great kitchen.

This argument commits the fallacy of weak analogy. Square footage, number of floors, and neighborhood do have an impact on electric bills, because similar houses take similar amounts of electricity to heat and cool. These similarities, however, have nothing to do with whether a house has a good kitchen, so they do not warrant the conclusion.

Or, consider this variation:

Stacy’s house has 900 square feet and is all one level. Her electric bill is $150 a month. She’s single and works outside the home; I have three kids and work from home. If I buy that 910-square-foot house in her neighborhood, which is also one level, I can expect my electric bill to be about the same.

The similarities are still relevant, but I’ve now discovered a significant difference. How many people are using devices, lights, and so on has a big impact on electric bills. So does the fact that her house will be unoccupied (and, one hopes, with lights and devices turned off) for much of the day, whereas mine will be occupied more often. So, this too commits the fallacy of weak analogy.

IV. Fallacies of Illicit Presumption

This is a family of fallacies whose common characteristic is that they (often tacitly, implicitly) presume the truth of some claim that they’re not entitled to. They are arguments with a premise (again, often hidden) that is assumed to be true but is actually a controversial claim, one that at best requires support that’s not provided and at worst is simply false. We will look at six fallacies under this heading.

Accident

This fallacy is the reverse of the hasty generalization. That was a fallacious inference from insufficient particular premises to a general conclusion; accident is a fallacious inference from a general premise to a particular conclusion. What makes it fallacious is an illicit presumption: the general rule in the premise is assumed, incorrectly, not to have any exceptions; the particular conclusion fallaciously inferred is one of the exceptional cases.

Here’s a simple example to help make that clear:

Cutting people with knives is illegal.

Surgeons cut people with knives.

Therefore, surgeons should be arrested.

One of the premises is the general claim that cutting people with knives is illegal. While this is true in almost all cases, there are exceptions—surgery among them. We pay surgeons lots of money to cut people with knives! It is therefore fallacious to conclude that surgeons should be arrested, since they are an exception to the general rule. The inference only goes through if we presume, incorrectly, that the rule is exceptionless.

Another example:

Suppose I volunteer at my first-grade daughter’s school; I go into her class one day to read a book aloud to the children. As I’m sitting down on the floor with the kiddies, criss-cross applesauce, as they say, I realize that I can’t comfortably sit that way because of the .44 Magnum revolver that I have tucked into my waistband.[13] So I remove the piece from my pants and set it down on the floor in front of me, among the circled-up children. The teacher screams and calls the office, the police are summoned, and I’m arrested. As they’re hauling me out of the room, I protest: “The Second Amendment to the Constitution guarantees my right to keep and bear arms! This state has a ‘concealed carry’ law, and I have a license to carry that gun! Let me go!”

I’m committing the fallacy of Accident in this story. True, the Second Amendment guarantees the right to keep and bear arms; but that rule is not without exceptions. Similarly, concealed carry laws also have exceptions—among them being a prohibition on carrying weapons into elementary schools. My insistence on being released only makes sense if we presume, incorrectly, that the legal rules I’m citing are without exception.

One more example from real life:

After the financial crisis in 2008, the Federal Reserve—the central bank in the United States, whose task it is to create conditions leading to full employment and moderate inflation—found itself in a bind. The economy was in a free-fall, and unemployment rates were skyrocketing, but the usual tool it used to mitigate such problems—cutting the short-term federal funds rate (an interest rate banks charge each other for overnight loans)—was unavailable, because they had already cut the rate to zero (the lowest it could go). So they had to resort to unconventional monetary policies, among them something called “quantitative easing”. This involved the purchase, by the Federal Reserve, of financial assets like mortgage-backed securities and longer-term government debt (Treasury notes).[14]

Now, the nice thing about being the Federal Reserve is that when you want to buy something—in this case a bunch of financial assets—it’s really easy to pay for it: you have the power to create new money out of thin air! That’s what the Federal Reserve does; it controls the amount of money that exists. So if the Fed wants to buy, say, $10 million worth of securities from Bank of America, they just press a button and presto—$10 million that didn’t exist a second ago comes into being as an asset of Bank of America.[15]

This quantitative easing policy was controversial. Many people worried that it would lead to runaway inflation. Generally speaking, the more money there is, the less each bit of it is worth. So creating more money makes things cost more—inflation. The Fed was creating money on a very large scale—on the order of a trillion dollars. Shouldn’t that lead to a huge amount of inflation?

Economist Art Laffer thought so. In June of 2009, he wrote an op-ed in the Wall Street Journal warning that “[t]he unprecedented expansion of the money supply could make the '70s look benign.”[16] (There was a lot of inflation in the ’70s.)

Another famous economist, Paul Krugman, accused Laffer of committing the fallacy of accident. While it’s generally true that an increase in the supply of money leads to inflation, that rule is not without exceptions. Krugman had described such exceptional circumstances in 1998[17], and pointed out that the economy of 2009 was in that condition (which economists call a “liquidity trap”): “Let me add, for the 1.6 trillionth time, we are in a liquidity trap. And in such circumstances a rise in the monetary base does not lead to inflation.”[18]

It turns out Krugman was correct. The expansion of the money supply did not lead to runaway inflation; as a matter of fact, inflation remained below the level that the Federal Reserve wanted, barely moving at all. Laffer had indeed committed the fallacy of accident.
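For readers who want the mechanics behind this dispute, the usual textbook statement of the money-and-inflation rule is the quantity-theory identity (standard economics, not something either economist stated in this exchange):

MV = PY

Here M is the money supply, V is the velocity of money (how often each dollar changes hands), P is the price level, and Y is real output. If V and Y hold roughly steady, an increase in M must show up as an increase in P, i.e., inflation. Krugman’s liquidity-trap point, roughly put, is that V is not steady in such circumstances: newly created money sits idle rather than changing hands, so M can rise dramatically while P barely moves.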

Begging the Question (Petitio Principii)

First things first: “begging the question” is not synonymous with “raising the question;” this is an extremely common usage, but it is wrong. You might hear a newscaster say, “Today Donald Trump’s private jet was spotted at the Indianapolis airport, which begs the question: ‘Will he choose Indiana Governor Mike Pence as running mate?’” This is a mistaken usage of “begs the question;” the newscaster should have said “raises the question” instead.

“Begging the question” is a translation of the Latin ‘petitio principii’, which refers to the practice of asking (begging, petitioning) your audience to grant you the truth of a claim (principle) as a premise in an argument—but it turns out that the claim you're asking for is either identical to, or presupposes the truth of, the very conclusion of the argument you're trying to make.

In other words, when you beg the question, you're arguing in a circle: one of the reasons for believing the conclusion is the conclusion itself! It’s a Fallacy of Illicit Presumption where the proposition being presumed is the very proposition you’re trying to demonstrate; that’s clearly an illicit presumption.

Here’s a stark example. If I'm trying to convince you that Donald Trump is a dangerous idiot (the conclusion of my argument is ‘Donald Trump is a dangerous idiot’), then I can't ask you to grant me the claim ‘Donald Trump is a dangerous idiot’. The premise can't be the same as the conclusion. Imagine a conversation:

Me: “Donald Trump is a dangerous idiot.”

You: “Really? Why do you say that?”

Me: “Because Donald Trump is a dangerous idiot.”

You: “So you said. But why should I agree with you? Give me some reasons.”

Me: “Here's a reason: Donald Trump is a dangerous idiot.”

And round and round we go. Circular reasoning; begging the question.

It's not always so blatant. Sometimes the premise is not identical to the conclusion, but merely presupposes its truth. Why should we believe that the Bible is true? Because it says so right there in the Bible that it’s the infallible Word of God. This premise is not the same as the conclusion, but it can only support the conclusion if we take the Bible's word for its own truthfulness, i.e., if we assume that the Bible is true. But that was the very claim we were trying to prove!

Sometimes the premise is just a re-wording of the conclusion. Consider this argument:

To allow every man unbounded freedom of speech must always be, on the whole, advantageous to the state; for it is highly conducive to the interests of the community that each individual should enjoy a liberty, perfectly unlimited, of expressing his sentiments.[19]

Replacing synonyms with synonyms, this comes down to “Free speech is good for society because free speech is good for society.” Not a good argument.[20]

Loaded Questions

Loaded questions are questions the very asking of which presumes the truth of some claim. Asking these can be an effective debating technique, a way of sneaking a controversial claim into the discussion without having outright asserted it.

The classic example of a loaded question is, “Have you stopped doing drugs?” Notice that this is a yes-or-no question, and no matter which answer one gives, one admits to doing drugs: if the answer is ‘no’, then the person continues to do drugs; if the answer is ‘yes’, then he admits to doing drugs in the past. Either way, he’s done drugs. The question itself presumes the truth of this claim; that’s what makes it “loaded”.

Strategic deployment of loaded yes-or-no questions can be an extremely effective debating technique. If you catch your opponent off-guard, they will struggle to respond to your question, since a simple ‘yes’ or ‘no’ commits them to the truth of the illicit presumption, which they want to deny. This makes them look evasive, shifty. And as they struggle to come up with a response, you can pounce on them: “It’s a simple question. Yes or no? Why won’t you answer the question?” It’s a great way to appear to be winning a debate, even if you don’t have a good argument. Imagine the following dialogue:

TV Host: “Are you or are you not in favor of the president’s plan to force wealthy business owners to pay their fair share in taxes to protect the vulnerable and aid this nation’s underprivileged?”

Guest: “Well, I don’t agree with the way you’ve laid out the question. As a matter of fact…”

Host: “It’s a simple question. Should business owners pay their fair share; yes or no?”

Guest: “You’re implying that the president’s plan would correct some injustice. But corporate taxes are already very…”

Host: “Stop avoiding the question! It’s a simple yes or no!”

Combine this with the sort of subconscious appeal to force discussed above—yelling, finger-pointing, etc.—and the host might come off looking like the winner of the debate, with his opponent appearing evasive, uncooperative, and inarticulate.

Another use for loaded questions is the particularly sneaky political practice of “push polling”. In a normal opinion poll, you call people up to try to discover what their views are about the issues. In a push poll, you call people up pretending to be conducting a normal opinion poll, pretending only to be interested in discovering their views, but with a different intention entirely: you don’t want to know what their views are; you want to shape their views, to convince them of something. And you use loaded questions to do it.

A famous example of this occurred during the Republican presidential primary in 2000:

George W. Bush was the front-runner, but was facing a surprisingly strong challenge from the upstart John McCain. After McCain won the New Hampshire primary, he had a lot of momentum. The next state to vote was South Carolina; it was very important for the Bush campaign to defeat McCain there and reclaim the momentum. So they conducted a push poll designed to spread negative feelings about McCain—by implanting false beliefs among the voting public. “Pollsters” called voters and asked, “Would you be more or less likely to vote for John McCain for president if you knew he had fathered an illegitimate black child?” The aim, of course, is for voters to come to believe that McCain fathered an illegitimate black child. But he did no such thing. He and his wife adopted a daughter, Bridget, from Bangladesh.

A final note on loaded questions: there’s a minimal sense in which every question is loaded. The social practice of asking questions is governed by implicit norms. One of these is that it’s only appropriate to ask a question when there’s some doubt about the answer. So every question carries with it the presumption that this norm is being adhered to, that it’s a reasonable question to ask, that the answer is not certain. One can exploit this fact, again to plant beliefs in listeners’ minds that they otherwise wouldn’t hold. In a particularly shameful bit of alarmist journalism, the cover of the July 1, 2016, issue of Newsweek asks the question, “Can ISIS Take Down Washington?” The cover is an alarming, eye-catching shade of yellow, and shows four missiles converging on the Capitol dome. The simple answer to the question, though, is ‘no, of course not’. There is no evidence that ISIS has the capacity to destroy the nation’s capital. But the very asking of the question presumes that it’s a reasonable thing to wonder about, that there might be a reason to think that the answer is ‘yes’. The goal is to scare readers (and sell magazines) by getting them to believe there might be such a threat.

False Choice

This fallacy occurs when someone tries to convince you of something by presenting it as one of a limited number of options and the best choice among those options. The illicit presumption is that the options are limited in the way presented; in fact, there are additional options that are not offered. The choice you’re asked to make is a false choice, since not all the possibilities have been presented.

Most frequently, the number of options offered is two. In this case, you’re being presented with a false dilemma. I manipulate my kids with false choices all the time. My younger daughter, for example, loves cucumbers; they’re her favorite vegetable by far. We have a rule at dinner: you’ve got to choose a vegetable to eat. Given her ’druthers, she’d choose cucumber every night. Carrots are pretty good, too; they’re the second choice. But I need her to have some more variety, so I’ll sometimes lie and tell her we’re out of cucumbers and carrots, and that we only have two options: broccoli or green beans, for example. That’s a false choice; I’ve deliberately left out other options. I give her the false choice as a way of manipulating her into choosing green beans, because I know she dislikes broccoli.

Politicians often treat us like children, presenting their preferred policies as the only acceptable choice among an artificially restricted set of options. We might be told, for example, that we need to raise the retirement age or cut Social Security benefits across the board; the budget can’t keep up with the rising number of retirees. Well, nobody wants to cut benefits, so we have to raise the retirement age. Bummer. But it’s a false choice. There are any number of alternative options for funding an increasing number of retirees: tax increases, re-allocation of other funds, means-testing for benefits, etc.

Liberals are often ambivalent about free trade agreements. On the one hand, access to American markets can help raise the living standards of people from poor countries around the world; on the other hand, such agreements can lead to fewer jobs for American workers in certain sectors of the economy (e.g., manufacturing). So what to do? Support such agreements or not? Seems like an impossible choice: harm the global poor or harm American workers. But it may be a false choice, as this economist argues:

But trade rules that are more sensitive to social and equity concerns in the advanced countries are not inherently in conflict with economic growth in poor countries. Globalization’s cheerleaders do considerable damage to their cause by framing the issue as a stark choice between existing trade arrangements and the persistence of global poverty. And progressives needlessly force themselves into an undesirable tradeoff.

… Progressives should not buy into a false and counter-productive narrative that sets the interests of the global poor against the interests of rich countries’ lower and middle classes. With sufficient institutional imagination, the global trade regime can be reformed to the benefit of both.[21]

When you think about it, almost every election in America is a False Choice. With the dominance of the two major political parties, we’re normally presented with a stark, sometimes unpalatable, choice between only two options: the Democrat or the Republican. But of course, if enough people decided to vote for a third-party candidate, that person could win. Such candidates do exist. But it’s perceived as wasting a vote when you choose someone like that.

This fact was memorably highlighted on The Simpsons back in the fall of 1996, before the presidential election between Bill Clinton and Bob Dole. In the episode, the diabolical, scheming aliens Kang and Kodos (the green guys with the tentacles and giant heads who drool constantly) contrive to abduct the two major-party candidates and perform a “bio-duplication” procedure that allows Kang and Kodos to appear as Dole and Clinton, respectively. The disguised aliens hit the campaign trail and give speeches, making bizarre campaign promises.[22] When Homer reveals the subterfuge to a horrified crowd, Kodos taunts the voters: “It’s true; we are aliens. But what are you going to do about it? It’s a two-party system. You have to vote for one of us.” When a guy in the crowd declares his intention to vote for a third-party candidate, Kang responds, “Go ahead, throw your vote away!” Then Kang and Kodos laugh maniacally. Later, as Marge and Homer—chained together and wearing neck-collars—are being whipped by an alien slave-driver, Marge complains and Homer quips, “Don’t blame me; I voted for Kodos.”

Composition

The fallacy of Composition rests on an illicit presumption about the relationship between a whole thing and the parts that make it up. This is an intuitive distinction, between whole and parts: for example, a person can be considered as a whole individual thing; it is made up of lots of parts—hands, feet, brain, lungs, etc., etc. We commit the fallacy of Composition when we mistakenly assume that any property that all of the parts share is also a property of the whole. Schematically, it looks like this:

All of the parts of X have property P.

Any property shared by all of the parts of a thing is also a property of the whole.

Therefore, X has the property P.

The second premise is the illicit presumption that makes this argument go through. It is illicit because it is simply false: sometimes all the parts of something have a property in common, but the whole does not have that property.

Consider the 1980 U.S. Men’s Hockey Team. They won the gold medal at the Olympics that year, beating the unstoppable-seeming Russian team in the semifinals. (That game is often referred to as “The Miracle on Ice” after announcer Al Michaels’ memorable call as the seconds ticked off at the end: “Do you believe in miracles? Yes!”) Famously, the U.S. team that year was a rag-tag collection of no-name college guys; the average age on the team was 21, making them the youngest team ever to compete for the U.S. in the Olympics. The Russian team, on the other hand, was packed with seasoned hockey veterans with world-class talent.

In this example, the team is the whole, and the individual players on the team are the parts. It’s safe to say that one of the properties that all of the parts shared was mediocrity—at least, by the standards of international competition at the time. They were all good hockey players, of course—Division I college athletes—but compared to the Hall of Famers the Russians had, they were mediocre at best. So, all of the parts have the property of being mediocre. But it would be a mistake to conclude that the whole made up of those parts—the 1980 U.S. Men’s Hockey Team—also had that property. The team was not mediocre; they defeated the Russians and won the gold medal! They were a classic example of the whole being greater than the sum of its parts.

Division

The fallacy of Division is the exact reverse of the fallacy of Composition. It’s an inference from the fact that a whole has some property to a conclusion that a part of that whole has the same property, based on the illicit presumption that wholes and parts must have the same properties. Schematically:

X has the property P.

Any property of a whole thing is shared by all of its parts.

Therefore x, which is a part of X, has property P.

The second premise is the illicit presumption. It is false, because sometimes parts of things don’t have the same properties as the whole. George Clooney is handsome; does it follow that his large intestine is also handsome? Of course not. Toy Story 3 is a funny movie. Remember when Mr. Potato Head had to use a tortilla for his body? Or when Buzz gets flipped into Spanish mode and does the flamenco dance with Jessie? Hilarious. But not all of the parts of the movie are funny. When it looks like all the toys are about to be incinerated at the dump? When Andy finally drives off to college? Not funny at all![23]

V. Fallacies of Linguistic Emphasis

Natural languages like English are unruly things. They’re full of ambiguity, shades of meaning, vague expressions; they grow and develop and change over time, often in unpredictable ways, at the capricious collective whim of the people using them. Languages are messy, complicated. This state of affairs can be taken advantage of by the clever debater, exploiting the vagaries of language to make convincing arguments that are nevertheless fallacious. This exploitation involves the manipulation of linguistic forms to emphasize facts, claims, emotions, etc. that favor one’s position, and to de-emphasize those that do not. We will survey four techniques that fall under this heading.

Accent

This is one of the original 13 fallacies that Aristotle recognized in his Sophistical Refutations. Our usage, however, will depart from Aristotle’s. He identifies a potential for ambiguity and misunderstanding that is peculiar to his language—ancient Greek. That language—in written form—used diacritical marks along with the alphabet, and transposition of these could lead to changes in meaning. English is not like this, but we can identify a fallacy that is roughly in line with the spirit of Aristotle’s accent: it is possible, in both written and spoken English (along with every other language), to convey different meanings by stressing individual words and phrases. The devious use of stress to emphasize contents that are helpful to one’s rhetorical goals, and to suppress or obscure those that are not—that is the fallacy of accent.

There are a number of techniques one can use with the written word that fall in the category of accent. Perhaps the simplest way to emphasize favorable contents, and de-emphasize unfavorable ones, is to vary the size of one’s text. We see this in advertising all the time. You drive past a store that’s having a sale, which they advertise with a sign in the window. In the largest, most eye-catching font, you read, “70% OFF!” “Wow,” you might think, “that’s a really steep discount. I should go into the store and get a great deal.” At least, that’s what the store wants you to think. They’re emphasizing the fact of (at least one) steep discount. If you look more closely at the sign, however, you’ll see the things that they’re legally required to say, but that they’d like to de-emphasize. There’s a tiny ‘Up to’ in front of the gigantic ‘70% OFF!’. For all you know, there’s one crappy item that nobody wants, tucked in the back of the store, that’s discounted at 70%; everything else has much smaller discounts, or none at all. Also, if you squint really hard, you’ll see an asterisk after the ‘70% OFF!’, which leads to some text at the bottom of the poster, in the tiniest font possible, that reads, “While supplies last. See store details. Not available in all locations. Offer not valid weekends or holidays. All sales are final.” This is the proverbial “fine print”. It makes the sale look a lot less exciting. So they hide it.

Footnotes are generally a good place to hide unfavorable content. We all know that CEOs of big companies—especially banks—get paid ridiculous sums of money. Some of it is just their salary and stock options; those amounts are huge enough to turn most people off. But there are other perks that are so over-the-top, companies and executives feel like it’s best to hide them from the public (and their shareholders) in the footnotes of CEO contracts and SEC reports. Michelle Leder runs a website called footnoted.com, which is dedicated to combing through these documents and exposing outrageous compensation packages. She’s uncovered executives spending over $700,000 to renovate their offices, demanding helicopters in addition to their corporate jets, receiving millions of dollars’ worth of private security services, etc., etc. These additional, extravagant forms of compensation seem excessive to most people, so companies do all they can to hide them from the public.

Another abuse of footnotes can occur in academic or legal writing. Legal briefs and opinions and academic papers seek to persuade. If you’re writing such a document, and you relegate a strong objection to your conclusion to a brief mention in the footnotes[24], you’re de-emphasizing that point of view and making it less likely that the reader will reject your arguments. That’s a fallacious suppression of opposing content, a sneaky trick to try to convince people you’re right without giving them a forthright presentation of the merits (and demerits) of your position.

The fallacy of accent can occur in speech as well as writing. The audible correlate of “fine print” is that guy talking really fast at the end of the commercial, rattling off all the unpleasant side effects and legal disclaimers that, if given a full, deliberate presentation might make you less likely to buy the product they’re selling. The reason, by the way, that we know about such horrors as the possibility of driving while not awake (a side-effect of some sleep aids) and a four-hour erection (side-effect of erectile-dysfunction drugs), is that drug companies are required, by federal law, not to commit the fallacy of accent if they want to market drugs directly to consumers. They have to read what’s called a “major statement” that lists all of these side-effects explicitly, and no fair cramming them in at the end and talking over them really fast.

When we speak, how we stress individual words and phrases can alter the meaning that we convey with our utterances. Consider the sentence ‘These pretzels are making me thirsty.’ Now consider various utterances of that sentence, each stressing a different word; different meanings will be conveyed:

THESE pretzels are making me thirsty. [Not those over there, these right here.]

These PRETZELS are making me thirsty. [It’s not the chips, it’s the pretzels.]

These pretzels ARE making me thirsty. [Don’t try to tell me they’re not; they are.]

And so on. We can capture the various stresses typographically by using italics (or boldface or all-caps), but if we leave that out, we lose some of the meaning conveyed by the actual, stressed utterance. One can commit the fallacy of accent by transcribing someone’s speech in a way that omits stress-indicators, and thereby obscures or alters the meaning that the person actually conveyed.

Suppose a candidate for president says, “I HOPE this country never has to wage war with Iran.” The stress on ‘hope’ clearly conveys that the speaker doubts that his hopes will be realized; the candidate has expressed a suspicion that there may be war with Iran. This speech might set off a scandal: saying such a thing during an election could negatively affect the campaign, with the candidate being perceived as a war-monger; it could upset international relations. The campaign might try to limit the damage by writing an op-ed in a major newspaper, and transcribing the candidate’s utterance without any indication of stress: “The Senator said, ‘I hope this country never has to wage war with Iran.’ This is a sentiment shared by most voters, and even our opponent.” This transcription, of course, obscures the meaning of the original utterance. Without the stress, there is no additional implication that the candidate suspects that there will in fact be a war.

Quoting out of Context

Another way to obscure or alter the meaning of what someone actually said is to quote them selectively. Remarks taken out of their proper context might convey a different meaning than they did within that context.

Consider a simple example: movie ads. These often feature quotes from film critics, which are intended to convey the impression that the movie was well-liked by them. “Critics call the film ‘unrelenting’, ‘amazing’, and ‘a one-of-a-kind movie experience’”, the ad might say. That sounds like pretty high praise. I think I’d like to see that movie. That is, until I read the actual review from which those quotes were pulled:

I thought I’d seen it all at the movies, but even this jaded reviewer has to admit that this film is something new, a one-of-a-kind movie experience: two straight hours of unrelenting, snooze-inducing mediocrity. I find it amazing that not one single aspect of this movie achieves even the level of “eh, I guess that was OK.”

The words ‘unrelenting’ and ‘amazing’—and the phrase ‘a one-of-a-kind movie experience’—do in fact appear in that review. But situated in their original context, they’re doing something completely different than the movie ad would like us to believe.

Politicians often quote each other out of context to make their opponents look bad. In the 2012 presidential campaign, both sides did it rather memorably. The Romney campaign was trying to paint President Obama as anti-business. In a campaign speech, Obama once said the following:

If you’ve been successful, you didn’t get there on your own. You didn’t get there on your own. I’m always struck by people who think, well, it must be because I was just so smart. There are a lot of smart people out there. It must be because I worked harder than everybody else. Let me tell you something: there are a whole bunch of hardworking people out there. If you’ve got a business, you didn’t build that. Somebody else made that happen.

Yikes! What an insult to all the hard-working small-business owners out there. They didn’t build their own businesses? The Romney campaign made some effective ads, with these remarks playing in the background, and small-business people describing how they struggled to get their firms going. The problem is, that quote above leaves some bits out—specifically, a few sentences before the last two. Here’s the full transcript:

If you’ve been successful, you didn’t get there on your own. You didn’t get there on your own. I’m always struck by people who think, well, it must be because I was just so smart. There are a lot of smart people out there. It must be because I worked harder than everybody else. Let me tell you something: there are a whole bunch of hardworking people out there.

If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you’ve got a business, you didn’t build that. Somebody else made that happen.

Oh. He’s not telling business owners that they didn’t build their own businesses. The word ‘that’ in “you didn’t build that” doesn’t refer to the businesses; it refers to the roads and bridges—the “unbelievable American system” that makes it possible for businesses to thrive. He’s making a case for infrastructure and education investment; he’s not demonizing small-business owners.

The Obama campaign pulled a similar trick on Romney. They were trying to portray Romney as an out-of-touch billionaire, someone who doesn’t know what it’s like to struggle, and someone who made his fortune by buying up companies and firing their employees. During one speech, Romney said: “I like being able to fire people who provide services to me.” Yikes! What a creep. This guy gets off on firing people? What, he just finds joy in making people suffer? Sounds like a moral monster. Until you see the whole speech:

I want individuals to have their own insurance. That means the insurance company will have an incentive to keep you healthy. It also means if you don’t like what they do, you can fire them. I like being able to fire people who provide services to me. You know, if someone doesn’t give me the good service that I need, I want to say I’m going to go get someone else to provide that service to me.

He’s making a case for a particular health insurance policy: self-ownership rather than employer-provided health insurance. The idea seems to be that under such a system, service will improve since people will be empowered to switch companies when they’re dissatisfied—kind of like with cell phones, for example. When he says he likes being able to fire people, he’s talking about being a savvy consumer. I guess he’s not a moral monster after all.

Equivocation

Typical of natural languages is the phenomenon of homonymy[25]: when words have the same spelling and pronunciation, but different meanings—like ‘bat’ (referring to the nocturnal flying mammal) and ‘bat’ (referring to the thing you hit a baseball with). This kind of natural-language messiness allows for potential fallacious exploitation: a sneaky debater can manipulate the subtleties of meaning to convince people of things that aren’t true—or at least not justified based on what they say. We call this kind of maneuver the fallacy of equivocation. Here’s an example:

Consider a banker; let’s call him Fred. Fred is the president of a bank, a real big-shot. He’s married, but he’s not faithful: he’s carrying on an affair with one of the tellers at his bank, Linda. Fred and Linda have a favorite activity: they take long lunches away from their workplace, having romantic picnics at a beautiful spot they found a short walk away. They lay out their blanket underneath an old, magnificent oak tree, which is situated right next to a river, and enjoy champagne and strawberries while canoodling and watching the boats float by.

One day—let’s say it’s the anniversary of when they started their affair—Fred and Linda decide to celebrate by skipping out of work entirely, spending the whole day at their favorite picnic spot. (Remember, Fred’s the boss, so he can get away with this.) When Fred arrives home that night, his wife is waiting for him. She suspects that something is up: “What are you hiding, Fred? Are you having an affair? I called your office twice, and your secretary said you were ‘unavailable’ both times. Tell me this: Did you even go to work today?” Fred replies, “Scout’s honor, dear. I swear I spent all day at the bank today.”

See what he did there? ‘Bank’ can refer either to a financial institution or the side of a river—a riverbank. Fred and Linda’s favorite picnic spot is on a riverbank, and Fred did indeed spend the whole day at that bank. He’s trying to convince his wife he hasn’t been cheating on her, and he exploits this little quirk of language to do so. That’s equivocation.

A similar linguistic phenomenon can also be exploited to equivocate: polysemy.[26] This is distinct from, but similar to, homonymy. The meanings of homonyms are typically unrelated. In polysemy, the same word or phrase has multiple, related meanings—different senses. Consider the word ‘law.’ The meaning that comes immediately to mind is the statutory one: “A rule of conduct imposed by authority.”[27] The state law prohibiting murder is an instance of a law in this sense. There is another sense of ‘law’, however; this is the sense operative when we speak of scientific laws. These are regularities in nature—Newton’s law of universal gravitation, for example. These meanings are similar, but distinct: statutes, human laws, are prescriptive; scientific laws are descriptive. Human laws tell us how we ought to behave; scientific laws describe how things actually do, and must, behave. Human laws can be violated: I could murder someone. Scientific laws cannot be violated: if two bodies have mass, they will be attracted to one another by a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them; there’s no getting around it.
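In symbols, the regularity just described is Newton’s law of universal gravitation, given here in its standard form for concreteness:

F = G m₁ m₂ / d²

where F is the attractive force between the two bodies, m₁ and m₂ are their masses, d is the distance between them, and G is the gravitational constant. The formula describes how bodies do behave; it issues no commands and admits no violations, which is exactly the contrast with laws in the statutory sense.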

A common argument for the existence of God relies on equivocation between these two senses of ‘law’:

There are laws of nature.

By definition, laws are rules imposed by an Authority. So the laws of nature were imposed by an Authority.

The only Authority who could impose such laws is an all-powerful Creator—God.

Therefore, God exists.

This argument relies on fallaciously equivocating between the two senses of ‘law’—human and natural. It’s true that human laws are by definition imposed by an authority; but that is not true of natural laws. Additional argument is needed to establish that those must be so imposed.

A famous instance of equivocation of this sort occurred in 1998, when President Bill Clinton denied having an affair with White House intern Monica Lewinsky by declaring forcefully in a press conference: “I did not have sexual relations with that woman—Ms. Lewinsky.” The president wanted to convince his audience that nothing sexually inappropriate had happened, even though, as was revealed later, lots of sex stuff had been going on. He does this by taking advantage of the polysemy of the phrase ‘sexual relations.’ In the broadest sense, the phrase connotes sexual activity of any kind—including oral sex (which Bill and Monica engaged in). This is the sense the president wants his audience to have in mind, so that they’re convinced by his denial that nothing untoward happened. But a more restrictive sense of ‘sexual relations’—a bit more old-fashioned usage—refers specifically to intercourse (which Bill and Monica did not engage in). It’s this sense that the president can fall back on if anyone accuses him of having lied; he can claim that, strictly speaking, he was telling the truth: he and Monica didn’t have ‘relations’ in the intercourse sense. Clinton later admitted to “misleading” the American people—but, importantly, not to lying.

The distinction between lying and misleading is a hard one to draw precisely, but roughly speaking it’s the difference between trying to get someone to believe something false by saying something false (lying) and trying to get them to believe something false by saying something true but deceptive (misleading). Besides homonymy and polysemy, yet another common linguistic phenomenon can be exploited to this end. This phenomenon is implicature, identified and named by the philosopher Paul Grice in the 1960s.[28] Implicatures are contents that we communicate over and above the literal meaning of what we say—aspects of what we mean by our utterances that aren’t stated explicitly. People listening to us infer these additional meanings based on the assumption that the speaker is being cooperative, observing some unwritten rules of conversational practice. To use one of Grice’s examples, suppose your car has run out of gas on the side of the road, and you stop me as I walk by, explaining your plight, and I say, “There’s a gas station right around the corner.” Part of what I communicate by my utterance is that the station is open and selling gas right now—that you can go there and solve your problem. You can infer this content based on the assumption that I’m being a cooperative conversational partner; if the station is closed or out of gas—and I knew it—then I would be acting unhelpfully, uncooperatively. Notice, though, that this content is not part of what I literally said: all I told you is that there is a gas station around the corner, which would still be true even if it were closed and/or out of gas.

Implicatures are yet another subtle aspect of meaning in natural language that can be exploited. So a final technique that we might classify under the fallacy of equivocation is false implication—saying things that are strictly speaking true, but which communicate false implicatures. Grocery stores do this all the time. You know those signs posted under, say, cans of soup that say “10 for $10”? That’s the store’s way of telling us that soup’s on sale for a buck a can; that’s right, you don’t need to buy 10 cans to get the deal; if you buy one can, it’s $1; 2 cans are $2, and so on. So why not post a sign saying “$1 per can”? Because the 10-for-$10 sign conveys the false implicature that you need to buy 10 cans in order to get the sale price. The store’s trying to drive up sales.

A striking example of false implicature is featured in one of the most prominent U.S. Supreme Court rulings on perjury law. In the original criminal case, a defendant by the name of Bronston had the following exchange with the prosecuting attorney:

Q. Do you have any bank accounts in Swiss Banks, Mr. Bronston?

A. No, sir.

Q. Have you ever?

A. The company had an account there for about six months, in Zurich.[29]

As it turns out, Bronston did not have any Swiss bank accounts at the time of the questioning, so his first answer was strictly true. But he did have Swiss bank accounts in the past. However, his second answer does not deny this. All he says is that his company had Swiss bank accounts—an answer that implicates that he himself did not. Based on this exchange, Bronston was convicted of perjury, but the Supreme Court overturned that conviction, pointing out that Bronston had not made any false statements (a requirement of the perjury statute); the falsehood he conveyed was an implicature.[30]

Manipulative Framing

Words are powerful. They can trigger emotional responses and activate associations with related ideas, altering the way we perceive the world and conceptualize issues. The language we use to describe a particular policy, for example, can affect how favorably our listeners are likely to view that proposal. How we frame issues with language can profoundly influence how persuasive our arguments about those issues will be. The technique of choosing words to frame issues intentionally to manipulate your audience is what we will call the fallacy of manipulative framing.

The importance of framing in politics has long been recognized, but only in recent decades has it been raised to an art form. One prominent practitioner of the art is Republican consultant Frank Luntz. In a 200-plus page memo he sent to Congressional Republicans in 1997, and later in a book,[31] Luntz stressed the importance of choosing persuasive language to frame issues so that voters would be more likely to support Republican positions on issues. One of his recommendations illustrates manipulative framing nicely. In the United States, if you leave a fortune to your heirs after you die, then the government taxes it (provided it’s greater than about $5.5 million, or $11 million for a couple, as of 2016). The usual name for this tax is the ‘estate tax’. Luntz encouraged Republicans—who are generally opposed to this tax—to start referring to it instead as the “death tax”. This framing is likelier to cause voters to oppose the tax as well: taxing people for dying? Talk about kicking a man when he’s down! (Polling bears this out: people oppose the tax in higher numbers when it’s called the ‘death tax’ than when it’s called the ‘estate tax’).

The linguist George Lakoff has written extensively on the subject of framing.[32] His remarks on the subject of “tax relief” nicely illustrate how framing works:

On the day that George W. Bush took office, the words tax relief started appearing in White House communiqués to the press and in official speeches and reports by conservatives. Let us look in detail at the framing evoked by this term.

The word relief evokes a frame in which there is a blameless Afflicted Person who we identify with and who has some Affliction, some pain or harm that is imposed by some external Cause-of-pain. Relief is the taking away of the pain or harm, and it is brought about by some Reliever-of-pain.

The Relief frame is an instance of a more general Rescue scenario, in which there is a Hero (The Reliever-of-pain), a Victim (the Afflicted), a Crime (the Affliction), A Villain (the Cause-of-affliction), and a Rescue (the Pain Relief). The Hero is inherently good, the Villain is evil, and the Victim after the Rescue owes gratitude to the Hero.

The term tax relief evokes all of this and more. Taxes, in this phrase, are the Affliction (the Crime), proponents of taxes are the Causes-of-Affliction (the Villains), the taxpayer is the Afflicted Victim, and the proponents of “tax relief” are the Heroes who deserve the taxpayers’ gratitude.

Every time the phrase tax relief is used and heard or read by millions of people, the more this view of taxation as an affliction and conservatives as heroes gets reinforced.[33]

Carefully chosen words can trigger all sorts of mental associations, mostly at the subconscious level, that affect how people perceive the issues and have the power to change opinions. That’s why manipulative framing is ubiquitous in public discourse.

Consider debates about illegal immigration. Those who are generally opposed to policies that favor such people will often refer to them as “illegal immigrants”. This framing emphasizes the fact that they are in this country illegally, making it likelier that the listener will also oppose policies that favor them. A further modification can further increase this likelihood: “illegal aliens.” The word ‘alien’ has a subtle dehumanizing effect; if we don’t think of them as individual people with hopes and dreams, we’re not likely to care much about them. Even more dehumanizing is a framing one often sees these days: referring to illegal immigrants simply as “illegals”. They are the living embodiment of illegality! Those who advocate on behalf of such people, of course, use different terminology to refer to them: “undocumented workers”, for example. This framing de-emphasizes the fact that they’re here illegally; they’re merely “undocumented”. They lack certain pieces of paper; what’s the big deal? It also emphasizes the fact that they are working, which is likely to cause listeners to think of them more favorably.

The use of manipulative framing in the political sphere extends to the very names that politicians give the laws they pass. Consider the healthcare reform act passed in 2010. Its official name is The Patient Protection and Affordable Care Act. Protection of patients, affordability, care—these all trigger positive associations. The idea is that every time someone talks about the law prior to and after its passage, they will use the name with this positive framing and people will be more likely to support it. As you may know, this law is commonly referred to with a different moniker: ‘Obamacare’. This is the framing of choice for the law’s opponents: any negative associations people have with President Obama are attached to the law; and any negative feelings they have about healthcare reform get attached to Obama. Late night talk show host Jimmy Kimmel demonstrated the effectiveness of framing on his show one night in 2013. He sent a crew outside his studio to interview people on the street and ask them which approach to health reform they preferred, the Affordable Care Act or Obamacare. Overwhelmingly, people expressed a preference for the Affordable Care Act over Obamacare, even though those are just two different ways of referring to the same piece of legislation. Framing is especially important when the public is ignorant of the actual content of policy proposals, which is all too often the case.

  1. This chapter is based on Fundamental Methods of Logic, by Matthew Knachel. ↑

  2. Many of the fallacies have Latin names, as identifying fallacies has been an occupation of logicians since ancient times, and because ancient and medieval European work comes down to us in Latin, which was the language of European scholarship for centuries. ↑

  3. You can watch this commercial for yourself at the following link: Grey Poupon commercial. ↑

  4. International Action Center, Feb. 4 2005, http://iacenter.org/folder06/stateoftheunion.htm ↑

  5. Comparing your opponent to Hitler—or the Nazis—is quite common. Some clever folks came up with a fake-Latin term for the tactic: Argumentum ad Nazium (cf. the real Latin phrase, ad nauseam—to the point of nausea). Such comparisons are so common that author Mike Godwin formulated “Godwin's Law of Nazi Analogies: As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.” (“Meme, Counter-meme,” Wired, 10/1/94) ↑

  6. David French, National Review, 2/14/16 ↑

  7. People often offer red herring arguments unintentionally, without the subtle deceptive motivation to change the subject—usually because they’re just parroting a red herring argument they heard from someone else. Sometimes a person’s response will be off-topic, apparently because they weren’t listening to their interlocutor or they’re confused for some reason. I prefer to label such responses as instances of Missing the Point (Ignoratio Elenchi), a fallacy that some books discuss at length, but which I’ve just relegated to a footnote. ↑

  8. Although, in the case of short-lived bacteria, where we can study multiple generations of development across the span of a few weeks, we can watch evolution take place in real time, as demonstrated by scientists from the Kishony Lab at Harvard Medical School and Technion: https://hms.harvard.edu/news/bugs-screen. ↑

  9. Check it out: https://www.cdc.gov/fluoridation/ ↑

  10. I taught with this example for years before I finally looked it up to see if the premise was true. It is not. Ice cream sales and shark attacks peak in different months. However, remember that we’re looking at the connection between premises and conclusion when we evaluate logic. IF the premise were true, how much evidence would it give for the conclusion? Answer: not very much at all. ↑

  11. Another method of ensuring the sample accurately reflects the population at large is to construct a “representative sample,” purposefully making sure the sample has the same proportions of key demographics as the population at large. ↑

  12. Like most of my statistics, this is completely made up for the sake of example. ↑

  13. That’s Dirty Harry’s gun, “the most powerful handgun in the world.” ↑

  14. The hope was to push down interest rates on mortgages and government debt, encouraging people to buy houses and spend money instead of saving it—thus stimulating the economy. ↑

  15. It’s obviously a bit more complicated than that, but that’s the essence of it. ↑

  16. Art Laffer, “Get Ready for Inflation and Higher Interest Rates,” June 11, 2009, Wall Street Journal ↑

  17. “But if current prices are not downwardly flexible, and the public expects price stability in the long run, the economy cannot get the expected inflation it needs; and in that situation the economy finds itself in a slump against which short-run monetary expansion, no matter how large, is ineffective.” From Paul Krugman, “It's baack: Japan's Slump and the Return of the Liquidity Trap,” 1998, Brookings Papers on Economic Activity, 2 ↑

  18. Paul Krugman, June 13, 2009, The New York Times ↑

  19. This is a classic example, from Richard Whately’s 1826 Elements of Logic. ↑

  20. Though it’s valid! P, therefore P is a valid form: if the premise is true, the conclusion must be; they’re the same. ↑

  21. Dani Rodrik, “A Progressive Logic of Trade,” Project Syndicate, 4/13/2016 ↑

  22. Kodos: “I am Clin-ton. As overlord, all will kneel trembling before me and obey my brutal command. End communication.” ↑

  23. I admit it: I teared up a bit; I’m not ashamed. ↑

  24. Or worse, the endnotes: people have to flip all the way to the back to see those. ↑

  25. Greek word, meaning ‘same name’. ↑

  26. Greek word, meaning ‘many signs (or meanings)’. ↑

  27. From the Oxford English Dictionary. ↑

  28. See his Studies in the Way of Words, 1989, Cambridge: Harvard University Press. ↑

  29. Bronston v. United States, 409 US 352 - Supreme Court 1973 ↑

  30. The court didn’t use the term ‘implicature’ in its ruling, but this was the thrust of their argument. ↑

  31. Frank Luntz, 2007, Words That Work: It’s Not What You Say, It’s What People Hear. New York: Hyperion. ↑

  32. See, e.g., his 2004 book, Don’t Think of an Elephant!, White River Junction, Vermont: Chelsea Green Publishing. ↑

  33. George Lakoff, 2/14/2006, “Simple Framing,” Rockridge Institute. ↑
