Confirmation Bias, Total Evidence Requirement, and Falsificationism

Overview

In the section on biases we learned that there are two basic categories of ways we can reason poorly: (1) how we think and (2) what we think. This lesson continues to focus on the former. Our brains are hardwired in certain ways that cause us to reason poorly. These are called cognitive biases. The most common cognitive bias is confirmation bias. Confirmation bias causes us to focus only on confirming evidence for our view while ignoring or trivializing any disconfirming evidence. Confirmation bias can also lead us to evaluate evidence within a misleading context. When we engage in confirmation bias we commit a fallacy called the fallacy of confirming evidence. One important tool for avoiding confirmation bias is falsificationism: since there are an infinite number of ways to confirm a hypothesis, we should instead seek to disconfirm it. That is, we should try to disprove claims and hypotheses instead of trying to prove them. Another way to avoid confirmation bias is to appeal to the total evidence requirement: to meet it, we must consider not only the evidence in favor of a claim but also the evidence against it.

Introduction

Before you begin reading, click on the link and do the test.
.
.
.
.
.
.
.
.
.
.
Hey! No cheating! I said click on the above link!
.
.
.
.
.
.
.
.
.
.
If you’re like 99% of humans on the planet, you fell prey to confirmation bias. Let’s take a few steps back before explaining what that means…

Cognitive Biases and Confirmation Bias

We can fail to reason well because the hard-wiring in our brains affects the way information is processed and interpreted. A cognitive bias is when our brain’s hard-wiring has an unconscious effect on our reasoning. Whether cognitive biases are on the whole beneficial or detrimental to our reasoning is an area of ongoing debate. However, we’ll set this issue aside for this class and operate under the assumption that in many instances cognitive biases do negatively influence our capacity to reason well.


There are hundreds of cognitive biases, but the one that causes the most errors in reasoning is confirmation bias. Confirmation bias is when we only report the “hits” and ignore the “misses”; in other words, we only include information/evidence/reasons in our argument that support our position and ignore information that disconfirms it. Stereotyping is often the result of confirmation bias. And like stereotyping, confirmation bias is often (but not always) unintentional, and everyone does it to some degree (except me). Confirmation bias is a problem because in order to know whether a conclusion is well-supported we need to evaluate it in light of all the evidence, positive and negative. To correctly evaluate a claim we must meet the total evidence requirement: we must evaluate all the evidence for and against a claim. Taking into account only positive evidence gives us a distorted picture of a claim’s truth.

When an argument engages in confirmation bias it commits the fallacy of confirming evidence.

Let’s consider an example.

It’s a fairly widespread belief that men are better drivers than women. But how do people arrive at this belief? If you hold this belief, here’s what will typically happen. A driver cuts you off. You look and notice it’s a woman. You think to yourself, “typical women drivers.” Hypothesis confirmed! You get cut off again. You look and notice it’s a guy. You shrug but don’t think “typical male driver.” You also don’t think “oh, maybe men and women are both bad drivers.” Maybe you think to yourself, “that guy was a jerk.” But you never consider the incident as evidence for the claim that men are bad drivers. Nope. You totally forget about the incident and never count it against your hypothesis.

This is how confirmation bias works. We have a belief or hypothesis, and anytime we encounter confirming evidence we whisper softly to ourselves, “See! I knew it!” But anytime something doesn’t fit our hypothesis we ignore it and fail to adjust. Our brain is a confirmation-seeking machine, not a truth-seeking machine.

If our brain is wired for bad reasoning in a way that’s invisible to us, how do we avoid doing it?

Falsificationism

It’d be helpful to bring in a little philosophy here. Please meet my good friend Karl Popper (no relation to the inventor of the popular snack food known as Jalapeño Poppers).

Popper made a very important philosophical observation in regards to how we can test a hypothesis: he said we cannot test for a hypothesis’ truth, but rather we can only test for its falsity. The method of testing a hypothesis by trying to disconfirm rather than confirm it is called falsificationism. There are infinitely many ways to confirm a hypothesis, but it takes only one disconfirming instance to show that it is false. We should focus on looking to falsify rather than to confirm; otherwise we risk engaging in confirmation bias.

In fancy-talk we refer to an instance of a falsification as a counterexample. A counterexample is an example that contradicts a general claim (more on this in a moment).

Let’s go back to our “women are bad drivers” hypothesis. How many ways can I confirm my hypothesis? Any time a woman cuts me off. Every time a woman cuts me off, it’s true that a woman cut me off. That is, all the premises supporting my claim are true. But what does falsificationism tell me to do? I need to see if there are counterexamples, i.e., times when I get cut off but it’s false that I was cut off by a woman. In other words, I also need to count how often I get cut off by men. To really determine the truth of the matter I need to keep track of how many women cut me off per period of time/distance vs. how many men cut me off per period of time/distance, and then compare the numbers. When I approach the issue with falsificationism in mind, I’m considering both confirming evidence and disconfirming evidence, and I’ll have a much more accurate picture of relative driving ability.
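
To make the bookkeeping concrete, here’s a minimal sketch in Python. The tallies and the drivers’ share of the road are invented numbers for illustration, not real data:

```python
# Hypothetical tallies from the same stretch of driving; all numbers
# are invented for illustration.
cutoffs = {"women": 7, "men": 9}                 # times I was cut off
share_of_drivers = {"women": 0.48, "men": 0.52}  # assumed share of drivers on the road

for group, count in cutoffs.items():
    # Adjust the raw count for how many such drivers are actually out there.
    rate = count / share_of_drivers[group]
    print(f"{group}: {count} cut-offs, exposure-adjusted rate = {rate:.1f}")
```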

To illustrate one more time how confirmation bias and falsificationism work, let’s return to the number-pattern test. The same principles are in play. You were given a series of numbers and asked to identify the principle that describes the ordering pattern. Suppose (unbeknownst to you) the ordering principle is any three numbers in ascending order. How did you go about trying to discover it? You looked at the example that conformed to the rule (suppose it’s 2, 4, 6) and, like most people, thought it had something to do with evenly spaced ascending even numbers. You looked at the sample pattern and tried to make more patterns that confirmed your hypothesis.

For instance, you might have thought, “Aha! The pattern is successive even numbers!” So you tested your hypothesis with {8, 10, 12}. The “game” replied that the set of numbers matches the pattern. Now you have confirmation of your hypothesis that the pattern is successive even numbers. Success! Next, you want to further confirm your hypothesis, so you guess {20, 22, 24}. Further confirmation again! Wow! You are definitely right! Now you type in your hypothesis, “successive even numbers,” but the game says you are wrong. What happened? You just had two instances where your hypothesis was confirmed!

Here’s the dealy-yo. You can confirm your hypothesis until the cows come home. That is, there are infinitely many ways to confirm a hypothesis; in the number game this means there are infinitely many sets of evenly spaced ascending even numbers. However, as Popper noted, rather than try to confirm your hypothesis, what you need to do is ask questions that will falsify it. So, instead of testing number patterns that confirm what you think the pattern is, you should test number sequences that would prove your hypothesis false. That is, instead of plugging in more instances of successive even numbers, you should see how the game responds to different types of sequences like 3, 4, 5 or 12, 4, 78. If the game accepts these sets, then you know your initial hypothesis (i.e., that the pattern is evenly spaced ascending even numbers) is false.

If you test sequences by trying to find counterexamples, you can eventually arrive at the correct ordering principle. But if you only test sequences that further confirm your existing hypothesis, you can never encounter the evidence you’d need to reject it. And if you never reject your incorrect hypothesis, you’ll never get to the right one! Ah! It seems sooooooo simple when you have the answer!
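
Here’s a minimal sketch of the number game in Python, assuming the hidden rule described above (any three numbers in ascending order). It shows why confirming probes can never distinguish your hypothesis from the real rule, while a single falsifying probe can:

```python
# The hidden rule of the number game: any three numbers in ascending order.
def matches_rule(a, b, c):
    return a < b < c

# Confirming probes: these fit BOTH "successive even numbers" and the
# hidden rule, so they can never tell the two hypotheses apart.
print(matches_rule(8, 10, 12))    # True
print(matches_rule(20, 22, 24))   # True

# Falsifying probe: the "evens" hypothesis predicts this should fail.
print(matches_rule(3, 4, 5))      # True -> the "evens" hypothesis is false
```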

As Critical Thinkers, Why Do We Care about Confirmation Bias and Falsificationism?

Most arguments are (hopefully) presented with evidence. However, the evidence presented is usually only confirming evidence. As we learned from the bad-driving example, we not only need positive evidence, we also need to consider disconfirming evidence. And as we learned from the number-pattern example, positive evidence can support any number of hypotheses. To identify the best hypothesis we need to try to disconfirm as many hypotheses as possible. In other words, we need to look for evidence that could make our hypothesis false. The hypothesis that stands up best to falsification attempts has the highest (provisional) likelihood of being true.

As critical thinkers, when we evaluate evidence we should look to see not only if the arguer has made an effort to show why the evidence supports their hypothesis and not another, but also what attempt has been made to consider disconfirming evidence. We should also be aware of confirmation bias in our own arguments.

Falsificationism has a second important implication for both argument construction and evaluation. A good hypothesis or claim has to be falsifiable. That is, there has to be, in principle, some way it could be shown to be false. If your claim or hypothesis is untestable, it’s not worth much. You need to at least be able to identify what would make the claim false.

Here’s an example:

There is an invisible man that follows me everywhere I go.

Ask yourself if there’s any way to disprove this claim. I can give you lots of (not very good) confirming evidence: e.g., I can just feel his presence; he talks to me at night and lets me know he’s there. However, there doesn’t seem to be any kind of evidence you could give me to prove definitively that the invisible man doesn’t exist. The claim is unfalsifiable.

Slanting by Omission: How to Detect and Create BS

Slanting by omission is a subspecies of confirmation bias that is often intentionally used to manipulate an audience. The general idea is to get the audience to focus on confirming evidence. Slanting by omission, as you might have guessed, is when important information is left out of an argument to create a favorable bias.

The anti-vaccine and anti-GMO movements, along with conspiracy theorists and political websites, love this trick. Let’s look at an example:

Headline from an anti-GMO news release: 100% of Wines Test Positive for Glyphosate!!!1111!!!111!!!!!
Let’s take a peek inside and see how they got these results.
The report begins:

On March 16th, 2016 Moms Across America received results from an anonymous supporter which commissioned Microbe Inotech Lab of St.Louis, Missouri that showed all ten of the wines tested positive for the chemical glyphosate, the declared “active” ingredient in Roundup weedkiller and 700 other glyphosate ­based herbicides.

Before even talking about confirmation bias we should revisit our last lesson on personal and group biases. What is the source of the data? “An anonymous supporter.” Seems legit…

What was the sample size? Ten. Ten freakin’ bottles. Now let’s go back to the headline. If you had only read the headline and hadn’t read the actual article (like 99% of the population), how many kinds and brands of wine would you have thought contained glyphosate? 100%!!!!1111!!!1 This is a textbook example of slanting by omission. They’re omitting the fact that only 10 bottles got tested.

Next we return to confirmation bias. We’re provided no methodology for the testing. How do we know that only 10 bottles were originally submitted for testing? Suppose 1,000 bottles were submitted and only 10 came back positive, translating to a rate of 1% containing (trace) amounts of glyphosate. Given no methodology and the fact that the source likely has at least a vested interest in a positive result, it’s not unfair to assume we have an instance of the fallacy of confirming evidence.
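
To see why the missing denominator matters, here’s a quick sketch with hypothetical submission counts; the number of positives is the only figure the press release gives us:

```python
# The headline rate depends entirely on the unreported denominator.
positives = 10  # the only number actually reported
for submitted in (10, 100, 1000):  # hypothetical totals
    print(f"{submitted} bottles submitted -> {positives / submitted:.0%} positive")
```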

But wait! There’s more! Look at the amounts of glyphosate contained in the wine. The highest concentration was 18 ppb. Ppb means parts per billion. In everyday language, this means “chemically insignificant trace amounts.” In concrete terms, 1 ppb is equivalent to about 1 second in 32 years, or 1 inch in 16,000 miles. This is a textbook example of slanting by omission. They haven’t said anything false (here), but the information they omit presents a false image. What we really need to know is whether there are any health concerns for glyphosate at 18 ppb. If there aren’t any, then we shouldn’t care. It turns out that glyphosate is less toxic than baking soda and even salt. Think about how much salt or baking soda you consume in a day; it’s orders of magnitude greater than even 100 ppb.
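
To get a feel for the scale, here’s a rough back-of-the-envelope conversion of 18 ppb into an actual quantity per glass. The 150 mL pour and the ~1 kg/L density of wine are ballpark assumptions:

```python
# Convert 18 ppb (by mass) in wine into micrograms per glass.
ppb = 18
glass_liters = 0.150       # assumed pour size
grams_per_liter = 1000.0   # wine is roughly as dense as water

micrograms = ppb / 1e9 * grams_per_liter * glass_liters * 1e6
print(f"~{micrograms:.1f} micrograms of glyphosate per glass")  # ~2.7 µg
```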

[Meme: “but did you die though”]

The EPA has set the safe daily dose of glyphosate at 30,000 ppb. To reach that level you’d have to drink about 2,500 glasses of wine per day for 70 years. Incidentally, we might flip the Moms Across America argument on its head. If toxic chemicals in wine are the genuine concern, shouldn’t we be concerned about the alcohol in the wine? Alcohol is easily one of the most toxic chemicals we consume, and wine is about 14% alcohol!!!111!!!! Add to that that, *gasp*, 100% of wines contain alcohol!!111!!!!11!!

Common Technique for Slanting by Omission: Neglecting to Consider that the Dose Makes the Poison

Aside from this beautiful example of slanting by omission, there’s another important principle to grasp that will come up throughout the course: the dose makes the poison. This means that toxicity is a matter of dose. There is not a single chemical in the universe that isn’t toxic at some dose. Both oxygen and water will kill you if the concentration or dose is high enough. When someone screams “but it has X in it!” the first questions that should come to mind are “at what dose? And what dose is toxic?” This is a favorite tactic of anti-science and pseudoscience movements. As you become aware of it, you’ll see it everywhere…like you’re Neo in the critical thinking Matrix, or like you’re John Nash as portrayed by Russell Crowe in A Beautiful Mind.
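
The two questions boil down to a single comparison. Here’s a trivial sketch of the check that “but it has X in it!” arguments skip; the threshold reuses the 30,000 ppb figure quoted above, and the point is the comparison, not the particular numbers:

```python
# "It has X in it!" only matters if the measured dose approaches a toxic dose.
def worth_worrying(measured_ppb, toxic_threshold_ppb):
    return measured_ppb >= toxic_threshold_ppb

print(worth_worrying(measured_ppb=18, toxic_threshold_ppb=30_000))  # False
```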


Alternative medicine research is another common area where we (almost always) see slanting by omission. Alternative medicine proponents will often report that their treatment shows some effect. Confirmation! It works! What they neglect to say is that the effect is exactly the same size as the effect we see in the placebo group; that is, in the group that receives a sham treatment (e.g., sugar pills) rather than the real one. An effect that is no bigger than the placebo effect is no treatment effect at all. Throughout the course we’ll take a closer look at particular cases like acupuncture, supplements, and chiropractic.

One final area where slanting by omission is guaranteed to be found is political websites and media. Very often, when reporting about the other team, important contextual information will be left out to create a distorted narrative. And when reporting on one’s own team, websites and media will conveniently forget to mention negative facts. When we evaluate policy we must compare benefits to benefits and costs to costs. Political media outlets and politicians will typically compare the benefits of their favored policy to the costs of the other party’s. This isn’t a fair comparison.

Confirmation Bias and the Scientific Method
We’ll discuss the scientific method in more detail later in the course but a couple of notes are relevant for now. The scientific method seeks to systematically prevent confirmation bias (although, just as in any human enterprise, it sometimes creeps in). There are specific procedures and protocols to minimize its effect. When evaluating arguments, you should keep them in mind. Here are a few:

  • Peer review: When a scientist publishes an article, it is made available to a community of peers for criticism. If the source of the information isn’t peer reviewed, be skeptical.
  • Double-blinding (usually in medical research): To avoid bias, neither the participant nor the tester knows whether the treatment is real medicine or just sugar pills. If a study isn’t blinded, it’s extremely easy for the results to be biased.
  • Control group: To measure whether there’s a genuine effect, you have to know the baseline rate/incidence level. For example, if I’m testing a treatment and 25% get better, before I can say the medicine caused the improvement I need to know how many people got better without treatment. If 25% got better without treatment, then the treatment had no important effect (see the sketch below).
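
Here’s the control-group logic as a sketch, using the hypothetical 25% figures from the example above:

```python
# Measuring a treatment effect against the baseline (control group).
treated_improved = 0.25   # 25% improve with the treatment
control_improved = 0.25   # 25% improve with no treatment (the baseline)

effect = treated_improved - control_improved
print(f"Treatment effect over baseline: {effect:.0%}")  # 0% -> no real effect
```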

Summary
Confirmation bias is when only confirming evidence and reasons are cited and falsifying evidence is ignored. Arguments that are the product of confirmation bias commit the fallacy of confirming evidence. A genuine evaluation of a claim requires meeting the total evidence requirement.

A good way to test a hypothesis or argument is to ask whether it’s possible for all the premises to be true and the conclusion false; that is, we should try to falsify it with counterexamples rather than confirm it. Instead of emphasizing confirming evidence, a good argument also tries to show why counterexamples fail. In other words, a good argument shows why, if all the premises are true, we must accept this particular conclusion rather than another one.

Slanting by omission is when important information (relative to the conclusion) is left out of an argument.

 

HOMEWORK

A. (a) Evaluate whether the following statements or policies are falsifiable. (b) Explain why or why not. (c) If it is falsifiable, suggest what sort of evidence would falsify it.

1. Everything happens for a reason.

2. Thimerosal in vaccines causes autism.

3. GMOs cause cancer.

4. In 1984 as part of the War on Drugs the DEA offered training to local and state police agencies to help them identify drug dealers and couriers (called Operation Pipeline). In the program officers learned to use minor traffic violations as a pretext to pull someone over and search them. They gave them a list of criteria to identify potential drug couriers in airports and train stations. Here is the comprehensive list of criteria officers may use to stop and investigate someone:

Traveling with luggage, traveling without luggage, driving an expensive car, driving a car that needs repairs, driving with out-of-state plates, driving a rental car, driving with “mismatched occupants,” acting too calm, acting too nervous, dressing casually, wearing expensive clothing or jewelry, being one of the first to deplane, being one of the last to deplane, deplaning in the middle, paying for a ticket in cash, using large-denomination currency, using small-denomination currency, traveling alone, traveling with a companion. Troopers were also told to “be suspicious of scrupulous obedience to traffic laws.” (David Cole, No Equal Justice: Race and Class in the American Criminal Justice System, 1999, p. 47.)

Suppose an officer pulls you over and says he has reason to suspect you are a drug dealer. Is his claim falsifiable? 

B. Explain (a) why the following are errors in reasoning and (b) what additional information would be required to correctly evaluate the arguments/claims. Hint: Think about disconfirming evidence and base rate.

1. Students always drink coffee in my class. I just saw 3 students drinking coffee when I walked in the room.

2. I know students like my class because when I asked a few of them they told me so.

3. Last year there were 700 deaths caused by airline travel. There’s no way I’m getting on this plane. I’m gonna drive to Chicago instead. 

4. You shouldn’t take vaccines because they contain toxic chemicals like aluminum and formaldehyde.

5. I’m a really good gambler. Today I won $1000 at the slots! 

6. Trump is untrustworthy because he deliberately misleads voters. For example, he keeps saying that crime rates are at an all-time high. They’ve actually been declining for the last 25 years.

7. Hillary Clinton is untrustworthy. She said that she didn’t have classified emails on her server but she did.

 

C. Meme: Go to http://memegenerator.net/
a. Pick an issue.
b. Design a meme for each side of the issue that commits the fallacy of confirming evidence and/or slants by omission to deliberately mislead.
c. Provide additional information to correct/counter/give context to the point the argument in the meme is making.

Here are some possible topics:

1. Whether Trump will be a great president.

2. Whether Obama was a great president.

3. Whether access to guns causes harm or helps.

4. Whether the ACA is good or bad.

5. Whether the internet has made learning easier. 

6. Pick your own.

Here are some example memes from previous years: 

[Example meme images: antarctic ice, driving, smoking monkey, Obamacare con, Obamacare pro, guns, gun deaths]
