Causal Reasoning 2: Evaluating Causal Claims

Evaluating Causal Claims

In this section we’ll learn how to evaluate causal claims and how to distinguish kinds of causal claims. Depending on whether a cause is physical, psychological, sociological, or economic, we will evaluate it a bit differently. Causal claims can also be either generalizations or claims about specific events. In the former case we’ll need to consider the same criteria we use for any kind of generalization (sample size, representativeness, and selection bias). Once we distinguish the kinds of causal claims, we’ll look at the basic method for evaluating all of them.

Kinds of Causal Claims

1. The milk spilled because I bumped it with my elbow.
2. Reading on a bright screen just before bed can cause insomnia.
3. The baby is crying because he’s grumpy.
4. Text messaging is popular because you can avoid asking people about their day and just get to the point.
5. John is unemployed right now because the economy is weak.
6. Young men act aggressively because they are trying to signal dominance to other males.

All of the above are causal claims. In terms of their scope, causal claims can be divided into two categories: particular causal claims and general causal claims. Particular causal claims are about particular events. Claims 1, 3, and 5 are particular causal claims. General causal claims are just like the generalizations we’ve studied earlier. They are inferences from samples to more general conclusions. Claims 2, 4, and 6 are general causal claims.

We can use the method we are about to learn to evaluate both kinds of causal claims. The only difference is that for general causal claims we also need to do what we do for any kind of generalization: evaluate sample size and representativeness, and look for selection bias and other measurement errors. Once we’ve done that, the rest of the method is the same for both kinds of claims.

Before moving to the method I want to briefly point out one other distinction between kinds of causal claims: physical vs. non-physical. Non-physical causal claims include claims that are behavioral, psychological, sociological, political, or economic. Claims 1 and 2 are physical causal claims. The others are non-physical. The important (general) difference between the two is this: With physical causation the cause (usually) occurs before the effect. Think billiard balls. If one ball moves (the effect) it’s because at a slightly earlier point in time another ball hit it (the cause). With non-physical causal claims (3, 4, 5, and 6), causation doesn’t have to have a temporal order. It can be simultaneous because these types of explanations and claims usually have to do with background conditions that cause behavioral/psychological/social/economic effects. Background conditions needn’t be physical (although they can be). Regardless of whether we’re evaluating physical or non-physical causes, the same method will apply. As we will see, the main difference will be the kinds of errors we can make in distinguishing causation from correlation.

The Basic Method for Evaluating Causal Claims

All causal claims imply the following structure:

(P1) X is correlated with Y.
(P2) The correlation between X and Y is not due to chance (i.e., it is not merely statistical or temporal).
(P3) The correlation between X and Y is not due to some mutual cause Z or some other cause.
(P4) Y is not the cause of X. (Direction of causation).
(C): X causes Y.

A causal argument or explanation is strong to the degree that we are willing to accept (RRAR) each of the four premises (five if it’s a general causal claim). We can think of the evaluation method as a series of acceptability challenges a causal claim must pass. If it passes (P1) then it must pass (P2), and so on until it’s passed each test. In evaluating each premise I need to justify why the premise should be accepted or rejected. Perhaps the best way to see how to apply the method is to work through an example:

Example Argument: The MMR vaccine (X) causes a decrease in measles incidence rates (Y).
(P1) Taking the MMR vaccine is correlated with a lower incidence rate of measles in a population: When vaccination rates go up in a population, incidence rates go down. When vaccination rates go down in a population, incidence rates go up.
(P2) The correlation between taking the vaccine and lower incidence rates is not due to chance or merely statistical. Proposed causal mechanism: Infectious diseases are spread via micro-organisms. Vaccines cause the immune system to produce antibodies that bring about long-term resistance to contact with the associated micro-organism. Applying Mill’s methods also suggests the MMR vaccine is the causal factor: Different morbidity rates in a population can be explained in terms of vaccination rates.
(P3) The correlation between vaccines and incidence rates is not due to some mutual or other cause. For example, improvements in sanitation, nutrition, and hygiene don’t explain all the changes in incidence rates before and after vaccination, since vaccines for different diseases were introduced at different times while those other variables changed at the same time for all of them.
(P4) Lower incidence rates don’t cause people to get vaccines in greater numbers.
(C) Therefore, the MMR vaccine causes lower incidence rates for measles.

Now that we have an example, let’s look more carefully at how we should systematically evaluate each premise.

Evaluating Premise 1: X and Y are Correlated
The method of agreement allows us to infer correlation: Whenever we see that one variable is associated with another, we can infer correlation. Those communities with low incidence of measles, mumps, and rubella (i.e., effect) have high rates of vaccine compliance (i.e., probable cause).

Recall, however, that Mill’s methods are inductive methods, meaning they don’t (and can’t) on their own guarantee that a factor is the cause of some effect. As we know by now, this isn’t a weakness; it is merely a fact about inductive arguments. The first step in evaluating any causal claim is to establish correlation. If two variables aren’t at least correlated then they certainly aren’t causally related.

To establish correlation we might also appeal to the method of concomitant variation. In the MMR vaccine example we see rates of infection rise and fall in proportion to vaccination rates.

You will often hear people say “correlation doesn’t imply causation”, and they are correct. However, sometimes it does! This brings us to the method of evaluating Premise 2 for acceptability…

Evaluating Premise 2: The Correlation between X and Y Is not Due to Chance or Merely Statistical
Correlations are relatively common. The real work is to distinguish mere correlations from genuine causation. Consider some extremely closely correlated variables here. For example, the number of drownings in pools/year correlates almost perfectly with the number of Nicolas Cage movies per year. Does it follow that people drowning in pools causes Nicolas Cage to act in movies (or vice versa)? Probably not.

[Chart: yearly swimming pool drownings plotted against the number of Nicolas Cage films per year, showing a near-perfect correlation]

The interesting question is, how do we know those two variables aren’t causally related but only statistically related? Answer: When there is no plausible causal mechanism linking them. A causal mechanism refers to the way in which a cause and an effect can plausibly be related. So, in order to accept Premise 2, we’ll need to identify a plausible causal mechanism. If there isn’t one, then we should reject the premise and therefore reject the conclusion. The correlation is probably just statistical rather than causal.

We can also distinguish correlation from causation by appealing to the joint method of agreement and difference. If we can show that Y is present whenever X is present and that when Y isn’t present X isn’t present either, we have good reason to suspect a causal rather than statistical relation.
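The joint method can be sketched as a simple presence/absence check over cases. This is my own illustrative helper with hypothetical cases, not data from the text: it passes only when the effect appears exactly when the candidate cause does.

```python
# Sketch of the joint method of agreement and difference: the effect (Y)
# should be present whenever the candidate cause (X) is present, and
# absent whenever X is absent. Cases are invented for illustration.

def passes_joint_method(cases):
    """cases: list of (x_present, y_present) boolean pairs."""
    return all(x_present == y_present for x_present, y_present in cases)

cases = [
    (True, True),    # cause present, effect present
    (True, True),
    (False, False),  # cause absent, effect absent
    (False, False),
]
print(passes_joint_method(cases))                    # True: co-variation holds
print(passes_joint_method(cases + [(False, True)]))  # False: effect without cause
```

A failure here doesn’t disprove causation outright (there may be multiple causes), but a pass gives us the kind of evidence Premise 2 asks for.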

Common Errors in Evaluating Premise 2:
1. Confusing Correlation for Causation, or the cum hoc ergo propter hoc fallacy (“comes with, therefore because of”): This fallacy (usually just called “confusing correlation with causation”) is committed any time someone doesn’t properly evaluate Premise 2. They attribute causation where there is probably only correlation. That is, they have only applied the method of agreement, not the joint method. Until we reasonably demonstrate causation by hypothesizing a plausible causal mechanism and applying the joint method, we cannot accept Premise 2.

Example:

[Chart: organic food sales plotted against autism diagnoses, showing a strong correlation]

Just because the rise in organic food sales correlates strongly with autism rates, it probably doesn’t mean that organic foods cause autism. We’d need to show a plausible causal mechanism and perhaps a controlled study (e.g., showing that where autism rates are absent or lower, there is little consumption of organic food).

2. Post hoc ergo propter hoc (usually called the “post hoc fallacy”): Post hoc ergo propter hoc (“after, therefore because of”) is a subspecies of the correlation/causation fallacy but has to do with temporal order. It usually applies to physical causes (but not always). Just because an event regularly occurs after another event doesn’t mean that the first event causes the second. When I eat dinner I eat my salad first, then my protein; but eating my salad doesn’t cause me to eat my protein.

Symptoms of autism become apparent about 6 months after the time a child gets the MMR vaccine. Because one event occurs after the other, many reason that the prior event is causing the later event. But as I’ve explained, just because an event occurs prior to another event doesn’t automatically mean it causes it. We need a plausible mechanism and, better yet, a controlled experiment (i.e., the joint method) to reasonably suggest causation.

Going back to our example, we can ask: Why pick out as the cause of autism one prior event out of 6 months’ worth of other prior events? Babies with autistic symptoms eat many foods. Why not hypothesize one of the foods? And why ignore possible genetic and environmental causes? Or why not say “well, my son got new shoes 6 months ago (i.e., a prior event); therefore, new shoes cause autism”? Until you can tease out all the possible variables, hypothesize mechanisms connecting them to the effect, and employ the joint method, you can’t attribute causation merely because of temporal order.

Summary of Evaluating Premise 2: Identify a causal mechanism and/or use the joint method of agreement and difference to justify accepting the premise. If neither of these criteria can be met, we should reject the premise. Common errors include confusing correlation with causation (cum hoc ergo propter hoc) and the post hoc ergo propter hoc fallacy.

Evaluating Premise 3: The Correlation between X and Y Is not Due to Some Mutual Cause Z or Some Other Cause.
Here’s what appears to be a causal relation: In humans up to about 20 years of age, shoe size correlates strongly with mathematical ability. The greater the shoe size (X), the higher the math scores (Y). What’s going on? Does having big feet (X) cause better mathematical reasoning (Y)? Probably not. Maybe something else (Z) is causing both phenomena (X and Y). In fact, something is causing both phenomena! As children get older and grow, both their feet and their brains develop. Hence, there is a third variable (aging and its associated development (Z)) that explains why (X) and (Y) correlate strongly.

Sometimes it can appear as though X is causing an effect (Y) when in fact there is a third factor that is causing both that variable (X) and the effect (Y)! In one of my favorite examples, there was a study showing a strong relationship between testicle size (X) and being a good father (Y). Realizing that there isn’t a direct plausible mechanism linking testicle size to parenting skills, researchers hypothesized that men with low testosterone levels (Z) are more likely to be better fathers (Y). The low testosterone levels (Z) plausibly explained both the smaller testes (X) and the fatherly behavior (Y). As we’ll see in the section on P4, the story is even more complicated, but this should suffice to illustrate the point.

Another example: Suppose someone thinks that “muscle soreness (X) causes muscle growth (Y).” That is, you feel muscle soreness before the muscles grow; therefore the soreness must be causing the growth. This is a mistake because it’s actually exercising the muscle (Z) that causes both events.

Sometimes a third variable (Z) doesn’t explain the appearance of both (X) and (Y) but rather (Z) is a more general version of (X). For example, in social psychology there was an interesting reinterpretation of a study that demonstrates this third general cause principle. An earlier study had shown a strong correlation between overall level of happiness (Y) and degree of participation in a religious institution (X). The conclusion was that participation in a religious institution (X) causes happiness (Y).

However, a subsequent study showed that there was a third element — sense of belonging to a close-knit community (Z) — that explained the apparent relationship between religion (X) and happiness (Y). Religious organizations are often close-knit communities so it only appeared as though it was the religious element (X) that caused a higher sense of happiness (Y). It turns out that there is a more general explanation of which participation in a religious organization is only an instance: Belonging to a close-knit community. That is, churches, mosques, and synagogues are very common kinds of close-knit communities.

Summary of Evaluating Premise 3: Make sure that there isn’t a third factor (Z) that underlies the correlation between the proposed cause (X) and the effect (Y) or that there isn’t a more general version of (X) that explains effect (Y).

Evaluating Premise 4: Y Is not the Cause of X and Feedback Loops.
When we evaluate Premise 4 we are trying to establish the direction of causation. Figuring out the direction that the arrow of causation points can sometimes be very tricky for a variety of reasons, and sometimes it can point both ways. For instance, some people say that drug use (X) causes criminal behavior (Y). But in a recent discussion I had with a retired parole officer, he insisted that it’s the other way around. He said that youths with a predisposition toward criminal behavior end up taking drugs only after they’ve entered a life of crime. That is, criminal behavior (Y) causes drug use (X). One might even plausibly argue the arrow can point both directions depending on the person, or maybe even within the same person (i.e., a feedback loop).

For one more example, let’s return to the case of the good fathers with small testes. It was originally hypothesized that low testosterone levels have an effect on men such that they become better fathers. That is, low testosterone (X) causes good fatherly behavior (Y). It turns out that the arrow of causation points the other way. Men who are good fathers wake up at night to take care of the baby. As such, they get little sleep, causing their testosterone levels to fall. And so, the actual direction of causation is that being a good father (Y) causes low testosterone levels (X). These sorts of cases are tricky to figure out and require a bit of background knowledge about the subject. Applying the joint method of agreement and difference also helps.

Especially when it comes to social and psychological explanations of behavior, it’s not uncommon to find the arrow of causation pointing in both directions. For example, treating someone like a child can cause them to avoid taking responsibility for their life. But people who don’t take responsibility for their lives can cause other people to treat them like children. The causal arrow goes both ways and can easily end up creating a feedback loop.

Common errors: When we get the order of causation wrong we call this error “confusing cause for effect” or “mistaking the order of causation”. Philosophers spent many years coming up with those creative names.

Summary of Evaluating Premise 4: Check whether the direction of causation is reversed. You’ll want to provide evidence that this possibility is ruled out. In some cases, there’s no plausible mechanism for Y to cause X. Pointing this out should be sufficient to accept that the causal arrow runs only from X to Y. When there’s a feedback loop, then the causal claim isn’t incorrect, it’s just incomplete. It needs to include the fact that there’s a feedback between the two variables.

Summary of the Basic Method for Evaluating Causal Claims
If, after evaluation, we find all four premises acceptable (five if we’re generalizing from a sample), then we can reasonably conclude that X causes Y.

Premises 2, 3, and 4 are to a large degree about distinguishing correlation from causation and ruling out alternative explanations. As critical thinkers evaluating or producing a causal argument, we need to seriously consider the plausibility of these alternative explanations. Recall that earlier in the semester we looked briefly at Popperian falsificationism and the nature of inductive arguments. We can extend this idea to causation: We can rarely completely confirm a causal relationship, but we can eliminate competing explanations. In doing so, we can reasonably attribute causation to whatever remaining variable is most likely to be the cause.

With that in mind, the implied premises in a causal claim provide us a systematic way to evaluate the claim in steps so we don’t overlook anything important. In other words, when you evaluate a causal claim, you should do so by laying out the implied structure of the argument for the claim and evaluating each premise in turn.

Summary of the Most Common Ways People Reason Incorrectly about Causation
(there are more but these are the most common):

1. Post hoc ergo propter hoc (“after, therefore because of”), usually referred to as just the “post hoc fallacy”. This is the error of confusing causation with temporal order. Just because Y happens after X, it doesn’t follow that X causes Y. For example, every day I eat peanut butter and toast for breakfast and then go to work. It doesn’t follow that eating peanut butter and toast causes me to go to work. This error applies to (P2).

2. Misidentifying the Relevant Causal Factor(s): For any given general causal relationship there are often hundreds of factors common to each causal event. It does not follow that they are all relevant. This is why it’s important to hypothesize a (possible) causal mechanism. For example, suppose you go out to dinner with 8 friends, 3 of whom get sick a few hours after eating. It turns out that the 3 friends are all male. If you were to conclude that they got sick because they are all male, this would be to misidentify the relevant causal factor. It seems unlikely that there is a causal relationship between their gender and their illness. This common variable is irrelevant. More likely their illness has to do with what they ate or drank in common. This reasoning error is usually a consequence of not having very deep knowledge of the topic at hand. A little Wikipedia research can usually at least get you started in the right direction. This error applies to (P2) and (P3).

3. Mishandling Multiple Factors: As with identifying relevant causal factors, for every general causal argument there will often be many antecedent variables involved. Identifying the one that has causal import can be tricky. Again, as above, you want to find ways to falsify competing alternatives. Also, people will often fail to consider alternative causal variables to the one(s) they identify. Again, a little Wikipedia research gets you started. This error applies to (P2) and (P3).

4. Confusing Correlation and Causation (cum hoc ergo propter hoc, “comes with, therefore because of”): Just because two events or variables are correlated or co-occur, it doesn’t necessarily follow that there’s a causal relationship. For example, just because there’s a correlation between sales of organic foods and autism rates, it doesn’t follow that there’s a causal relationship between the two. Often a good way to avoid committing this error is to see if you can come up with a likely causal mechanism. If you can’t, then it’s likely simply correlation. However, you could be wrong, so do a little digging online just in case. This error applies to (P2).

5. Confusing Cause and Effect (aka Direction of Causation). Often it is difficult to disentangle the direction of causation. For example, does participation in high school sports cause the development of a good work ethic and perseverance or do people with a good work ethic and perseverance have a greater tendency to do sports? This error applies to (P4).

6. No Control (see Method of Difference). Often misattributions of causation occur because there is no control group. If we don’t know the natural prevalence rate of a disease or its average natural healing time we cannot reasonably attribute causal power to a purported remedy. The same goes for social policy interventions. Being able to compare an intervention group to a non-intervention group improves our ability to attribute (or dismiss) causation to the intervention. Applying a control helps eliminate errors in (P1), (P2), (P3), and (P4).

Spring 2017: Skip this section for now

Bonus: Conspiracy Thinking and Causation 

A Note on the Nature and (Mis)use of Causal Arguments in Science Denialism and Conspiracy Thinking

It’s important to remember that causal arguments are inductive arguments. This means that by their very nature they are probabilistic and incapable of offering 100% certainty. But, as you should know by now, this doesn’t mean that causal arguments are inherently bad arguments. It simply means we need to recognize their probabilistic nature which means evaluating their strength relative to competing hypotheses (RRAR). This is the heart of the error that pseudoscientists, science denialists, and conspiracy theorists make. They fail to evaluate causal claims relative to competing claims in terms of their likelihood of being true. Their analysis ends after (P1).

The science denialist will argue that, for example, you can’t prove 100% that greenhouse gases are causing global warming. And they are correct! …But not in any meaningful way. They are merely noting a fact about all scientific arguments about causation: They are inductive and so none of them are capable of 100% proof.

This type of reasoning commits (at least) two kinds of errors. The first we are familiar with: a misplaced burden of proof. Given that there is a consensus of experts (and overwhelming positive evidence) for the claim, the likelihood of the claim being true is quite high, and so the burden of proof correctly falls on anyone wishing to deny that greenhouse gases cause global warming. (You can substitute any issue where people argue against a consensus of experts: creationism, anti-vaxxers, people who claim GMOs are harmful to human health, flat earthers, 9-11 “Truthers”, etc.)

Recall also that since causal arguments are probabilistic, they should be evaluated relative to competing hypotheses that also purport to explain the phenomena in question. For example, there is no competing hypothesis that, to a higher degree of probability, explains the phenomenon of global warming. To reasonably deny that greenhouse gases cause global warming requires offering a competing hypothesis that is more likely to be true than the consensus hypothesis. So, the second error of the denialist’s reasoning is to believe the less probable hypothesis over the more probable hypothesis. It is more probable that greenhouse gas emissions are causing global warming than any other competing hypothesis, even if it can’t be proven 100%. So, the reasonable thing is to believe the hypothesis that is most likely to be true (given current evidence).

Let’s decontextualize the issue to show why this is true. If hypothesis A is 10% likely to be true and hypothesis B is 90% likely to be true, what is the reasonable thing to believe? Of course, it still remains true that B could turn out to be false (but only given new and better evidence to the contrary). However, pointing out this possibility doesn’t make it any more reasonable to instead believe what is only 10% likely to be true, because hypothesis A is 90% likely to turn out false. That is, the same reason offered to disbelieve hypothesis B is an even stronger reason to disbelieve a hypothesis that is only 10% likely to be true.

The growing popularity of conspiracy theories across all political views and demographics results from the combination of the nature of social media and our cognitive biases. Humans are hard-wired to find causal patterns in the world. Imagine what life would be like if you couldn’t make causal inferences. Even something as simple as hypothesizing why your hand gets burnt every time you put it on a hot stove would be a mystery. You’d have to retest it each time you walked by the stove. Evolution doesn’t favor the inability to recognize patterns. Millions of years of evolution have favored brains (in both humans and animals) that look for patterns. The problem is that our brains are imperfect. Sometimes we attribute causation where there is only correlation, or we see patterns where there is no pattern. At its heart, conspiracy thinking attributes causation where there is none, or at least where there is no good evidence of causation.

The recipe for a conspiracy theory goes like this. Take any event (usually a bad event), figure out which group(s) benefit from that event (preferably a group with power and/or one that is disliked), and declare that since that group benefited they must have caused the event. Throw in some unexplained details (at least unexplainable to you), draw circles and arrows on pictures, and voila! You have your conspiracy. Be sure to post on social media with the caption “WAKE UP SHEEPLE!!!!!” or “FOLLOW THE MONEY!!!!” so people realize how “woke” you are compared to them.

Let’s build a conspiracy to show just how easy it is. Here’s the fun part: There isn’t a single event in the world, no matter how tragic, that some person or group won’t in some way benefit from. Consider an earthquake. The companies that rebuild and sell emergency supplies are going to benefit. The charities that help will see increased donations. Therefore, they caused it? See? Without positive evidence it’s a terrible line of reasoning. Yet this is exactly how conspiracism works.

Recently, any time there has been a school shooting some conspiracy theorists have claimed “false flag! false flag!” because they believe (probably correctly) that public support for gun regulation will increase if there are shootings in schools. But the fact that government is more likely to follow public desire to limit access to firearms in the face of school shootings doesn’t mean that the government caused the shootings. To show that they did, as we have learned, would require a lot of positive evidence. Not just pointing to what at first glance appear to be anomalies. Causation requires positive evidence otherwise you’re merely pointing out what is true of any event: some people will benefit.

Notice also that we could select more than one group that benefits from gun rights advocates thinking the government will take away their guns. In fact, the group that most benefited from the conspiratorial hysteria were gun manufacturers and gun store owners. Yet gun rights advocates never suggested that those groups caused the school shootings. Hmmm…Coincidence? What are they hiding????1??1 FOLLOW THE MONEY!!!111!!!!1!!!

Formula for Building a Conspiracy
Step 1: Find any negative or tragic event.
Step 2: Figure out which person or groups might benefit in some way from that event.
Step 3: Of those groups/people, pick one that is either powerful or socially unpopular or both.
Step 4: Point out how your chosen group benefits from the event.
Step 5*: Suggest either subtly using innuendo or directly that since the group benefited they must somehow be behind the event.
Step 6: Using a paint editor, draw circles and arrows on pictures to indicate what appear to be anomalies given your limited background information and only indirect familiarity with the event.
Step 7: End with a catchy phrase to show everyone that you know the TRUTH. Popular examples include: “Wake up sheeple” and “Follow the money”

At Step 5, using innuendo is a great preemptive tactic against people who eventually ask you to support your claim with positive evidence. When they ask, just say “hey, I’m not saying for sure that it happened, but it’s possible. I’m just asking questions.” This will allow you to back-pedal and seem like a reasonable person who is woke enough to question the “official” story. Of course, you have no positive evidence for your suspicions, but why should that stop you from insinuating a nefarious plot?

Caveat on conspiracy thinking: The important lesson here isn’t that all conspiracy theories are false or that there aren’t and never have been any genuine conspiracies. The lesson concerns inference to the best explanation, burden of proof, and the requirement to provide positive evidence to support a causal claim. There are and have been real conspiracies throughout history. However, these conspiracies were uncovered because — just like for any other causal claim — there was positive evidence for the conspiracy.

Also, conspiracies are generally quite difficult to keep covered up. Have you ever tried to get more than 4 people to keep a secret? As the number of people involved in a conspiracy grows, the likelihood of its staying secret decreases, since it would be so hard to keep it covered up. Mathematical models have even been published relating the number of people hypothesized to be in a conspiracy to the likelihood of keeping that conspiracy secret.
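A back-of-the-envelope version of this idea (my own illustration, not any specific published model) makes the point: if each of N conspirators independently has a small chance p of leaking per year, the probability the secret survives t years is (1 − p)^(N·t), which collapses quickly as N grows.

```python
# Toy secrecy model (an illustrative assumption, not a published model):
# each conspirator leaks independently with annual probability p, so
# P(secret kept for t years) = (1 - p) ** (n * t).

def secret_survives(n_people, p_leak_per_year=0.001, years=10):
    return (1 - p_leak_per_year) ** (n_people * years)

for n in [5, 50, 500, 5000]:
    print(f"{n:>5} conspirators: P(kept 10 yrs) = {secret_survives(n):.4f}")
```

Even with a tiny per-person leak rate, a conspiracy involving thousands of people is all but guaranteed to unravel within a decade under these assumptions.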

 

Homework:

I. (a) Identify the proposed cause and effect. (b) Suggest alternative explanations. (c) Suggest at least one premise of a standard-form causal argument that the original explanation fails, and name the error (i.e., post hoc ergo propter hoc, cum hoc ergo propter hoc, misidentifying the relevant causal variable, confusing the direction of causation, mishandling multiple causal variables/no control). Some will fail more than one.

Example:

Frank reads a lot and wears glasses. Therefore, reading a lot must cause permanent eyesight damage.

(a) Frank losing eyesight=effect. Reading a lot=cause.

(b) Genetic factors might also explain why Frank’s eyesight is poor.

(c) This argument would fail Premise 2 because it confuses causation with correlation (cum hoc ergo propter hoc).

1. Mary is feeling sick. She ate about 2 hours ago, it must be something she ate.

2. Did you hear about the pipeline rupture last week? I’ll bet those anti-pipeline protestors sabotaged it in order to make everyone afraid of pipelines.

3. I feel hungry. That explains why I feel so tired.

4. People who exercise regularly do better in school.

5. He got his well-paying job because he dresses well.

6. Violent video games cause children to be violent.

7. You shouldn’t smoke the pot. My friend smoked the pot and now he has a mood disorder.

8. A: Why do you keep gripping the door handle and acting nervous when I’m driving?

B: Because you’re a bad driver.

9.  Bob sexually harassed his co-worker because he watches pornography.

10. Women who gain weight don’t get pregnant. [hint: what can cause both effects?]

11. People aren’t buying electric vehicles because there aren’t enough conveniently located charging stations.

II. Putting it All Together

Example:

A supplement company hires a lab to figure out whether Mega MyoPump 3000 Pre-Workout Insanity Blast helps people make gainz. To conduct the study they recruit a cohort of 100 participants by placing flyers around BG. To control for diet, the researchers require that each participant eat the same number of calories. Also, each participant is required to follow the same workout plan.  Over the first 6 weeks, 20% drop out, however, those who stayed in the study gained an average of 5 lbs. The lab concludes that taking Mega MyoPump 3000 Preworkout Insanity Blast causes an average gain of 5lbs.

(a) Suggest how the selection method might affect the results.

(b) Apply the 4 steps of evaluation for causal claims and describe any measurement errors or which premise the argument might fail (and why).

(i) If there is no control, explain why this is a problem and what the control should be to show causation.

(ii) Identify which common error the study probably commits (see Question I for list)

(c) If results are averaged across a group, explain why this might be a problem.

(d) If there is attrition, explain how this might be a problem.

Answer

(a) The selection method isn’t random because there’s self-selection. People entering the study will likely be more motivated than the general population to make gainz. This might not be a major problem considering that most people buying the product might have similar levels of motivation. It will be a problem if most of the people entering the study are beginners because they will likely make gainz regardless of the preworkout.

(b)

P1. There is a correlation between taking the pre-workout and gainz. The per-person average was 5 lbs.

P2. The correlation might not be causal because

(i) there is no control group, so we don't know whether the people entering the study would have made similar gainz with just the diet and workout. A controlled study would have two groups on the same diet and workout program but give only one group the pre-workout.

(ii) The study commits cum hoc ergo propter hoc (it assumes mere association is causation) and mishandles multiple causal variables (it doesn't take into account the effects of the diet and workout plan).

There is no need to evaluate P3 and P4 since the argument has already failed at P2.

Other possible measurement errors: All we know is that the participants who stayed gained an average of 5 lbs. We don't know the composition of those 5 lbs: it could be fat for everyone, or fat for some and muscle for others. We just don't know. Also, the diet specifies only the number of calories, not the composition of the diet, and composition could have an effect.

(c) The averaging is a problem because the study doesn't control for level of experience. Some outliers (people who had never worked out before) might have gained 10 lbs, whereas people with a lot of experience might not have gained anything. The average could still be 5 lbs, given equal numbers in both groups.
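The masking effect of averaging can be shown with a quick numeric sketch. The group sizes and gains below are made up to mirror the example, not real data:

```python
# Hypothetical numbers: averaging across a mixed group can report a
# "typical" gain that describes no actual participant.
beginners = [10, 10, 10, 10]    # newcomers gain a lot regardless of the supplement
experienced = [0, 0, 0, 0]      # experienced lifters gain nothing

all_gains = beginners + experienced
average = sum(all_gains) / len(all_gains)
print(average)  # 5.0 -- yet no individual participant gained 5 lbs
```

The reported "average 5 lb gain" is real arithmetic but fits nobody in the sample, which is why the average alone can't support the causal claim.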

(d) Attrition is a problem because it's likely that the people who weren't seeing any results dropped out, while those for whom it worked stayed in. This would make the average artificially high.
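The same point can be sketched numerically. Again, the gains below are invented for illustration: suppose the dropouts were exactly the participants who saw no results.

```python
# Hypothetical sketch of attrition bias: if non-responders drop out,
# the average computed over those who remain overstates the true effect.
gains = [0, 0, 0, 0, 5, 5, 10, 10]        # true 6-week gains for all 8 enrollees
completers = [g for g in gains if g > 0]  # only those seeing results stay in

true_average = sum(gains) / len(gains)                # over everyone enrolled
reported_average = sum(completers) / len(completers)  # over completers only
print(true_average, reported_average)  # 3.75 7.5
```

Averaging over completers alone doubles the apparent effect, even though nobody's individual result changed.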

Questions 

1. Gary the Guru has been selling his motivational talks all over the country. He claims that everyone who has attended his talk ends up motivated and depression-free. When asked for proof that his method leads to long-term motivated and depression-free people, Gary the Guru says this: “When I talk to people after my talk they are very motivated and feeling optimistic about life. That’s how I know it works.”

(a) Explain how selection bias may lead to measurement errors.

(b) Evaluate Gary the Guru’s causal claims by systematically testing the 4 premises in a causal claim.

(i) If there is no control, explain why this is a problem and what the control should be to show causation.

(ii) Identify which common error the study probably commits (see Question I for list)

(c) If results are averaged across a group, explain why this might be a problem.

(d) If there is attrition, explain how this might be a problem.

2. In order to prove that organic all-natural twig tea cures the common cold, the company selling it conducts the following study. To recruit participants, they put up signs at local hospitals and clinics looking for people with colds. When people come in, they tell them to get lots of rest and to drink the tea 3x a day. The average participant’s cold lasts only 7 days. The study concludes that their proprietary blend organic all-natural twig tea cures the common cold.

(a) Suggest how the selection method might affect the results.

(b) Apply the 4 steps of evaluation for causal claims and describe any measurement errors or which premise the argument might fail (and why).

(i) If there is no control, explain why this is a problem and what the control should be to show causation.

(ii) Identify which common error the study probably commits (see Question I for list)

(c) If results are averaged across a group, explain why this might be a problem.

(d) If there is attrition, explain how this might be a problem.

 

Spring 2017: Skip III and answer IV.

III. Build Your Own Conspiracy!

Formula for Building a Conspiracy
Step 1: Find any negative or tragic event.
Step 2: Figure out which person or groups might benefit in some way from that event.
Step 3: Of those groups/people, pick one that is either powerful or socially unpopular or both.
Step 4: Point out how your chosen group benefits from the event.
Step 5*: Suggest, either subtly using innuendo or directly, that since the group benefited they must somehow be behind the event.
Step 6: Using a paint editor, draw circles and arrows on pictures to indicate what appear to be anomalies given your limited background information and only indirect familiarity with the event.
Step 7: End with a catchy phrase to show everyone that you know the TRUTH. Popular examples include “Wake up sheeple” and “Follow the money.”

At Step 5, using innuendo is a great preemptive tactic against people who eventually ask you to support your claim with positive evidence. When they ask, just say, “Hey, I’m not saying for sure that it happened, but it’s possible. I’m just asking questions.” This will allow you to backpedal and seem like a reasonable person who is woke enough to question the “official” story. Of course, you have no positive evidence for your suspicions, but why should that stop you from insinuating a nefarious plot?

Choose your own tragic event from the news or use some of these suggestions: an airplane crash, the local water supply gets contaminated, a prominent supreme court justice dies, the older buildings in the university burn down, a political columnist gets hit by a car, Ami’s lecture notes vanish from Canvas, etc…

IV. Critical Thinking in the Real World

As part of the war on drugs, Black and Hispanic citizens (especially men) are subject to stop and frisk at disproportionately higher rates than white citizens. Black and Hispanic drivers are also pulled over at disproportionately higher rates as part of the war on drugs.

Police and others explain their behavior by pointing to incarceration rates. That is, the fact that more Blacks are arrested and convicted for drug crimes causes police officers to scrutinize them more. For every white male in prison there are 6 Black males, about 60% of whom are there for drug crimes (mostly minor). It turns out, however, that Blacks and whites use and sell drugs at approximately the same rate (within a percent). Think about direction of causation and feedback loops to explain how it can be true that Blacks and Hispanics are arrested and convicted of drug crimes at higher rates than whites even though there’s no difference in rates of use or selling.
