In-Class Lecture on Science vs Pseudoscience

 

This lecture provides a variety of examples of pseudoscience and the errors that lead to them.  Have students identify the errors in reasoning or methodology for each.

Introduction

Michael Shermer: Why People Believe Weird Things TED Talk: Link

Self-Deception and Testing

Expectations and Self-Deception:

  1. The tasters in the first experiment, the one with the dyed wine, described the berries, grapes, and tannins they could detect in the red wine just as if it really were red. Not one of the 54 tasters could tell it was white. In the second experiment, the one with the switched labels, the subjects went on and on about the cheap wine in the expensive bottle, calling it complex and rounded. They called the same wine in the cheap bottle weak and flat.

    Another experiment, at Caltech, pitted five bottles of wine against each other, ranging in price from $5 to $90. Again the experimenters put cheap wine in the expensive bottles — but this time they put the tasters in a brain scanner. While the subjects tasted, the same parts of the brain lit up every time, but with the wine the tasters thought was expensive, one particular region of the brain became more active. Another study had tasters rate cheese eaten with two different wines. One they were told was from California, the other from North Dakota. The same wine was in both bottles. The tasters rated the cheese they ate with the California wine as being better quality, and they ate more of it. Link

Self-Deception in Clinical Trials

Therapeutic Touch, Non-Falsifiability, and Motivated Reasoning

 

Dowsing, Double-Blinding, the Ideomotor Effect, Base Rate, and Affirming the Consequent

  1. Chi power fails:

What’s the Harm?

  1. With California in the grips of drought, farmers throughout the state are using a mysterious and some say foolhardy tool for locating underground water: dowsers, or water witches. The nation’s fourth-largest winemaker, Bronco Wine, says it uses dowsers on its 40,000 acres of California vineyards, and dozens of smaller farmers and homeowners looking for wells on their property also pay for dowsers. Nationwide, the American Society of Dowsers boasts dozens of local chapters, which meet annually at a conference. California Farmers
  2. Facilitated Communication: Rom Houben
  3. This week businessman James McCormick was convicted of fraud after making £50m selling fake bomb detectors to security forces in Iraq and many other countries around the world. The detectors were said to work in a similar way to dowsing rods and were claimed to detect explosives up to one kilometre below the ground. Even more incredibly, they could apparently be used to locate drugs, people, elephants – even $100 bills. They didn’t work and, in all probability, hundreds of lives were lost as a result of misplaced trust in the phony devices. Ideomotor effect: bomb detectors and liver disease detectors.

Nocebo and Placebo

Placebo Effect:  A placebo is an inert substance or sham treatment that nonetheless produces either a positive or a negative response in the patient who takes it. The phenomenon in which a placebo produces a positive response in the patient to whom it is administered is called the placebo effect.

“We found little evidence in general that placebos had powerful clinical effects. Although placebos had no significant effects on objective or binary outcomes, they had possible small benefits in studies with continuous subjective outcomes and for the treatment of pain. Outside the setting of clinical trials, there is no justification for the use of placebos.”

[Is the Placebo Powerless? An Analysis of Clinical Trials Comparing Placebo with No Treatment. N Engl J Med, Vol. 344, No. 21 • May 24, 2001]

Nocebo Effect:  In medicine, a nocebo (Latin for “I shall harm”) is a harmless substance that nonetheless produces harmful effects in the patient who takes it, typically because the patient expects harm. The nocebo effect is the negative reaction experienced by a patient who receives a nocebo.

What should we do about placebo and nocebo in constructing trials?

Placebo and Acupuncture

Memory and Confirmation Bias: https://aeon.co/ideas/bad-thoughts-can-t-make-you-sick-that-s-just-magical-thinking (Links to an external site.)

How to Generate Positive Results in Clinical Trials: You Can Always Find What You’re Looking For

1. Activity: Coin toss.

  • Everyone flip a coin ten times. As a class we only publish the outliers.
  • This is the problem with relying on a single study or small sample size.
  • File drawer effect deprives us of the larger sample
  • Correcting the file drawer effect: Mandatory reporting for trials before they begin.
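The coin-toss activity can also be run as a quick simulation to show how publishing only the outliers distorts the record. This is a minimal sketch; the ten-flip “studies,” the publish-only-extremes rule (8+ heads or 2 or fewer), and the study count are hypothetical choices for illustration:

```python
import random

random.seed(42)

def run_study(flips=10):
    """One 'study': count heads in ten fair-coin flips."""
    return sum(random.random() < 0.5 for _ in range(flips))

# Simulate many classrooms' worth of ten-flip studies.
all_studies = [run_study() for _ in range(10_000)]

# "Publish" only the outliers: 8 or more heads, or 2 or fewer.
published = [h for h in all_studies if h >= 8 or h <= 2]

mean_all = sum(all_studies) / len(all_studies)
high_hits = [h for h in published if h >= 8]
mean_published_high = sum(high_hits) / len(high_hits)

print(f"Mean heads, all studies:           {mean_all:.2f}")              # close to 5 (the truth)
print(f"Mean heads, published 'positives': {mean_published_high:.2f}")   # over 8 -- inflated
print(f"Share of studies published:        {len(published) / len(all_studies):.1%}")
```

With only the extreme results on record, the “published” literature suggests a biased coin even though every coin was fair; mandatory registration of all trials before they begin restores the full denominator.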

2. Surveillance bias:

The more you look, the more you find. It occurs when some patients are followed up more closely or have more diagnostic tests performed than others, often leading to an outcome diagnosed more frequently in the more closely monitored group. link

Examples:

Paralysis in India

Autism Rates

Pareidolia

  1. Pareidolia on Mars
  2. Images
  3. Rotating Mask Illusion (higher cognition over-ride)
  4. Michael Shermer on pareidolia, starts at 8:10 Link

How To Be A Scientific Skeptic:

If it seems too good to be true, it probably is.
Natural News on Milk Thistle (Read comments for fallacy fest)

Bad studies:
Green coffee bean extract

Faith And Healing, And Regression to the Mean:  As reported 
The study abstract 
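Regression to the mean can be sketched numerically: if people seek treatment when a noisy symptom measure is at its worst, a second measurement will look better on average even with no intervention at all. The baseline of 50, the noise of ±10, and the top-10% selection rule below are hypothetical numbers chosen only for illustration:

```python
import random

random.seed(0)

# Each 'patient' has a stable baseline symptom severity plus day-to-day noise.
baselines = [random.gauss(50, 10) for _ in range(10_000)]

def measure(baseline):
    """One noisy measurement of symptom severity."""
    return baseline + random.gauss(0, 10)

day1 = [measure(b) for b in baselines]

# People seek a 'healer' on their worst days: take the most severe 10%.
cutoff = sorted(day1)[int(0.9 * len(day1))]
selected = [i for i, s in enumerate(day1) if s >= cutoff]

# Measure the same people again later, with NO intervention at all.
day2 = [measure(baselines[i]) for i in selected]

avg_at_intake = sum(day1[i] for i in selected) / len(selected)
avg_at_followup = sum(day2) / len(day2)
print(f"Severity when they sought help:      {avg_at_intake:.1f}")
print(f"Severity at follow-up, no treatment: {avg_at_followup:.1f}")  # lower on average
```

Any “healing” performed between the two measurements would get credit for an improvement that was going to happen anyway, which is why a no-treatment control group matters.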

Dosage-Response + Chemikillz!!11!!!1

chemikillz.jpg

Subway Bread/azodicarbonamide
http://www.alternet.org/food/500-other-foods-besides-subway-sandwich-bread-containing-yoga-mat-chemical 
Quantity Matters: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1770067/
Ban Water! It kills!

Single Study Syndrome
http://scienceornot.net/2012/10/23/single-study-syndrome-clutching-at-convenient-confirmation/ 

… “single-study syndrome” … tends to turn up whenever a political agenda is threatened or supported by a specific line of scientific inquiry. “Some new finding, however tentative, gets highlighted while the broader suite of research on a tough subject is downplayed or ignored, …”

Red Flags of Pseudoscience
http://scienceornot.net/science-red-flags/
Example of applying the concepts

X Cures Cancer but the Government is Hiding It!!!11!!
https://thelogicofscience.com/2016/07/04/if-cannabis-and-vitamin-b17-kill-cancer-why-arent-they-approved-by-the-fda-let-me-explain/ 

KEY CONCEPTS:
Elements of a Good Clinical Trial:
1)  Control Group: What success rate would we expect by chance? Unless you compare a treatment/intervention to chance/natural recovery rates, there’s no way to measure the treatment’s efficacy.

2)  Double blinding:  You must prevent both researcher and subject bias.  Double blind is the best way to do this. (Actually, triple blind is, but don’t worry about that for now).

3)  A Third Treatment Group:  For some conditions we know that just about any intervention will be better than no intervention (e.g., suicidal behavior), so rather than having only no-treatment vs. treatment, you need to measure new treatments against the current standard of care/drug.

4)  Objective outcome measures:  Subjective measures are often the product of psychological effects and therefore highly susceptible to cognitive bias. That doesn’t mean they are irrelevant: how people feel is an important component of health. However, a patient’s perception that symptoms have lessened and whether the underlying cause is being treated are two different matters. For this reason it’s important to have objective measures of efficacy (effect on tumor size, rate of viral or bacterial reproduction, effect on wound size, etc.).

5)  Random sample: The sample of subjects should be randomly selected. Avoid self-selection (particularly common in weight-loss and smoking-cessation trials).

6)  Placebo control: The control group should be given a placebo that appears indistinguishable from the actual drug/intervention/treatment. Placebos reduce bias caused by expectation (both by the patient and the researcher).

Things to Look for when Evaluating a Study or the Report of a Study
1) Effect size:  If the effect size is small, the result may be an artifact of some kind of methodological bias in the study, or simply a chance statistical relation in the data set.

2)  Duration of effect: An intervention might have only a short-term effect yet claim a long-term one. This type of problem is typical in weight-loss and alt-med trials. Interventions/treatments that show only a short-term effect are often placebos; i.e., pretty much anything that is alt-med, a supplement, or “integrated” medicine.

3)  Type of study:  Was it a pilot study? A proof-of-concept study? An in vitro study? An animal study? A Phase 1 clinical trial? A Phase 2 clinical trial? A retrospective study? A longitudinal study? An FDA trial? The earlier a study sits in this hierarchy, the greater the chance of positive effects, but these rarely translate into efficacy at the human level under controlled conditions. The more rigorous and well-controlled the study (e.g., an FDA trial), the smaller the effect is likely to be.

4)  Funding:  Be aware of the relationship between the funding source, the research institution, and who gains from positive findings. A vested interest doesn’t mean the results are invalid, only that we should be extra skeptical, particularly if the study doesn’t make its data publicly available.

5)  Reporting: How are the results being reported in the media versus what the actual abstract says?

6)  Context:  A single study carries very little weight and so single studies should be evaluated in the context of other similar studies.  For example, if a study shows a positive result but 90% of other similar studies show no positive results, we should dismiss the one positive study (assuming it is of equal rigor). Avoid single study syndrome (Links to an external site.).
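The point about context can be made concrete: even when a treatment has zero real effect, roughly 1 in 20 studies will cross a conventional p < 0.05 threshold by chance alone, so a lone positive study against a background of null results is exactly what chance predicts. A minimal sketch, with hypothetical group sizes and a simple one-sided cutoff:

```python
import random

random.seed(1)

def null_study(n=100):
    """One study of a treatment with NO real effect: compare two groups
    drawn from the same distribution and call the result 'positive' if
    the treated group beats a one-sided p < 0.05 cutoff."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    # For two means with sigma = 1 and n subjects per arm, the difference
    # has standard error sqrt(2/n); the one-sided 5% cutoff is 1.645 SEs.
    return diff > 1.645 * (2 / n) ** 0.5

results = [null_study() for _ in range(1000)]
positive_rate = sum(results) / len(results)
print(f"'Positive' findings with zero real effect: {positive_rate:.1%}")  # near 5%
```

This is why the one positive study out of ten equally rigorous studies should be read as noise, not as the breakthrough the press release claims.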

7)  Meta-analyses:  Meta-analyses are studies that combine all other similar studies on a topic to evaluate the overall trend. The rule of thumb for evaluating a meta-analysis is “garbage in, garbage out.” In other words, if most of the studies included in the meta-analysis are of poor quality, this will be reflected in its conclusion. When evaluating a meta-analysis, always read the section on inclusion criteria; it will tell you the quality benchmark a study had to meet to be included.

8) Replication:  Has the study been replicated using either the same methods or (even better) have the results been replicated using a different method?  If the answer is “no” then the results should not be viewed as definitive.  Avoid single study syndrome (Links to an external site.).

9)  Impact Factor of the Journal:  One major problem that has arisen with the internet is that anyone with a computer and a few web-design skills can start an “academic journal.” A significant (and growing) number of online journals *look* like legitimate peer-reviewed journals but aren’t; they are ideological platforms. If you find a journal article that seems suspicious, you should (a) look up the name of the journal on Wikipedia to check its origins and (b) google the name of the journal plus “impact factor.” A journal’s impact factor is a rough proxy for its credibility.

10) Check the Citations. (a) Are there citations? If not, be extremely skeptical. (b) Are the citations from a credible source? (See (9).)  E.g., these are not good citations: natural news positive thoughts and healing (Links to an external site.)

Terminology
1) Ideomotor effect: The ideomotor effect has to do with the influence that suggestion has on involuntary or subconscious actions. In motor behavior, there are two parts to the brain activity. The first is the activity that results in the motor activity; the second is the registration of that activity in the conscious mind. The ideomotor effect happens when the second part, the conscious registration, is circumvented. RationalWiki (Links to an external site.)

2) Post-hoc (after-the-fact) rationalization:  People’s typical tendency to rationalize a belief in the face of contravening evidence.

3) Placebo: A placebo is an inert substance or sham treatment that nonetheless produces either a positive or a negative response in the patient who takes it. The phenomenon in which a placebo produces a positive response in the patient to whom it is administered is called the placebo effect.

4) Nocebo:  In medicine, a nocebo (Latin for “I shall harm”) is a harmless substance that nonetheless produces harmful effects in the patient who takes it, typically because the patient expects harm. The nocebo effect is the negative reaction experienced by a patient who receives a nocebo.

Signs of Bad Science

Here is a checklist made up of new and old concepts from the course:

Homework
1. (a) Pick one supplement/herbal remedy/alternative medicine treatment that you or one of your family members uses or one that you are curious about.

(b) Find a website or article that promotes that treatment and read what its supporting evidence is. Then go to quackwatch.org and/or http://www.sciencebasedmedicine.org/ and search for the treatment.  (If your supplement/treatment doesn’t come up on one of those sites, go to Google and type in the name and the word “debunked”.) Read the article:

(i) What does their interpretation of the evidence suggest?

(ii) In light of the concepts we’ve learned today and throughout the class, write a short 1/2 page summary of your findings.  Note: try not to focus too much on the issue of biases:  Focus on comparing the quality of evidence and arguments presented by both sides. In doing so, appeal to the Rough Guide to Spotting Bad Science.

2. How to Make a Fad Diet. Make your own fad diet by following this handy-dandy guide. (You can skip Step 5).