
Essential tools to think with

December 18, 2010

Posted by Ezra Resnick in Education, Reason, Science.

Despite appearances, the two tabletops are identical in both size and shape.

I’m preparing a special lesson for my high school students on the subject of critical thinking and the scientific method. In the first half of the lesson, I aim to undermine the students’ certainty about what they think they know, by demonstrating the many types of errors and biases we are all prone to. I’ll start with optical illusions, like Roger Shepard’s “Turning the Tables.” Carefully measuring the two tabletops shows that they are identical in both size and shape, even though we feel very strongly that this is not the case.

Exactly one of the doors has a new car behind it. After you choose door 1, the host opens door 3 to reveal a goat. He then offers you the chance to switch to door 2. Should you?

We know we cannot always trust our senses, but surely our intuitions are better in more theoretical areas? Try this: if a bat and a ball together cost $1.10, and the bat costs $1.00 more than the ball, how much does the ball cost? Sounds easy, and it is, but a majority of university students give the wrong answer (hint: it’s not 10 cents). Our intuitions are especially bad when it comes to probability. I’ll present the famous Monty Hall problem, whose correct solution (switching doors doubles your chances of winning) many smart people refuse to accept even after it’s explained to them. I may also mention the gambler’s fallacy — if ten consecutive tosses of a fair coin come up heads, is tails more likely to come up next? — which makes lots of money for casinos.
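For the bat-and-ball puzzle, writing the equation down dispels the intuition: if the ball costs x, then x + (x + 1.00) = 1.10, so 2x = 0.10 and the ball costs 5 cents. As for Monty Hall, students who won’t accept the analysis can be invited to try it. Here’s a minimal simulation sketch (in Python; the door-numbering convention is my own) that plays the game many times with each strategy:

import random

def play(switch, trials=100_000):
    # Simulate one strategy over many games; return the win rate.
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)         # the door hiding the car
        choice = random.randrange(3)      # the contestant's first pick
        # The host opens a goat door that the contestant didn't pick.
        opened = random.choice([d for d in range(3) if d not in (choice, car)])
        if switch:
            # Switch to the one remaining closed door.
            choice = next(d for d in range(3) if d not in (choice, opened))
        wins += (choice == car)
    return wins / trials

print("stay:  ", play(switch=False))   # comes out near 1/3
print("switch:", play(switch=True))    # comes out near 2/3

Staying wins about one time in three, switching about two times in three, exactly as the analysis says.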

Instead of jumping to conclusions based on intuition, we can attempt to construct formal logical inferences: if all Greeks are handsome, and Socrates is a Greek, then Socrates is handsome. Do we ever make mistakes in our use of logic? Consider this: if some men are doctors, and some doctors are tall, can we conclude that some men are tall? (No.) Or this: if we assume that all reptiles lay eggs, and we know that alligators lay eggs, does it follow that alligators are reptiles? (Nope.)
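For the doctors example, the quickest way to see why the inference fails is to exhibit a countermodel: a tiny world where both premises are true and the conclusion is false. A sketch (Python again, with invented names):

men = {"bob"}
doctors = {"bob", "carla"}
tall = {"carla"}

assert men & doctors          # some men are doctors (Bob)
assert doctors & tall         # some doctors are tall (Carla)
assert not (men & tall)       # and yet no man is tall
print("Both premises hold, but the conclusion fails.")

The alligator example fails the same way: egg-laying extends beyond reptiles (birds, for instance), so laying eggs cannot certify that something is a reptile.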

I next turn to the fallacy of assuming that correlation implies causation. If the percentage of black people among those convicted of violent crimes is significantly greater than the percentage of blacks in the population, can we conclude that blacks are inherently more violent? No: it could be that most judges are white and some are prejudiced, or that blacks are poorer on average and poverty causes crime, or that blacks are treated as second-class citizens, creating a self-fulfilling prophecy, etc. Another example: let’s say surveys show that people who use olive oil are less likely to develop heart disease than people who use other oils. Does it follow that olive oil helps prevent heart disease? Consider that olive oil is more expensive than other oils, so people who buy olive oil are more likely to belong to higher socio-economic groups — which implies a healthier diet in general, a higher chance of belonging to a gym, more money to spend on health care, etc. Of course, this does not prove that olive oil does nothing to prevent heart disease: merely that the causal connection cannot be deduced from the correlation alone.
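One way to drive the point home is to simulate a confounder directly. In the sketch below (Python; all the numbers are invented purely for illustration), wealth raises both the chance of buying olive oil and the chance of staying heart-healthy, while the oil itself does nothing:

import random

random.seed(0)
population = []
for _ in range(100_000):
    wealthy = random.random() < 0.3
    # Wealth drives both behaviors; olive oil has no effect on health here.
    olive_oil = random.random() < (0.7 if wealthy else 0.2)
    disease = random.random() < (0.05 if wealthy else 0.15)
    population.append((olive_oil, disease))

def disease_rate(uses_oil):
    group = [d for oil, d in population if oil == uses_oil]
    return sum(group) / len(group)

print("disease rate, olive oil users:", disease_rate(True))    # about 9%
print("disease rate, other oils:     ", disease_rate(False))   # about 14%

By construction the oil is useless, yet its users are measurably healthier: the correlation is real, the causation is not.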

Perhaps the most subtle causes of error are cognitive biases. I’ll talk about confirmation bias: people tend to give more weight to evidence that supports what they already believe; they tend to seek data that confirm their hypotheses instead of attempting to disprove them; they tend to remember examples that support their theories and forget those that don’t. One study was conducted on two groups: one of people in favor of capital punishment, the other of people against it. All subjects were shown the same set of data, which included evidence and argument for both sides of the issue. Participants from both groups tended to report that the data had caused them to strengthen their original beliefs! So much for objectivity. Confirmation bias is also what keeps many superstitions alive: people notice those times when unlucky events happen on the 13th floor, and disregard the times when they don’t, or when they happen on other floors.

I’ll conclude the first part of the lesson with the fallacy of appealing to authority: “Einstein was a genius, so whatever he said must be so;” “If it’s written in the Bible, it must be true;” “Democracy is the best form of government, because that’s what they taught us in school;” “My teacher says that appeals to authority are logical fallacies.” The truth or falsity of a claim is not affected by the authority of the claimant; even Einstein made mistakes.

So far, then, we’ve seen that our senses can deceive us; our intuitions are often wrong; we are prone to logical fallacies and cognitive biases; it’s difficult for us to be objective; and even smart people can be mistaken. Is it impossible to obtain reliable information about the world?

Enter the scientific method. I’ll present the basic model of gathering evidence, offering a hypothesis to explain the observed phenomenon, making predictions based on the hypothesis, testing those predictions, and revising the hypothesis based on new data. This method is not infallible, of course, but science makes use of many mechanisms for minimizing errors and correcting them: transparency, documentation, reproducibility, peer review, etc.

To further explore the nature of scientific theories, I’ll use Carl Sagan’s example of the fire-breathing dragon in my garage: when my friend asks to see it, I reply that it’s invisible. She then suggests that we spread flour on the floor and look for tracks, but I explain that this dragon floats in the air. And there’s no point in trying to touch it, either, because it’s incorporeal. At this point my friend would hopefully begin to wonder what makes me think the dragon exists at all. The dragon hypothesis is unscientific because it’s unfalsifiable — there is no evidence that could possibly disprove it. This makes it useless: if there could never be any detectable difference between a world in which the dragon exists and one where it doesn’t, why should we care?

Another useful heuristic for judging scientific theories is Occam’s razor: all other things being equal, the simplest explanation of the facts is usually the right one. In other words, we should strive to minimize unnecessary or arbitrary assumptions. If I hear the clacking of hoofs coming from inside a race track, for instance, it could theoretically be a zebra escaped from the zoo, or a recording designed to fool me, or an alien language — but absent any evidence for those hypotheses, it makes sense to tentatively assume that it’s horses I’m hearing. It’s important to stress that all scientific knowledge is provisional: we can never achieve absolute certainty, but our confidence in a hypothesis grows with the amount of supporting evidence. A scientific theory is a hypothesis of sufficient explanatory power that has withstood all attempts to falsify it. But all theories are always open to revision based on new evidence.

Homology of forelimbs in mammals is evidence of evolution.

I’ll give two examples of successful scientific theories and the evidence supporting them. Firstly, how do we know the Earth is spherical? Thousands of years ago, people had already noticed that the stars in the night sky look different from different locations, that the sails of a ship can be seen on the horizon before its hull, and that the shadow cast by the Earth on the moon during a lunar eclipse is round. More recently, of course, people have circumnavigated the globe and even seen it from space. My second example is the theory of evolution, supported by evidence from fossils, comparative anatomy, the geographical distribution of plants and animals, genetics (DNA), artificial selection (e.g. dog breeding), observed natural selection (e.g. antibiotic-resistant bacteria), and more.

Let’s now try to apply the scientific method to medical testing. Does the fact that my condition improved after taking a certain drug or undergoing a certain therapy mean that the drug or therapy is inherently effective? No: we must take into account the placebo effect, confirmation bias, the possibility of coincidence, etc. We can, however, attempt to neutralize those factors — and raise our confidence in a treatment’s effectiveness — by performing controlled double-blind trials, and analyzing the results statistically.
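A toy simulation shows why the controlled design works (Python once more; the baseline and effect sizes are made up for illustration). Both arms experience the same placebo effect, so only a genuine effect opens a gap between them:

import random

random.seed(1)

def trial(real_effect, n=10_000):
    # Randomly assign each subject to drug or placebo; return improvement rates.
    counts = {"drug": [0, 0], "placebo": [0, 0]}    # [improved, total]
    for _ in range(n):
        arm = random.choice(["drug", "placebo"])
        p = 0.30 + (real_effect if arm == "drug" else 0.0)   # 0.30 = placebo baseline
        counts[arm][0] += random.random() < p
        counts[arm][1] += 1
    return {arm: improved / total for arm, (improved, total) in counts.items()}

print("ineffective drug:", trial(0.00))   # both arms near 0.30
print("effective drug:  ", trial(0.15))   # drug arm near 0.45

Blinding matters because neither the subjects nor the experimenters can let their expectations leak into the measurements; random assignment takes care of everything else.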

Having previously rejected appeals to authority, it’s important to point out the difference between authority and expertise. While the statement “90% of mathematicians recommend Acme toothpaste” does not carry any special weight, the statement “90% of dentists recommend Acme toothpaste” does. When we (provisionally!) accept the consensus of experts on matters of fact in their field of expertise, we are not doing so merely on the basis of their authority: we are relying on the scientific method itself, which includes all the self-correcting and error-minimizing mechanisms mentioned above.

The first smallpox vaccine was developed in 1796; smallpox was eradicated in 1979.

The strongest argument in favor of the scientific method is that it gets amazing results. Science can land a spacecraft on Mars, and can predict the exact time of an eclipse a thousand years in the future. Closer to home, the smallpox virus — which killed hundreds of millions of people over 10,000 years — was eradicated in 1979 after a successful vaccination campaign, the culmination of centuries of scientific effort. We often take such things for granted, but until the beginning of the 20th century, human life expectancy was only 30-40 years. It’s now around 80 years in the developed world.

This is a good place to give the students a chance to practice their own critical thinking skills on real-world examples, like the assertions of faith-healers or astrologers. What questions should we ask before accepting such claims? Are there any fallacies or biases that may be getting in the way? What data support these hypotheses, and are there more parsimonious ways of explaining them? What experiments could we perform to help us decide?

In summation: it’s essential to evaluate all ideas critically. The smartest people we know could be wrong; we should never accept something just because someone said so, or because it’s tradition, or because it feels right intuitively. Our level of confidence in a proposition ought to scale with the level of available evidence in its support. Being skeptical of extraordinary claims is a good default position — but we must also make sure to keep an open mind and be willing to consider strange and unintuitive ideas (quantum physics, anyone?). We must recognize that there are many things we do not know, and that some of what we think we know may be mistaken. It is possible, however, to expand our knowledge of the world, and to correct previous mistakes, using the scientific method.

I’ll conclude with an excerpt from Carl Sagan’s The Demon-Haunted World:

Except for children (who don’t know enough not to ask the important questions), few of us spend much time wondering why Nature is the way it is; where the Cosmos came from, or whether it was always here; if time will one day flow backward, and effects precede causes; or whether there are ultimate limits to what humans can know. There are even children, and I have met some of them, who want to know what a black hole looks like; what is the smallest piece of matter; why we remember the past and not the future; and why there is a Universe.

Every now and then, I’m lucky enough to teach a kindergarten or first-grade class. Many of these children are natural-born scientists — although heavy on the wonder side and light on scepticism. They’re curious, intellectually vigorous. Provocative and insightful questions bubble out of them. They exhibit enormous enthusiasm. I’m asked follow-up questions. They’ve never heard of the notion of a ‘dumb question’.

But when I talk to high school seniors, I find something different. They memorize ‘facts’. By and large, though, the joy of discovery, the life behind those facts, has gone out of them. They’ve lost much of the wonder, and gained very little scepticism. They’re worried about asking ‘dumb’ questions; they’re willing to accept inadequate answers; they don’t pose follow-up questions; the room is awash with sidelong glances to judge, second-by-second, the approval of their peers. . . .

There are naive questions, tedious questions, ill-phrased questions, questions put after inadequate self-criticism. But every question is a cry to understand the world. There is no such thing as a dumb question.

Bright, curious children are a national and world resource. They need to be cared for, cherished, and encouraged. But mere encouragement isn’t enough. We must also give them the essential tools to think with.

I won’t be giving the lesson for a while yet, so I’d be happy to hear any comments, criticisms and suggestions.


Comments

1. saar - December 18, 2010

Cool stuff. I understand you plan an entire semester course … if it’s for one lesson, you might want to reconsider …

2. Ezra Resnick - December 19, 2010

Based on a suggestion from richarddawkins.net, I intend to show the famous “selective attention test” video during the discussion of optical illusions.

