
Frames vs. reality October 27, 2013

Posted by Ezra Resnick in Economics, Math, Reason.

Suppose your household owns two cars, which are used equally: car A gets 8 miles per gallon of fuel, while car B gets 25. You have the opportunity either to trade in car A for a newer model that gets 10 miles per gallon, or to trade in car B for a model that gets 50 miles per gallon. Which choice would save you more on fuel costs?

This seems like a no-brainer: trading in car A improves its mileage by only 2 mpg (25%), while trading in car B improves its mileage by 25 mpg (100%)! Just for fun, let’s use our brain anyway, and do the math. If each car drives 10,000 miles a year, then upgrading car A would save 250 gallons (consuming 1000 instead of 1250), while upgrading car B would save only 200 gallons (consuming 200 instead of 400) — so choosing to trade in car A would save you 25% more money!

How could our intuition have been so wrong? The cause of the error (dubbed “The MPG Illusion” by psychologists Richard Larrick and Jack Soll) is in the framing of the question. We don’t really care about optimizing the distance we can drive on a fixed amount of fuel; we want to optimize the amount of fuel we consume for the distance we drive. Consider this alternative formulation of the above choice: you can either upgrade car A from .125 to .1 gallons per mile (saving .025 gpm), or upgrade car B from .04 to .02 gallons per mile (saving .02 gpm). This formulation is mathematically equivalent to the original, but the two evoke opposite intuitions — which is quite disturbing, considering the widespread assumption that consumers (and policymakers) will reliably make choices that are in their own rational interests.
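A few lines of Python make the comparison explicit (a minimal sketch, using the same 10,000-mile annual distance assumed above):

```python
MILES_PER_YEAR = 10_000  # annual distance driven by each car (as assumed above)

def gallons_used(mpg):
    """Annual fuel consumption, in gallons, at a given efficiency."""
    return MILES_PER_YEAR / mpg

# Upgrade A: 8 -> 10 mpg; upgrade B: 25 -> 50 mpg.
print(gallons_used(8) - gallons_used(10))   # 250.0 gallons saved per year
print(gallons_used(25) - gallons_used(50))  # 200.0 gallons saved per year

# The gallons-per-mile framing makes the ordering transparent:
print(1 / 8 - 1 / 10)    # about 0.025 gallons saved per mile
print(1 / 25 - 1 / 50)   # about 0.02 gallons saved per mile
```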

When comparing differences in fuel efficiency, it’s clear that one frame (gallons per mile) is superior to another (miles per gallon). This is not always the case, however, as shown by an example due to the economist Thomas Schelling. (Both the following scenario and the previous one are discussed in Daniel Kahneman’s Thinking, Fast and Slow.) Say we are designing a tax code, and are thinking of including a “child credit”: families with children will get a deduction on their taxes. Would it be acceptable for the deduction to be greater for rich families than for poor families? You probably answered with a resounding No.

Now, let’s think about it a different way. Giving a tax deduction to families with children arbitrarily designates a childless family as the default case, but we could just as well rewrite the tax code such that having children is the default case, and childless families would pay a tax surcharge. In that case, would it be acceptable for the surcharge paid by the childless poor to be as great as the surcharge paid by the childless rich? Again, you probably feel strongly that it would not.

The problem is that you cannot logically reject both proposals — since a surcharge that is smaller for childless poor families than for childless rich families is the same thing as a deduction that is smaller for poor families with children than for rich families with children. For instance, a surcharge of $500 for the childless poor and $1000 for the childless rich is equivalent to a deduction of $500 for poor families with children and $1000 for rich families with children.
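The equivalence is easy to verify mechanically. Here is a quick Python sketch; the base-tax amounts are hypothetical, invented purely for illustration:

```python
# Hypothetical base taxes, for illustration only.
BASE = {"poor": 5_000, "rich": 20_000}
AMOUNT = {"poor": 500, "rich": 1_000}  # the deduction/surcharge from the example

def frame_deduction(who, has_children):
    """Childless is the default; families with children get a deduction."""
    return BASE[who] - (AMOUNT[who] if has_children else 0)

def frame_surcharge(who, has_children):
    """Having children is the default; childless families pay a surcharge."""
    baseline = BASE[who] - AMOUNT[who]  # baseline shifted down by the same amount
    return baseline + (0 if has_children else AMOUNT[who])

for who in ("poor", "rich"):
    for kids in (True, False):
        assert frame_deduction(who, kids) == frame_surcharge(who, kids)
print("Identical net taxes in every case: the two frames describe one reality.")
```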

The lesson is not that it’s impossible to design a tax code that burdens the poor less than the rich. The disturbing fact uncovered here is that our intuitions about fairness, like our intuitions about fuel efficiency, are unreliable: they can give contradictory answers to the same question depending on how that question is framed.

Kahneman’s conclusion is stark:

You have moral intuitions about differences between the rich and the poor, but these intuitions depend on an arbitrary reference point, and they are not about the real problem… Your moral feelings are attached to frames, to descriptions of reality rather than to reality itself.

Strong intuition is never a substitute for slow, careful analysis.

By way of contradiction May 5, 2013

Posted by Ezra Resnick in Logic, Math.

An indirect proof (or proof by contradiction) establishes the truth of a proposition by showing that the proposition’s negation implies a contradiction. For example, we can indirectly prove that the square root of 2 is irrational (i.e., it cannot be expressed as a ratio a/b where a and b are integers) by assuming the opposite — that √2 can be expressed as a ratio of integers — and showing that such an assumption leads to a contradiction (e.g., that b is both even and odd).
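For reference, here is one standard way to write that argument out (a LaTeX sketch; presentations vary, and this version phrases the contradiction as a violation of lowest terms):

```latex
\begin{proof}[Sketch]
Assume, for contradiction, that $\sqrt{2} = a/b$ for integers $a, b$ with
the fraction in lowest terms. Squaring gives $a^2 = 2b^2$, so $a^2$ is
even, hence $a$ is even: write $a = 2k$. Substituting, $4k^2 = 2b^2$, so
$b^2 = 2k^2$ is even, hence $b$ is even too. Then $a$ and $b$ share the
factor $2$, contradicting lowest terms. (With a little more work one can
instead derive that $b$ is both even and odd, as mentioned above.) Hence
$\sqrt{2}$ is irrational.
\end{proof}
```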

Some people find indirect proofs unsatisfying, or even a bit suspicious: it feels like we’ve been cheated out of understanding why the proposition is true. Direct proofs seem more intuitive and dependable. This raises the question: Does every proposition that can be proved indirectly have a direct proof as well? Or are there propositions that can be proved indirectly, for which no direct proof exists?

Before attempting to answer that question, let us first consider this humble proposition:

(p) This proposition cannot be proved directly.

We can prove proposition p is true — indirectly. Start by assuming the opposite, that is, assume there exists a direct proof of p. In particular, that means p is true. But p states that there is no direct proof of p — contradicting our assumption. So our assumption must be false; hence p is true.

Let us now attempt to prove the following proposition:

(q) Not all propositions that can be proved indirectly can also be proved directly.

We shall prove the truth of proposition q (you guessed it) indirectly. Assume the opposite: that is, assume any proposition that can be proved indirectly can also be proved directly. Then, since p can be proved indirectly (as demonstrated above), there must also exist a direct proof of p. However, the existence of such a proof contradicts (the proven) proposition p! So our assumption must be false — and q is true.

Ah, but the question remains: Can q be proved directly?

Some amusing (and meaningless) statistics October 8, 2011

Posted by Ezra Resnick in Computer science, Math.

Google’s Ngram Viewer allows you to explore trends in the occurrence of chosen phrases in books over time. For instance, this graph shows the frequencies of the words “religion” (blue), “faith” (red), and “science” (green) in English books during the last two centuries:

This awesome technology can be a great research tool, of course; but allow me to amuse myself instead…

This is not a graph of the quadratic and exponential functions.

This is not a graph of the sine function.

This graph does not demonstrate Moore's Law.

Life-giving statistics April 16, 2011

Posted by Ezra Resnick in Math, Science.

The goal of science (and, presumably, of all rational beings) is to understand the world we live in. Unfortunately, our world is fraught with uncertainty, and our lives are often ruled by chance. We have therefore developed tools for quantifying uncertainty — allowing us to take it into account in our calculations. Statistics are all around us, and we try to use them when making our life decisions. It is well known, however, that human intuitions about probability are very bad — which is why it’s so important for everyone to be mathematically literate, including an understanding of basic probability theory.

One simple yet extremely useful tool that everyone should be familiar with is Bayes’ theorem:

P(A|B) = P(B|A) × P(A) / P(B)

Bayes’ theorem shows how to calculate the conditional probability P(A|B) — the probability that A will occur given that B has occurred — based on the inverse probability P(B|A), along with the unconditional (“prior”) probabilities of A and B. (The theorem is easily derived from the following identity: the probability that both A and B will occur is equal to P(A) multiplied by P(B|A), and also to P(B) multiplied by P(A|B).)

For example, consider a genetic disease that is known to afflict one person out of 100,000. It is possible to test for the genetic marker associated with the disease, but of course the test is not perfect: let’s assume that the false-positive rate is five percent (i.e., 5% of healthy people test positive) and the false-negative rate is one percent (i.e., 1% of diseased people test negative). Still, this seems like a pretty reliable test: it gives the correct result for 95% of healthy people and 99% of diseased people. So, if you took this test and the result was positive, you would probably be seriously worried, thinking that you most likely have the dreaded disease. This is where Bayes’ theorem can be a lifesaver.

Let’s use the theorem to calculate the probability of a person having the disease (A) given that his test result was positive (B). We assumed that P(A) = 1/100,000 (the prior probability of having the disease), and that P(B|A) = 99/100 (the probability of testing positive given that you have the disease). To compute P(B) — the prior probability of testing positive — we need to factor in both true positives and false positives; but since the prior probability of having the disease is so low, the combined value is only marginally higher than the false-positive probability of 5/100.

Putting it all together, we find that P(A|B) — the probability of a person having the disease given that his test result was positive — equals 99/100 multiplied by 1/100,000, divided by (slightly more than) 5/100. The result? Less than 1 in 5,000! In other words, even if your test result was positive, your chances of having the disease are still only about 0.02%. Your risk is twenty times that of the general population, but it’s still highly likely that you are disease-free. This counter-intuitive result is due to the fact that the disease is quite rare to begin with, and consequently the vast majority of positive test results will be false positives.
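The whole calculation fits in a few lines of Python (a sketch, using the rates assumed above):

```python
p_disease = 1 / 100_000      # prior probability of having the disease, P(A)
p_pos_if_sick = 0.99         # P(B|A): the 1% false-negative rate, inverted
p_pos_if_healthy = 0.05      # the 5% false-positive rate

# Prior probability of testing positive, P(B): true positives plus false positives.
p_positive = p_pos_if_sick * p_disease + p_pos_if_healthy * (1 - p_disease)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
p_sick_if_positive = p_pos_if_sick * p_disease / p_positive
print(p_sick_if_positive)  # about 0.000198, i.e., less than 1 in 5,000
```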

In his essay “The Median Isn’t the Message,” Stephen Jay Gould describes the crucial role that scientific and mathematical knowledge came to play in his own personal life, while criticizing people’s “common distrust or contempt for statistics”:

Many people make an unfortunate and invalid separation between heart and mind, or feeling and intellect. In some contemporary traditions, abetted by attitudes stereotypically centered on Southern California, feelings are exalted as more “real” and the only proper basis for action — if it feels good, do it — while intellect gets short shrift as a hang-up of outmoded elitism. Statistics, in this absurd dichotomy, often become the symbol of the enemy. As Hilaire Belloc wrote, “Statistics are the triumph of the quantitative method, and the quantitative method is the victory of sterility and death.”

This is a personal story of statistics, properly interpreted, as profoundly nurturant and life-giving. It declares holy war on the downgrading of intellect by telling a small story about the utility of dry, academic knowledge about science.

After being diagnosed with mesothelioma, a rare and incurable cancer, Gould learned that his condition had a median mortality of only eight months — meaning that half of all people in his situation were dead within that time period. He was stunned for some minutes; but then “my mind started to work again, thank goodness.” As an evolutionary biologist, Gould knew that “variation itself is nature’s only irreducible essence,” while “means and medians are the abstractions.” What he had to do was place himself amidst the variation: figure out whether he was likely to belong to the fifty percent above the median. After a “furious and nervous” hour of research, he concluded that, based on his personal characteristics, his chances were in fact very good. Furthermore, the mortality distribution was “right skewed,” meaning that the lifespans of those who live longer than the eight-month median stretch out over several years.

Attitude and mindset are known to matter when fighting disease. Gould credits his scientific training with giving him the knowledge he needed in order to correctly interpret the statistics, understand his situation, and avoid despair: “From years of experience with the small-scale evolution of Bahamian land snails treated quantitatively, I have developed this technical knowledge — and I am convinced that it played a major role in saving my life. Knowledge is indeed power…”

Stephen Jay Gould died in May, 2002 — twenty years after his mesothelioma diagnosis.

How to do cube roots in your head March 13, 2011

Posted by Ezra Resnick in Math.

The next time you’re at a math-friendly party (or anywhere you don’t mind not being invited back to), ask one of the people nearby to secretly choose any two-digit number, cube it (he or she may use a calculator), and tell you the result. Watch your audience’s jaws drop and their eyes fill with admiration as you immediately name the original number! The cube root of 10,648? That’s 22, of course. How about 658,503? Why, it’s 87. I can go all night.

I was taught this trick by a friend of my uncle’s, the kind of person who always has a marked deck of cards and a fake thumb in his pocket. Five minutes of practice should be enough to master it — all you need to remember are the cubes of the numbers 0 through 9:

0³ = 0
1³ = 1
2³ = 8
3³ = 27
4³ = 64
5³ = 125
6³ = 216
7³ = 343
8³ = 512
9³ = 729

You can deduce the tens digit of your mystery root by knocking the last three digits off of the given cube, and seeing where the leading portion falls among the cubes above. For instance, if you’re left with something smaller than 8 (i.e., the cube is less than 8,000), then the tens digit of your mystery root is 1. If you’re left with something greater than 7 but less than 27, then the tens digit you want is 2. And so on — if you’re left with 729 or more, then the tens digit you’re looking for is 9.

What about the units digit? That’s even easier: it can be deduced directly from the units digit of the given cube. Notice that the units digits of the cube values above are all different — and in most cases, the cube’s units digit is the same as the root digit itself! The four exceptions pair up nicely: 2 and 8 map to each other, as do 3 and 7.

So to put it all together: say the person you want to impress gives you the cube value 175,616. Since 175 is between 125 and 216, the first digit you’re looking for is 5; and since the cube value ends with 6, so must its root — 56 it is.
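In fact, the procedure is mechanical enough to spell out in a short Python sketch:

```python
CUBES = [n ** 3 for n in range(10)]  # 0, 1, 8, 27, 64, 125, 216, 343, 512, 729

# The units digit of a cube determines the units digit of its root:
# most digits map to themselves, while 2 and 8 (and 3 and 7) swap.
UNITS = {c % 10: n for n, c in enumerate(CUBES)}

def cube_root(cube):
    """Recover the two-digit root of a perfect cube, as in the party trick."""
    head = cube // 1000  # knock the last three digits off
    tens = max(n for n in range(10) if CUBES[n] <= head)
    return 10 * tens + UNITS[cube % 10]

print(cube_root(10_648))   # 22
print(cube_root(658_503))  # 87
print(cube_root(175_616))  # 56
```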

I think my work here is done.

Mathematics could save your life December 6, 2010

Posted by Ezra Resnick in Math, Puzzles.

Three prisoners are brought to jail, but the jail is full. So the warden suggests the following procedure: Each of the three prisoners will have a colored hat placed on his head, either white or black or red. (Each color may appear any number of times — they may all get red hats, or two could be white and one black, etc.) Each prisoner will be able to see the others’ hats, but not his own. Without communicating with each other in any way, they must each write down on a piece of paper what color they think their own hat is. If at least one of them guesses correctly, they will all go free; otherwise, they will all be executed. The warden gives the prisoners a few minutes to talk it over before the procedure begins. What should they do to ensure their survival?

I like this puzzle because at first it seems impossible: there is no way for a prisoner to deduce anything about the color of his own hat based on the hats of the others, so what strategy could possibly ensure that at least one of them will guess correctly? The key is to think like a mathematician, and look at the underlying structure of the scenario. Try to solve it on your own before reading on.

Let us assign a number to each of the colors: zero for white, one for black, two for red. Now, consider the sum of the three numbers corresponding to the colors of the prisoners’ hats. We don’t know what this sum is, but each of the prisoners knows two out of its three components. Next, think of the remainder we will get if we divide that sum by three: it must be either zero or one or two. For example, if there are two red hats and one white hat, then the sum is four, and the remainder when dividing by three is one. The idea is for each of the prisoners to be assigned one of the possible remainders, and each will guess the color which, when added to the other two colors he sees, yields a sum whose remainder matches the one he was assigned.

(Illustration: with prisoners zero and one wearing red hats and prisoner two wearing white, the strategy described has prisoner zero guess black, prisoner one guess red, and prisoner two guess black.)

For instance, if prisoners zero and one are wearing red hats, then prisoner two will guess that his hat is black (bringing the sum to five and the remainder to two). He could be wrong, of course — but exactly one of the prisoners will always be right: the one who was assigned the remainder that matches the actual sum of the three colors. In the above case, if prisoner two’s hat is white then prisoner one will guess correctly, and if prisoner two’s hat is red then prisoner zero will guess correctly.

In case you were worried, this strategy can naturally be generalized for n prisoners and n hat colors.
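A brute-force check in Python confirms that exactly one prisoner guesses correctly under every possible assignment of hats (a small sketch, written so that the number of prisoners and colors can be varied):

```python
from itertools import product

def guesses(hats):
    """Prisoner i guesses the color that would make the total sum equal i mod n."""
    n = len(hats)
    return [(i - (sum(hats) - h)) % n for i, h in enumerate(hats)]

n = 3  # three prisoners, three colors: 0 = white, 1 = black, 2 = red
for hats in product(range(n), repeat=n):
    correct = sum(g == h for g, h in zip(guesses(hats), hats))
    assert correct == 1  # exactly one prisoner is right, in every case
print("Checked all", n ** n, "assignments.")
```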

Study math, get rich July 6, 2010

Posted by Ezra Resnick in Math, Puzzles.

You receive a letter informing you that you have been selected as one of 32 lucky people around the world to participate in an experiment, and you may win a million dollars! The rules are simple: by midnight tonight, if exactly one of the 32 participants calls a certain phone number (it doesn’t matter who), you will each receive a million dollars. In any other event (more than one person calls the number, or no one calls), none of you will get anything. Assume the offer is genuine and all participants received the letter on the same day, but you have no way of tracking down the others. What should you do?

The solution is rather straightforward. In order to maximize the chances of exactly one participant making the phone call, each participant should decide to make the call with probability 1/32. For instance, you could roll a die with 32 faces (labeled 1 through 32) and make the phone call only if the number 1 comes up. Another option is to toss five coins and make the call only if they all come up heads (2⁵ = 32). Problem solved. A more interesting question, however, is this: assuming that all the participants use the optimal strategy, what are the odds that they will actually win the money? And what would be the odds of winning if there were a thousand participants? Or a million?

What I like about problems like this is that the mathematics of the solution are quite simple, and yet the result is counter-intuitive, demonstrating (yet again) how bad human intuition about probability is. In this case, I would have guessed that the odds of winning are pretty small, and that they decrease significantly as the number of participants grows. With a million participants, the chances that all the dice rolls will work out so that exactly one participant makes the call seem pretty slim. Well – let’s think it through. Assume for the moment that there are only two participants, so they each toss a coin and make the phone call only if the result is heads. What are the odds of their winning the money? There are four (equally probable) outcomes: either they both get heads, or they both get tails, or the first participant gets heads and the second gets tails, or the first participant gets tails and the second gets heads. In the first of the four outcomes, they will both make the phone call, and in the second, neither of them will – in both cases, they lose. In either of the latter outcomes, however, exactly one of them makes the call and they win. So the odds of winning in this case are 50%. Not bad, but surely the odds decrease rapidly as we add participants?

Assume there are now three participants. If they all follow the strategy we described, there are 27 (3³) possible outcomes, and in 12 of them exactly one participant makes the call, so the odds of winning are about 44%. In the general case, let n denote the number of participants, and assume they each use a die with faces labeled 1 through n. The chances of winning due to participant #1 being the only one to make the phone call are 1/n times ((n−1)/n)ⁿ⁻¹ (participant #1 must roll a 1, and all the other n−1 participants must roll any of the n−1 values other than 1). The odds of winning due to participant #2 being the only one to make the phone call are the same as for participant #1, as they are for any other participant, so for the total probability of winning we must multiply the previous expression by n, giving the final answer: ((n−1)/n)ⁿ⁻¹.

Plugging n=2 or n=3 into this formula gives the results mentioned above; plugging in n=32 gives a winning probability of about 37% – still pretty good! What about larger values of n? Those who know some calculus will recognize that the above formula converges to 1/e as n grows (where e is Euler’s number, namely 2.718…), so in fact, the odds of winning remain close to 37% no matter how large the number of participants. (Interestingly, the number 1/e pops up in many other, seemingly unrelated, probabilistic questions.)
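Both the formula and its limit are easy to check in Python (a small sketch; the simulation is just a sanity check):

```python
import random

def p_win(n):
    """Exact probability that exactly one of n participants makes the call."""
    return ((n - 1) / n) ** (n - 1)

def simulate(n, trials=100_000):
    """Monte Carlo estimate: each participant calls with probability 1/n."""
    wins = sum(sum(random.random() < 1 / n for _ in range(n)) == 1
               for _ in range(trials))
    return wins / trials

for n in (2, 3, 32, 1_000, 1_000_000):
    print(n, p_win(n))  # 0.5, 0.444..., 0.3737..., creeping down toward 1/e
print(simulate(32))     # about 0.37, in agreement with the formula
```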

So remember (especially when visiting a casino): don’t trust your intuitions when it comes to probability!