## Three philosophers
*March 22, 2014*

*Posted by Ezra Resnick in Logic, Philosophy, Puzzles.*

Three philosophers are sitting on a bench.

- The homophobic philosopher is a deontologist.
- The free will libertarian subscribes to the Copenhagen interpretation of quantum mechanics.
- The consequentialist is addicted to heroin.
- The philosopher sitting next to the antisemite is a virtue ethicist.
- The hard determinist is sitting next to the cocaine addict.
- The philosopher sitting in the middle subscribes to the many-worlds interpretation of quantum mechanics.
- The sexist philosopher is a free will compatibilist.
- The philosopher subscribing to Bohm’s interpretation of quantum mechanics is not the methamphetamine addict.
- The antisemitic philosopher is not sexist, the sexist philosopher is not homophobic, and the homophobic philosopher is not antisemitic.
- No philosopher is addicted to more than one substance.

What is the antisemitic philosopher’s position on free will?
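Puzzles of this kind can be checked mechanically. Here is a brute-force sketch (the seat numbering and attribute encoding are my own, not from the post): enumerate every way of assigning the five attribute sets to the three seats, keep only the assignments satisfying all the clues, and report the antisemite's position on free will.

```python
# Brute-force sketch: seat 0 is the left end, seat 1 the middle, seat 2 the right end.
from itertools import permutations, product

ETHICS = ('deontologist', 'consequentialist', 'virtue ethicist')
FREE_WILL = ('libertarian', 'compatibilist', 'hard determinist')
QM = ('Copenhagen', 'many-worlds', 'Bohm')
PREJUDICE = ('homophobic', 'sexist', 'antisemitic')
DRUG = ('heroin', 'cocaine', 'methamphetamine')

def adjacent(a, b):
    """Two different seats next to each other on the bench."""
    return abs(a - b) == 1

answers = set()
for E, F, Q, P, D in product(*(permutations(t) for t in
                               (ETHICS, FREE_WILL, QM, PREJUDICE, DRUG))):
    # E[i] is the ethics of the philosopher in seat i, and so on.
    anti = P.index('antisemitic')
    if (E[P.index('homophobic')] == 'deontologist'          # clue 1
            and Q[F.index('libertarian')] == 'Copenhagen'   # clue 2
            and D[E.index('consequentialist')] == 'heroin'  # clue 3
            and all(E[s] == 'virtue ethicist'               # clue 4
                    for s in (anti - 1, anti + 1) if 0 <= s <= 2)
            and adjacent(F.index('hard determinist'),       # clue 5
                         D.index('cocaine'))
            and Q[1] == 'many-worlds'                       # clue 6
            and F[P.index('sexist')] == 'compatibilist'     # clue 7
            and Q.index('Bohm') != D.index('methamphetamine')):  # clue 8
        answers.add(F[anti])

print(answers)  # the antisemitic philosopher's position on free will
```

Clues 9 and 10 hold automatically, since each attribute set is assigned as a permutation over distinct seats.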

## No servants were harmed in the making of this puzzle
*March 24, 2012*

*Posted by Ezra Resnick in Computer science, Puzzles.*

Twelve hours before the Emperor’s great banquet is to begin, the Master of Spies comes forward with distressing news: there is good reason to suspect that (exactly) one of the 1024 barrels of wine in the imperial wine cellar has been poisoned. The poison believed to have been used is odorless and otherwise undetectable, but so deadly that drinking even a small amount is certain to cause death within twelve hours.

The Emperor is determined to identify the poisoned barrel before the banquet begins — but how? It occurs to him that he could have 1024 of his servants each take a sip from a different barrel, and see which one dies. However, there is much work to be done preparing for the banquet, so it would be preferable to minimize the number of drunk servants. How can the poisoned barrel be positively identified while utilizing the smallest possible number of servants? (Note that the Emperor doesn’t care how many servants end up dying, he just wants to minimize the number of servants drinking wine.)

Try to solve it yourself before reading on.
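One standard approach, sketched in code (this may or may not be the post's intended solution): since 2^10 = 1024, ten servants suffice. Number the barrels 0 through 1023, and have servant s sip from every barrel whose binary representation has bit s set. Twelve hours later, the set of dead servants spells out, in binary, the number of the poisoned barrel.

```python
# Binary-labeling sketch: 10 servants identify 1 poisoned barrel out of 1024.
NUM_BARRELS = 1024
NUM_SERVANTS = 10  # log2(1024)

def servants_for_barrel(barrel):
    """The servants who sip from this barrel: those whose bit is set."""
    return {s for s in range(NUM_SERVANTS) if barrel >> s & 1}

def identify_poisoned(dead_servants):
    """Reconstruct the poisoned barrel's number from the set of dead servants."""
    return sum(1 << s for s in dead_servants)

# Sanity check with an arbitrary barrel:
poisoned = 618
dead = servants_for_barrel(poisoned)  # exactly these servants die
print(identify_poisoned(dead) == poisoned)
```

Ten is also optimal: each servant's fate carries one bit of information, so nine servants can distinguish at most 2^9 = 512 outcomes.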

## But does he know that I know that he knows?
*December 25, 2010*

*Posted by Ezra Resnick in Game theory, Puzzles.*

Five children have been playing together, and three of them have gotten mud on their foreheads. Each child can see mud on others but not on himself. When the children come home, their father says that at least one of them has mud on his forehead; he then asks if anyone can deduce whether there is mud on his own forehead. The children look around, but no one answers. So the father asks again: Does anyone know whether he has mud on his own forehead? Silence. The father then repeats his question a third time, at which point all three dirty children immediately step forward and proclaim that their foreheads must be muddy.

The first (and simplest) puzzle is: How did the kids know? And why did the father have to ask three times?

The solution is inductive: begin by considering what would happen if there were only one muddy child, Alex. He would see that all the others are clean, so when the father states that at least one child is dirty, Alex would immediately know it must be him. Therefore, if there were only one dirty child, he would come forward after the father’s first query. Next, suppose there are *two* muddy children, Alex and Bob. Alex sees that Bob is dirty, and vice versa. As far as Alex knows, Bob could be the only dirty one, and as far as Bob knows, Alex could be the only dirty one; so neither of them steps forward after the father’s first query (each expecting the other to do so). However, when Alex sees that Bob did not immediately come forward, and since we already concluded that if there were only one muddy child he would have identified himself right away, Alex can deduce that his own forehead must be muddy as well. Bob reasons likewise, and so they both step forward when the father asks a second time. Applying the same logic to our original scenario shows that the three dirty children can identify themselves as soon as they see that no one came forward after the second query — so in this case, the third time is the charm.
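The induction can be simulated directly. A possible-worlds sketch (the encoding is mine): a "world" is a bitmask saying which of the five children are muddy; a child considers possible every world that matches what he sees, and each round of public silence eliminates the worlds in which someone would have answered.

```python
# Possible-worlds simulation of the muddy children puzzle.
N = 5
ACTUAL = 0b00111  # children 0, 1, 2 are muddy

def knows_own_state(world, child, possible):
    """True if, in `world`, `child` can deduce his own forehead: every world
    still considered possible that agrees with what he sees (all bits except
    his own) also agrees on his own bit."""
    seen = world & ~(1 << child)
    alts = [w for w in possible if w & ~(1 << child) == seen]
    return len({w >> child & 1 for w in alts}) == 1

# The father's announcement rules out the all-clean world.
possible = set(range(1, 1 << N))
query = 0
announcers = []
while not announcers:
    query += 1
    announcers = [c for c in range(N) if knows_own_state(ACTUAL, c, possible)]
    if not announcers:
        # Public silence eliminates every world where someone would have answered.
        possible = {w for w in possible
                    if not any(knows_own_state(w, c, possible) for c in range(N))}

print(query, announcers)
```

Running this prints that the three muddy children (and only they) announce on the third query, matching the argument above.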

A more interesting puzzle, however, is this: Was the father’s opening statement that “at least one child has mud on his forehead” necessary? Reexamining the solution above, we find that the “base case” in which there is only one muddy child doesn’t work without the father’s statement — the dirty child would never come forward, since as far as he knows, no one is muddy. Consequently, the entire chain of inference collapses. So the father’s statement is necessary — but this seems paradoxical, since he merely told the children something they already know! After all, if there are three muddy foreheads, each of the five children knows just by looking at the others that “at least one child has mud on his forehead.” So what information does the father’s statement actually provide?

Think again about the case where only two children have mud on their foreheads, Alex and Bob, and let *p* represent the statement “at least one child has mud on his forehead.” It is then true that each of the five children knows *p* even without the father saying so, but the problem is that not everyone knows that *everyone else* knows *p*: as far as Alex knows, Bob could be the only muddy one, in which case Bob would *not* know that “at least one child is muddy” (since Bob would see only clean faces). Likewise, as far as Bob knows, Alex might not know *p*. This is what prevents them from deducing their own situation. What about our original case? When there are *three* dirty children, not only does everyone already know *p*, everyone knows that everyone knows *p*. However, not everyone knows that everyone knows that everyone knows *p*! What the father provides, then, is *common knowledge*: after his statement, they all know that there is at least one muddy child, and they all know that they all know it, and they all know that they all know that they all know it…

A closely related scenario is the *coordinated attack* problem. Suppose that two generals, each in command of an army division, wish to launch a surprise attack on the enemy. However, the only way for the attack to succeed is if both divisions attack simultaneously; if a single division attacks alone it will be defeated. Unfortunately, the only way for the generals to communicate is by messenger, and the messengers can be lost or delayed along the way. What should the generals do? Suppose that general A sends general B the message: “Attack at dawn on January 1st.” This is certainly not enough to guarantee a coordinated attack, since general A cannot be sure that his message was received. So let’s say that general A waits until he receives a confirmation message back from general B. Can both generals then attack with confidence? No, since general B doesn’t know whether the confirmation he sent was received by general A — and general B knows that general A will not attack without receiving that confirmation. (And general A knows that general B knows this.) So even though both generals know when to attack, and they each know that the other knows, they cannot attack because neither general knows that the other knows that he knows!

This situation is familiar to anyone trying to make an appointment with someone by email. How many rounds of confirmation are necessary to be sure you both know the engagement is on? It’s easy to see that no number of acknowledgments and counter-acknowledgments will allow the parties to achieve absolute certainty. As explained in a paper by Ronald Fagin, Joseph Halpern, Yoram Moses and Moshe Vardi, guaranteeing coordinated attack requires common knowledge, and common knowledge cannot be achieved where communication is not guaranteed. Actually, the situation is worse than that: even in a system where communication *is* guaranteed, if there is any uncertainty about the delivery time — e.g., even if all messages are guaranteed to arrive in one millisecond *or less* — common knowledge is impossible to achieve.

Fagin *et al.* go on to explore various ways out of this apparent stalemate. But if you ever miss a date, you can always claim that you didn’t show up because the other party didn’t acknowledge your acknowledgment of their acknowledgment…

## Mathematics could save your life
*December 6, 2010*

*Posted by Ezra Resnick in Math, Puzzles.*

Three prisoners are brought to jail, but the jail is full. So the warden suggests the following procedure: Each of the three prisoners will have a colored hat placed on his head, either white or black or red. (Each color may appear any number of times — they may all get red hats, or two could be white and one black, etc.) Each prisoner will be able to see the others’ hats, but not his own. Without communicating with each other in any way, they must each write down on a piece of paper what color they think their own hat is. If at least one of them guesses correctly, they will all go free; otherwise, they will all be executed. The warden gives the prisoners a few minutes to talk it over before the procedure begins. What should they do to ensure their survival?

I like this puzzle because at first it seems impossible: there is no way for a prisoner to deduce anything about the color of his own hat based on the hats of the others, so what strategy could possibly ensure that at least one of them will guess correctly? The key is to think like a mathematician, and look at the underlying structure of the scenario. Try to solve it on your own before reading on.

Let us assign a number to each of the colors: zero for white, one for black, two for red. Now, consider the sum of the three numbers corresponding to the colors of the prisoners’ hats. We don’t know what this sum is, but each of the prisoners knows two out of its three components. Next, think of the remainder we will get if we divide that sum by three: it must be either zero or one or two. For example, if there are two red hats and one white hat, then the sum is four, and the remainder when dividing by three is one. The idea is for each of the prisoners to be assigned one of the possible remainders, and each will guess the color which, when added to the other two colors he sees, yields a sum whose remainder matches the one he was assigned.

For instance, if prisoners zero and one are wearing red hats, then prisoner two will guess that his hat is black (bringing the sum to five and the remainder to two). He could be wrong, of course — but exactly one of the prisoners will always be right: the one who was assigned the remainder that matches the actual sum of the three colors. In the above case, if prisoner two’s hat is white then prisoner one will guess correctly, and if prisoner two’s hat is red then prisoner zero will guess correctly.

In case you were worried, this strategy can naturally be generalized for *n* prisoners and *n* hat colors.
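The strategy can be checked exhaustively. A sketch for the three-prisoner case, with the color numbering used above (0 for white, 1 for black, 2 for red):

```python
# Exhaustive check of the modular-sum hat strategy for 3 prisoners, 3 colors.
from itertools import product

N = 3

def guess(prisoner, sum_of_others):
    # Prisoner i guesses the color that would make the total sum ≡ i (mod N).
    return (prisoner - sum_of_others) % N

def correct_guessers(hats):
    """Which prisoners guess their own hat color correctly."""
    total = sum(hats)
    return [i for i in range(N) if guess(i, total - hats[i]) == hats[i]]

# In every one of the 27 possible hat assignments, exactly one prisoner is
# right: the one whose assigned remainder equals the true sum mod N.
ok = all(len(correct_guessers(h)) == 1 for h in product(range(N), repeat=N))
print(ok)  # True
```

Replacing `N = 3` with any other value checks the generalized strategy for *n* prisoners and *n* colors.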

## Cognitive misers
*November 7, 2010*

*Posted by Ezra Resnick in Logic, Puzzles, Reason.*

Jack is looking at Anne, while Anne is looking at George. Jack is married, while George is not. Is there a married person looking at an unmarried person?

(a) Yes; (b) No; (c) It cannot be determined.

Make your choice before reading on.

According to psychologist Keith Stanovich (in an article by Kurt Kleiner), more than 80 percent of people answer this question incorrectly. It seems as though we need to know whether Anne is married; but a bit of further thought shows that it doesn’t matter. If she’s unmarried, then a married person (Jack) is looking at an unmarried person (Anne). And if she *is* married, then a married person (Anne) is looking at an unmarried person (George). So the answer is Yes in any case.
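The case split above can be checked exhaustively in a few lines: whether or not Anne is married, some married person is looking at some unmarried person.

```python
# Exhaustive check of the Jack/Anne/George puzzle over both cases for Anne.
looking = [('Jack', 'Anne'), ('Anne', 'George')]

always_yes = all(
    any(married[a] and not married[b] for a, b in looking)
    for married in ({'Jack': True, 'Anne': anne, 'George': False}
                    for anne in (True, False))
)
print(always_yes)  # True -> the answer is (a) Yes
```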

The solution is obvious once we think it through, so why did we get it wrong to begin with? According to Stanovich,

> We are all “cognitive misers” who try to avoid thinking too much. This makes sense from an evolutionary point of view. Thinking is time-consuming, resource intensive and sometimes counterproductive. If the problem at hand is avoiding the charging sabre-toothed tiger, you don’t want to spend more than a split second deciding whether to jump into the river or climb a tree.
>
> So we’ve developed a whole set of heuristics and biases to limit the amount of brainpower we bear on a problem. These techniques provide rough and ready answers that are right a lot of the time — but not always.

For the vast majority of decisions we face nowadays, it would clearly be better to spend a few more minutes thinking in order to avoid incorrect conclusions. To that end, it’s important to remember how our intuition can fail us, even in easy cases.

## Is predicting human behavior inherently paradoxical?
*August 7, 2010*

*Posted by Ezra Resnick in Computer science, Game theory, Puzzles.*

Suppose that someone has a method for predicting how people will behave in certain controlled situations. We need not assume perfect clairvoyance; let’s say that past performance has shown the method to be 90% accurate. The predictor invites you into a room where you are presented with two closed boxes, marked A and B. The rules are as follows: you may either take both boxes, or take box B only. Box A definitely contains $1000; the contents of box B, however, depend on the prediction that was made (in advance) by the predictor. If he predicted that you would take both boxes, box B was left empty. If, however, he predicted that you would take only box B, $1 million was placed inside it.

This is called *Newcomb’s paradox*, created by the theoretical physicist William Newcomb in 1960. Why is it a paradox? At first glance, it seems obvious that you should take both boxes. Regardless of the contents of box B, taking both boxes always gives you an extra $1000! This is the principle of *dominance* in game theory, and it is difficult to dispute. However, an equally well-founded principle is that of *expected utility:* when you know the probabilities of the various outcomes, you should attempt to maximize your expected winnings. If you take both boxes, there is a 90% chance you will end up with $1000 (the predictor was right and so box B is empty), and a 10% chance of ending up with $1,001,000 (the predictor was wrong). So the expected utility when taking both boxes is **$101,000**. On the other hand, if you take box B only, there is a 90% chance of getting $1 million, and a 10% chance of getting nothing, so the expected utility is **$900,000**! It seems you should take only box B, then. But isn’t it irrational to leave behind the guaranteed $1000 in box A? And round and round we go…
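The two expected-utility computations above, written out with the predictor's 90% accuracy made explicit (exact arithmetic via `Fraction` to avoid floating-point noise):

```python
# Expected utility of each choice in Newcomb's paradox, 90% predictor accuracy.
from fractions import Fraction

p = Fraction(9, 10)  # probability the predictor is right

# Take both boxes: predictor right -> box B empty ($1000); wrong -> $1,001,000.
eu_both = p * 1_000 + (1 - p) * 1_001_000

# Take box B only: predictor right -> $1,000,000; wrong -> nothing.
eu_only_b = p * 1_000_000 + (1 - p) * 0

print(eu_both, eu_only_b)  # 101000 900000
```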

As is often the case, the root of the paradox is *self-reference*. (Is the sentence “this sentence is false” true or false?) In Newcomb’s paradox, the subject knows his actions have been predicted, and this knowledge influences his decision. Therefore, the predictor must take his own prediction into account when making his prediction. In other words, if the predictor wishes to create a computer model which will simulate the subject’s decision-making process, the model must include a model of the predicting computer itself — it must predict its own prediction, implying an infinite regress which cannot be achieved by any finite computer. This inherent limitation is reminiscent of Rice’s theorem in computability theory: it is impossible to write a computer program that can determine, for every given computer program, whether it has some nontrivial property of its behavior. (The best-known undecidable problem of this kind is the halting problem.)

It is tempting to dismiss Newcomb’s paradox as logically impossible, under the assumption that we can never predict with any accuracy the behavior of a person who knows that his actions have been predicted. (Asimov’s *Foundation* introduces “psychohistory” as a branch of mathematics which can predict the large-scale actions of human conglomerates, but only under the assumption that “the human conglomerate be itself unaware of psychohistorical analysis.”) Nevertheless, William Poundstone (in *Labyrinths of Reason*) describes a way of conducting Newcomb’s experiment that doesn’t require a computer model to model itself. It does, however, require a matter scanner. We would use the scanner to create an exact copy of the entire setting, atom by atom — two rooms, two sets of boxes, two subjects. (Neither subject knows whether he is the “real” subject or the clone.) We would then use the actual decision made by the first subject as our prediction for the behavior of the second subject.

So what would you choose? Box B, or both boxes?

## Study math, get rich
*July 6, 2010*

*Posted by Ezra Resnick in Math, Puzzles.*

You receive a letter informing you that you have been selected as one of 32 lucky people around the world to participate in an experiment, and you may win a million dollars! The rules are simple: by midnight tonight, if *exactly one* of the 32 participants calls a certain phone number (it doesn’t matter who), you will each receive a million dollars. In any other event (more than one person calls the number, or no one calls), none of you will get anything. Assume the offer is genuine and all participants received the letter on the same day, but you have no way of tracking down the others. What should you do?

The solution is rather straightforward. In order to maximize the chances of exactly one participant making the phone call, each participant should decide to make the call with probability 1/32. For instance, you could roll a die with 32 faces (labeled 1 through 32) and make the phone call only if the number 1 comes up. Another option is to toss five coins and make the call only if they all come up heads (2^5 = 32). Problem solved. A more interesting question, however, is this: assuming that all the participants use the optimal strategy, what are the odds that they will actually win the money? And what would be the odds of winning if there were a thousand participants? Or a million?

What I like about problems like this is that the mathematics of the solution is quite simple, and yet the result is counter-intuitive, demonstrating (yet again) how bad human intuition about probability is. In this case, I would have guessed that the odds of winning are pretty small, and that they decrease significantly as the number of participants grows. With a million participants, the chances that all the dice rolls will work out so that exactly one participant makes the call seem pretty slim. Well – let’s think it through. Assume for the moment that there are only two participants, so they each toss a coin and make the phone call only if the result is heads. What are the odds of their winning the money? There are four (equally probable) outcomes: either they both get heads, or they both get tails, or the first participant gets heads and the second gets tails, or the first participant gets tails and the second gets heads. In the first of the four outcomes, they will both make the phone call, and in the second, neither of them will – in both cases, they lose. In either of the latter outcomes, however, exactly one of them makes the call and they win. So the odds of winning in this case are 50%. Not bad, but surely the odds decrease rapidly as we add participants?

Assume there are now three participants. If they all follow the strategy we described, there are 27 (3^3) possible outcomes, and in 12 of them exactly one participant makes the call, so the odds of winning are about 44%. In the general case, let *n* denote the number of participants, and assume they each use a die with faces labeled 1 through n. The chances of winning due to participant #1 being the only one to make the phone call are 1/n times ((n-1)/n)^(n-1) (participant #1 must roll a 1, and all the other n-1 participants must roll any of the n-1 values other than 1). The odds of winning due to participant #2 being the only one to make the phone call are the same as for participant #1, as they are for any other participant, so for the total probability of winning we must multiply the previous expression by n, giving the final answer: **((n-1)/n)^(n-1)**.

Plugging n=2 or n=3 into this formula gives the results mentioned above; plugging in n=32 gives a winning probability of about 37% – still pretty good! What about larger values of n? Those who know some calculus will recognize that the above formula converges to 1/*e* as n grows (where *e* is Euler’s number, namely 2.718…), so in fact, the odds of winning remain close to 37% *no matter how large the number of participants*. (Interestingly, the number 1/*e* pops up in many other, seemingly unrelated, probabilistic questions.)
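The formula is easy to evaluate directly, which also makes the convergence to 1/*e* visible:

```python
# The winning probability derived above: n participants each call with
# probability 1/n, so P(exactly one call) = n * (1/n) * ((n-1)/n)**(n-1).
import math

def win_probability(n):
    return ((n - 1) / n) ** (n - 1)

for n in (2, 3, 32, 1_000_000):
    print(n, win_probability(n))
print('1/e =', 1 / math.e)
```

For n = 2 this prints 0.5 and for n = 3 about 0.444, matching the cases worked out above, while the values for large n hug 1/e ≈ 0.3679.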

So remember (especially when visiting a casino): don’t trust your intuitions when it comes to probability!