
Turing’s library June 19, 2016

Posted by Ezra Resnick in Computer science, Logic.

You are in an infinite library, containing the code of every possible computer program. In the library’s central space there is a computer that can be used for running the programs: you enter the code of the program you wish to run, along with the program’s input, then you push the Run button. A red light indicates that the program is running. Some output may be printed, and when the program’s execution is complete the light goes off. For some programs and inputs the run ends almost immediately, but for others the light stays on for a long time. If you get tired of waiting for the computer to halt, you can always press the Abort button to stop the current run.

Some programs in the library are short and simple, like this one:

Program M [input: two numbers, a and b]
Print the sum of a and b.

Unsurprisingly, running Program M with the numbers 2 and 3 as input prints “5”, denoted M[2, 3] = 5. Some programs are much more complex, of course. There is a program that translates English text to Japanese; a program that computes the shortest route between any two cities on a map; a program that, given the state of the board in a checkers game, finds the optimal next move.

Some programs take a long time to run, but you notice that others are guaranteed never to halt, at least for some inputs. For example:

Program N [input: a number, n]
As long as n equals 0, keep printing “0”.

For any nonzero input, Program N halts immediately; but N[0] gets stuck in an infinite loop, printing zeros forever (or until you hit the Abort button).

Unfortunately, you find that for many programs you cannot easily tell just by examining their code whether or not they will run forever. If a program is still running after a week, or a year, should you give up and abort? Perhaps it only needs a minute more, or an hour, or a day — or, it might be stuck in an infinite loop that will never end. It occurs to you that it would be nice to know in advance, before running a program, whether it will eventually halt (for a given input), or whether it is destined to run forever. Surely, you think, there must be a program somewhere in the library that can answer that question! After all, a computer program is just text, so there’s no reason why the code of one program cannot serve as the input for another. The special program you are seeking, call it Program H, would take as its input the code of the program to be evaluated (p), along with that program’s proposed input (i); Program H would then print “True” (i.e., H[p, i] = True) in cases where p[i] would halt, or “False” (i.e., H[p, i] = False) in cases where p[i] would run forever. You spend a long time searching the library for Program H.

One day you happen upon someone who introduces himself as the librarian. Excited, you describe Program H, and ask where in the library it can be found. The librarian smiles. The library is infinite, he says, containing every possible computer program — but none of them is Program H. You ask how he can be certain, since by his own admission the library is infinite. Let me prove it to you, he says. If Program H existed in the library, there must also exist Program G, defined as follows:

Program G [input: the code of a computer program, x]
Use the same method as Program H to determine whether program x will halt when running with the code of program x as its input. If so (i.e., H[x, x] = True), then loop forever; otherwise (i.e., H[x, x] = False), halt.

Now, says the librarian, ask yourself: What will happen if we run Program G with its own code as input? Will G[G] run forever or halt? We ought to be able to compute the answer using Program H. Let’s first assume that H[G, G] = True, which means that G[G] will halt. But, based on the definition of Program G, G[G] would only halt if Program H reports that G[G] will not halt, i.e., H[G, G] = False — contradicting our assumption. Let’s assume, then, that H[G, G] = False, which means that G[G] will not halt. But, based on the definition of Program G, G[G] would only fail to halt if Program H reports that G[G] will halt, i.e., H[G, G] = True — also a contradiction!
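The librarian's diagonal argument can be sketched in code. Here is a minimal Python rendering, where `halts` stands for the impossible Program H and `make_g` builds Program G from any claimed oracle (both names are mine, for illustration; no real `halts` can exist, which is the point):

```python
def make_g(halts):
    """Build Program G from a claimed halting oracle halts(p, i)."""
    def g(x):
        if halts(x, x):       # the oracle says x[x] halts...
            while True:       # ...so G loops forever
                pass
        return "halted"       # the oracle says x[x] loops, so G halts
    return g

# Suppose the oracle answers False for G[G] ("G will run forever").
g = make_g(lambda p, i: False)
print(g(g))  # prints "halted": G halts on itself, contradicting the oracle.
# Had the oracle answered True instead, g(g) would loop forever,
# again the opposite of what the oracle claimed. No halts() can be right.
```

Whatever answer the oracle gives about G[G], G is built to do the opposite, so the oracle is always wrong about at least one program.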

We therefore have no choice but to conclude that Program H is like a square circle: it cannot logically exist. The library is infinite, containing every possible computer program — but none of them is Program H: some problems are impossible to solve.

Look on the bright side, says the librarian, sensing your disappointment. What’s that, you ask. The librarian smiles: At least you won’t waste any more time searching for something that isn’t there.


How to solve a hard problem December 25, 2015

Posted by Ezra Resnick in Computer science, Humor.

To solve a hard problem, first break it down into pieces. Then pack those pieces into bins, using as few bins as possible. You’re going to want a little help from your friends, so consult your social network and find the largest clique of people who all know each other. Visit the home of each of those friends (making sure to use the shortest possible overall route), and give each friend a subset of the bins whose overall number of pieces equals that friend’s age. Return home, and wait for your friends to send you their results. (While you’re waiting, you can perfect your game of Candy Crush.) Then find the longest sub-sequence common to all your friends’ results — that sub-sequence is (almost surely) your solution!

Note: If the above procedure is taking too long to terminate, try breaking your problem into more pieces; making more friends; or consulting an oracle.


Software engineering principles exemplified with cooking recipes August 15, 2015

Posted by Ezra Resnick in Computer science.

Don’t Repeat Yourself

Bad:
Add an inch or two of water to a pot. Insert a colander above the water, and bring the water to a boil. Add broccoli in bite-sized pieces. Cover the pot and cook for a few minutes, until tender.

Add an inch or two of water to a pot. Insert a colander above the water, and bring the water to a boil. Add cauliflower in bite-sized pieces. Cover the pot and cook for a few minutes, until tender.

Good:
Steam broccoli and cauliflower. (See sidebar on how to steam vegetables.)
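The same refactoring in code: instead of duplicating the steps for each vegetable, the shared procedure lives in one parameterized function (an illustrative Python sketch; the names are mine):

```python
# Repeating yourself: one near-identical function per vegetable.
def steam_broccoli():
    return "Boil water under a colander, add broccoli, cover until tender."

def steam_cauliflower():
    return "Boil water under a colander, add cauliflower, cover until tender."

# DRY: the shared steps appear once; only the ingredient varies.
def steam(vegetable):
    return f"Boil water under a colander, add {vegetable}, cover until tender."

print(steam("broccoli"))
print(steam("cauliflower"))
```

Now a fix to the steaming procedure (say, a reminder to cut bite-sized pieces) happens in exactly one place.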

Modularity (Low Coupling)

Bad:
Push the “Start” button on the left side of the oven, then push the “plus” button until the temperature display reads 350. Wait 15 minutes. Put cookies in the oven for 27.5 minutes.

Good:
Preheat oven to 350 degrees (see oven’s instruction manual). Bake cookies for 20-30 minutes, until firm and brown.
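In code, low coupling means a recipe depends on an oven's interface, not on one model's control panel (a Python sketch with invented names):

```python
class Oven:
    """The interface a recipe relies on; button details stay inside."""
    def __init__(self):
        self.temperature = 0

    def preheat(self, degrees):
        # How this oven reaches the temperature (which buttons to push,
        # how long it takes) is an implementation detail hidden from
        # every recipe that uses it.
        self.temperature = degrees

def bake_cookies(oven):
    # Coupled only to preheat(), not to any particular control panel.
    oven.preheat(350)
    return "Bake 20-30 minutes, until firm and brown."

oven = Oven()
print(bake_cookies(oven))
```

Swap in a different oven model and the recipe is unchanged, just as the good recipe survives a new control panel.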

Code Reuse

Bad:
Milk a cow, and let the fresh milk rest in a cool place for 24 hours. Skim the layer of cream off the surface and pour it into a container. Shake the container for 30 minutes. Strain through gauze to remove the remaining liquid. Pack into a mold and chill.

Good:
Buy some butter at the store.
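The software analogue of buying butter at the store is reusing code that already exists, such as the standard library, instead of churning your own (illustrative Python):

```python
# "Milking the cow": hand-rolling a routine the platform already provides.
def my_mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)

# "Buying it at the store": one import, already tested and maintained.
from statistics import mean

print(my_mean([1, 2, 3, 4]))  # 2.5
print(mean([1, 2, 3, 4]))     # 2.5
```

The store-bought version also handles the edge cases you haven't thought of yet (an empty list, for instance, raises a clear error rather than dividing by zero).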


A gift that keeps on giving December 21, 2014

Posted by Ezra Resnick in Computer science.
  • Prepare 5 blank papers, and number them (#1 to #5).
  • On paper #1, write: “Prepare 5 blank papers, and number them (#1 to #5).”
  • On paper #2, write: “Start a fresh page, and copy the contents of paper #1 onto it.”
  • On paper #3, write: “Then, for each of the numbers 1 through 5, write on the fresh page:”, followed by quotation marks, followed by “On paper #X, write:”, followed by quotation marks, followed by “, followed by quotation marks, followed by the contents of paper #X, followed by quotation marks (where X is replaced first by 1, then by 2, etc. — five lines in total).
  • On paper #4, write: “Then copy the contents of papers #2, #3, #4, and #5 onto the fresh page.”
  • On paper #5, write: “Give the page to a friend.”
  • Start a fresh page, and copy the contents of paper #1 onto it.
  • Then, for each of the numbers 1 through 5, write on the fresh page: “On paper #X, write:”, followed by quotation marks, followed by the contents of paper #X, followed by quotation marks (where X is replaced first by 1, then by 2, etc. — five lines in total).
  • Then copy the contents of papers #2, #3, #4, and #5 onto the fresh page.
  • Give the page to a friend.
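The five papers implement a quine on paper: a program whose output is its own source. The same trick works in two lines of Python, where the string plays the role of the papers' quoted contents and the code after it plays the role of the copying instructions:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly these two lines; hand the output to a friend and they can run it too.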

A computer scientist plays Twenty Questions June 12, 2012

Posted by Ezra Resnick in Computer science, Game theory.

“Would you like to play Twenty Questions?”

“Sure, how do we play?”

“I think of a famous person — alive or dead, real or fictional — and you have to guess who it is, in no more than twenty yes-or-no questions.”

“Well… I could use binary search to identify a single letter of the alphabet in 4.7 yes-or-no questions (on average), so nineteen questions should allow me to identify four letters. Why don’t you just tell me the first four letters of your person’s last name, and I’ll guess who it is.”

“Actually, I don’t think I want to play any more.”

“That’s OK — the game is flawed, anyway: it assumes there are no more than 1,048,576 famous people to choose from…”
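Both numbers in the exchange check out; here is a quick Python verification:

```python
import math

# Twenty yes-or-no questions distinguish at most 2**20 famous people.
print(2 ** 20)                  # 1048576

# One letter of a 26-letter alphabet carries log2(26) bits,
# so binary search needs about that many yes-or-no questions.
print(round(math.log2(26), 2))  # 4.7

# Four letters cost about 4 * 4.7 = 18.8 questions, so nineteen suffice.
print(4 * math.log2(26) < 19)   # True
```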

No servants were harmed in the making of this puzzle March 24, 2012

Posted by Ezra Resnick in Computer science, Puzzles.

Twelve hours before the Emperor’s great banquet is to begin, the Master of Spies comes forward with distressing news: there is good reason to suspect that (exactly) one of the 1024 barrels of wine in the imperial wine cellar has been poisoned. The poison believed to have been used is odorless and otherwise undetectable, but so deadly that drinking even a small amount is certain to cause death within twelve hours.

The Emperor is determined to identify the poisoned barrel before the banquet begins — but how? It occurs to him that he could have 1024 of his servants each take a sip from a different barrel, and see which one dies. However, there is much work to be done preparing for the banquet, so it would be preferable to minimize the number of drunk servants. How can the poisoned barrel be positively identified while utilizing the smallest possible number of servants? (Note that the Emperor doesn’t care how many servants end up dying, he just wants to minimize the number of servants drinking wine.)

Try to solve it yourself before reading on.


Some amusing (and meaningless) statistics October 8, 2011

Posted by Ezra Resnick in Computer science, Math.

Google’s Ngram Viewer allows you to explore trends in the occurrence of chosen phrases in books over time. For instance, this graph shows the frequencies of the words “religion” (blue), “faith” (red), and “science” (green) in English books during the last two centuries:

This awesome technology can be a great research tool, of course; but allow me to amuse myself instead…

This is not a graph of the quadratic and exponential functions.

This is not a graph of the sine function.

This graph does not demonstrate Moore's Law.

Those mental processes that we don’t understand June 23, 2011

Posted by Ezra Resnick in Computer science.

Today is the 99th anniversary of the birth of Alan Turing, father of computer science and artificial intelligence. Turing also led the British code-breaking effort during World War II that succeeded in deciphering German communications, contributing greatly to the Allied victory.

In a 1951 lecture entitled “Intelligent Machinery, A Heretical Theory,” Turing argued that “machines can be constructed which will simulate the behaviour of the human mind very closely.” At the end of the lecture, Turing speculated about the day when building such computers would be practical:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious toleration from the days of Galileo. There would be great opposition from the intellectuals who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do, trying to understand what the machines were trying to say, i.e. in trying to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control…

But there are those who object in principle to the idea that a machine could ever be truly intelligent, could ever be said to think. In a January 1952 BBC broadcast, Turing explained why trying to define “thinking” in this context is pointless and unnecessary:

I don’t really see that we need to agree on a definition [of thinking] at all. The important thing is to try to draw a line between the properties of a brain, or of a man, that we want to discuss, and those that we don’t. To take an extreme case, we are not interested in the fact that the brain has the consistency of cold porridge. We don’t want to say “This machine’s quite hard, so it isn’t a brain, and so it can’t think.”

Asking whether a computer could ever really think is like asking whether a jet plane can really fly: if you include “flapping wings” in your definition of flying, then the answer is no — a jet plane doesn’t fly using the same methods as a bird — but so what? What matters are competences. If a computer could perform all the tasks we consider as requiring intelligence, then the fact that its internal mechanisms are different from a human’s should be insignificant.

But could a computer ever mimic the most complex aspects of human intelligence, like natural language? When Turing suggested that a computer might be able to learn using methods analogous to those used in the brain, he was asked: “But could a machine really do this? How would it do it?” Turing replied:

I’ve certainly left a great deal to the imagination. If I had given a longer explanation I might have made it seem more certain that what I was describing was feasible, but you would probably feel rather uneasy about it all, and you’d probably exclaim impatiently, “Well, yes. I see that a machine could do all that, but I wouldn’t call it thinking.” As soon as one can see the cause and effect working themselves out in the brain, one regards it as not being thinking, but a sort of unimaginative donkey-work. From this point of view one might be tempted to define thinking as consisting of “those mental processes that we don’t understand.” If this is right then to make a thinking machine is to make one which does interesting things without our really understanding quite how it is done.

It used to be claimed that a machine would never beat a human grandmaster at chess, since good chess requires “insight.” As soon as computers became better at chess than the best human players, the goalposts were immediately moved: Deep Blue wasn’t really thinking — it’s merely a very fast machine with lots of memory following a program. (So apparently, playing good chess doesn’t require intelligence after all.) We are assured, however, that a computer will never write a novel — that requires genuine creativity (which only humans have). And when a computer does write a novel…

In the spring of 1952, Turing was tried and convicted for having homosexual relations, then illegal in Britain. To avoid going to prison, he had to agree to chemical castration via female hormone injections. His security clearance was revoked, barring him from continuing his cryptographic consultancy for the British government.

Alan Turing committed suicide on June 7, 1954, several weeks before his 42nd birthday.

Learn recursion in X steps May 21, 2011

Posted by Ezra Resnick in Computer science.
  1. If X equals 0, congratulations! You understand recursion.
  2. Otherwise, all you have to do is learn recursion in (X-1) steps.
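The two steps translate directly into Python, with a base case and a recursive step that shrinks the problem (the function name is mine):

```python
def learn_recursion(x):
    # Step 1: the base case.
    if x == 0:
        return "Congratulations! You understand recursion."
    # Step 2: reduce to a smaller instance of the same problem.
    return learn_recursion(x - 1)

print(learn_recursion(5))
```

Every call either terminates immediately or recurses on a strictly smaller X, so the process is guaranteed to reach step 1.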

Competence without comprehension March 4, 2011

Posted by Ezra Resnick in Computer science, Evolution.

Gaudi vs. termites

On the left, we have Gaudi’s cathedral in Barcelona; on the right, a termite mound. Both structures serve a purpose (or several purposes); and both exist, with their particular characteristics, for a reason. They are not the result of materials being thrown together randomly; it makes sense for us to ask why their features were built one way and not another. And yet, there is a crucial difference between the two.

In a recent lecture at UCLA, Daniel Dennett describes the difference this way: There is a reason why termites build mounds — but it’s not true that termites have a reason for building mounds. Human beings have reasons for the things they do, and they can represent those reasons explicitly. But no termite needs to understand the reasons behind its actions — no one needs to understand them. Complex reasons can emerge from the mindless, purposeless, automatic process of natural selection.

This idea, of course, is extremely counter-intuitive. Dennett quotes one early attack on Darwin, published anonymously in 1868:

In the theory with which we have to deal, Absolute Ignorance is the artificer; so that we may enunciate as the fundamental principle of the whole system, that, in order to make a perfect and beautiful machine, it is not requisite to know how to make it. This proposition will be found, on careful examination, to express, in condensed form, the essential purport of the Theory, and to express in a few words all Mr. Darwin’s meaning; who, by a strange inversion of reasoning, seems to think Absolute Ignorance fully qualified to take the place of Absolute Wisdom in all the achievements of creative skill.

Exactly! This “strange inversion of reasoning” was Darwin’s great insight: a new way of thinking, with profound consequences and explanatory power.

Dennett attributes a comparable “inversion of reasoning” to Alan Turing. Before modern computers, “computers” were humans who performed mathematical calculations manually. To do this, they had to understand arithmetic. But Turing realized that it’s not necessary for a computing machine to know what arithmetic is. And so we now have CPUs, spreadsheets, search engines, all performing complex tasks without understanding what they are doing: competence without comprehension.

This is the opposite of our own personal experience: our competences flow from our comprehension. But evolution shows us that comprehension can emerge as the result, not the cause, of competence. Just as life is ultimately constructed out of non-living parts, understanding can be constructed out of non-understanding parts. The individual neurons in our brain don’t understand anything — but we do.

There must be a continuum, therefore, ranging from a complete lack of understanding to the kind of understanding humans have. Do apes have reasons? They fall somewhere between termites and Gaudi: they have proto-reasons. The same might be said of our more complex computing machines. One day, we will reach the point where computers have full-fledged reasons of their own.

For billions of years on this planet, there was competence but no comprehension. There were reasons, but no one understood them. We have now evolved the ability to look back and see the reasons everywhere in the tree of life — reasons discovered by the same mindless process that produced us.