Unexpected Hanging Problem

Judge Wright has a reputation for always being correct. Standing before him is a condemned prisoner, who turns out to be a logician. Judge Wright decides to have some fun with him, and says, “You will be hanged at noon on one day of the coming week, and it will come as a surprise to you. You will not know until the executioner comes knocking on your cell door at 11:55am the day of the execution. It is 4pm Monday, so it will happen by noon next Monday at the latest.”

The prisoner carefully considers Judge Wright’s comments. He reasons that next Monday, the last of the 7 days, cannot be the day of execution, because being the last possible day of execution, what kind of surprise would that be? That rules next Monday out completely. How about Sunday? Well, with Monday out, Sunday is now the last possible day of execution. But again, it wouldn’t come as a surprise either. So Sunday is also out. By similar reasoning, Saturday, Friday, Thursday, Wednesday, and Tuesday are all logically ruled out. The only conclusion the prisoner can come to is that Judge Wright made a rare mistake, and he will not be hanged at all.

At 11:55am Thursday, the executioner came knocking on the cell door. And sure enough, it came as a total surprise to the prisoner. Judge Wright had been right all along.


This is a rare example of a paradox that is also humorous. Although it does not initially seem worthy of serious discussion, surprisingly enough no fewer than 200 papers have been published on this paradox. Naturally, many of them start by dismissing other views and claiming that theirs is the long-awaited solution, the final nail in the coffin.

What, exactly, is wrong with the prisoner’s reasoning? There are two main approaches to resolving the paradox: logical and epistemological. The logical approach breaks down the argument by examining the axiom on which the reasoning is built:

The prisoner will be hanged next week and its date will not be deducible in advance by using this announcement as an axiom.

This is a self-referential statement, and cannot be used to construct a valid argument.

The epistemological argument focuses on the meaning of the announcement, specifically the “surprise” part. Rather than explaining it, I will use this brilliant variant of the paradox (by R.A. Sorensen):

Exactly one of five students, Art, Bob, Carl, Don, and Eric, is to be given an exam. The teacher lines them up alphabetically so that each student can see the backs of the students ahead of him in alphabetical order but not the students after him. The students are shown four silver stars and one gold star. Then one star is secretly put on the back of each student. The teacher announces that the gold star is on the back of the student who must take the exam, and that that student will be surprised in the sense that he will not know he has been designated until they break formation. The students argue that this is impossible; Eric cannot be designated because if he were he would see four silver stars and would know that he was designated. The rest of the argument proceeds in the familiar way.

I could not possibly come up with a better example. Not only does it highlight the subtle, different meanings of “surprise”, but more importantly the absurdity of the chained argument when “surprise” is defined properly. An elucidating example requires a deep understanding and effective communication. It cannot be faked.

Note: for a more detailed explanation and a comprehensive list of articles, I strongly suggest Timothy Chow’s paper.

Zipper Around the World

I make zippers for a living. Every few months we produce enough zipper to go around the world, which is 40,075.16 kilometers at the equator. Here is an interesting thought experiment.

One day I’m bored. I haul out 40,075,160 meters of zipper (over 700 tons) from the warehouse, and make a perfect, tight wrap around the equator. With a smile on my face, I inspect my earth-hugging masterpiece. To my dismay, I find that the zipper is on the ground and on the surface of the ocean, getting dirty and wet, which is obviously unacceptable. Being the fickle person I am, I decide to haul in more zipper from the warehouse, and magically raise the entire zipper loop one meter above ground level (and sea level). I call on you, the warehouse manager, to bring me the extra zipper needed.

Here is the question: Based on your intuition, roughly how much more zipper will I need?  How many trucks will you need?

 

To help out, here is a quick guide:

1%: 400 kilometers (1 big truck)

10%: 4,000 kilometers (four 40′ shipping containers)

100%: 40,000 kilometers (uh, call for quote)

 

You have 10 seconds to make a guess. Ready?

The answer, surprisingly, is 6.28 meters, or 0.000016% more. The additional zipper needed is proportional to the increase in diameter, not the original length: raising the loop 1 meter off the ground adds 2 meters of diameter, and since circumference = pi * diameter, you will need an extra 2m * pi = 6.28 meters. As the warehouse manager, all you need to do is reach into your pocket and pull out some extra zipper; no need for a truck after all.
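The calculation can be sketched in a few lines of Python (the function name is my own, purely for illustration):

```python
import math

# Circumference of a circle of radius r is 2 * pi * r, so raising the loop
# by h meters everywhere adds 2 * pi * h meters of circumference,
# regardless of the original radius.
EQUATOR_M = 40_075_160  # meters of zipper in the original loop

def extra_zipper(height_m: float) -> float:
    """Extra length needed to raise a circular loop by height_m meters."""
    return 2 * math.pi * height_m

extra = extra_zipper(1.0)
print(round(extra, 2))             # 6.28 meters
print(f"{extra / EQUATOR_M:.6%}")  # roughly 0.000016%
```

Note that `EQUATOR_M` never appears in the formula: the extra length is the same whether you wrap the Earth or a basketball.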

The Mirror Illusion

The brain is a wonderful organ. It provides us with a mostly accurate representation of the world around us, by making approximations based on sensory input and past experience. This has been mostly sufficient for our ancestors and the environment in which our brains evolved. There are many examples of how the brain can be fooled, ranging from optical illusions and stereoscopy to multi-sensory illusions such as virtual hands, phantom acupuncture, and the famous McGurk ba-ga experiment. What I will discuss here is something fundamentally different: how your brain can fool you through the way it constructs the reality you perceive, and in particular, how difficult it can be to perceive that reality differently, even when you know the bias.

Look into a mirror. You will see an image of yourself with a lateral but not vertical inversion, i.e., left/right seems swapped but not up/down. A watch on your left hand shows up on the image’s right hand, but a shoe does not show up on the image’s head.

Think about it for a minute. That makes absolutely no sense, as the mirror is a piece of reflective glass, and should not discriminate between left/right and up/down. Most people have not thought through this apparent paradox. If you have not, I encourage you to take some time to think about it, and you will find that all of the obvious explanations that come to mind are, in fact, incorrect. When first presented with the problem, I found myself experimenting with a mirror, closing one eye, in various orientations, and imagining different scenarios without gravity, to no avail.

What makes this problem so difficult? Simply put, we are looking in the wrong places. We look in the wrong places because what the brain constructs (and what we perceive) feels so real, we take for granted that it is real, and automatically exclude it from closer examination.

The first, and rather difficult, step is to realize that the mirror doesn’t care about direction.  You care about direction, and it is your brain that comes up with the representation, not the mirror. To illustrate this, point to the right, and the image will point in the same direction. Point up, same thing. However, point towards the mirror, and the image points back at you, in the opposite direction. The key observation is that the mirror inverts not in the left/right or up/down direction, but in the front/back direction.

The second step, is to realize what you are actually seeing in the mirror. Imagine a cone pointed towards a wall. As you push the cone into the wall, imagine a cone growing on the other side of the wall, growing as you push, in the opposite direction. You end up with a cone, pointing towards you, on the other side of the wall. Put a red dot on the left side of the cone and a blue dot on the right side of the cone, and do the same thing. An inverted cone emerges on the other side of the wall, with a red dot and a blue dot, on the same corresponding side. Now, imagine a human face being pushed through the wall nose first, just like the cone, with the colored dots as eyes. You end up with an image of a face on the other side, inverted. The left eye is still the left eye, just flipped inside out. Although convoluted and highly counterintuitive, it is the correct interpretation of the image in the mirror.

Still having problems visualizing it? Take a latex glove and put it on your right hand. Now, take off the glove by inverting it, so it is inside out. The inverted right-hand glove is the analog of the image in the mirror, even though it looks like a left-hand glove.

The question now becomes, why do we so instinctively see a person swapped in the left/right direction, to the point where you cannot help but see it that way? The reason, simply put, is that it requires the least work from the brain. The correct interpretation (inverting), requires an incredible amount of work, as evidenced by the effort it takes simply to imagine it. There is no existing brain circuitry to do an inversion, because there was no need to do so when the brain evolved. It is far easier for the brain to represent the image as “someone” facing you rather than an inverted meaningless image. The agent detection circuitry in your brain is where the problem is, not the mirror.

Once the brain treats it as a “person”, it needs to orient the “person” in space to make sense of it. There are two main ways to mentally turn objects around in space: around a vertical axis (turning around), or around a horizontal axis (think foosball). Technically speaking, both ways are equally valid (as are any diagonal axes). Our brain will use the existing evolved circuitry, which is to turn around the vertical axis and spin the “person” around to face you, for that is what it encounters day-to-day. Interestingly enough, if one were to mentally flip around the horizontal axis, foosball-style, one would see the image as up/down inverted but not left/right inverted, further proving that the mirror does not discriminate, and that the problem arises from a hardwired preference in your brain.

This example shows that something seemingly so real and right, is no more than an erroneous representation concocted by the brain. The explanation is counterintuitive, but readily verifiable, and probably enough to change your mind. Of course, this is a trivial question with a not so trivial answer, with no vested interest or emotional investment.

It makes me wonder though. I feel quite confident and passionate about many issues, far more complicated and nuanced than a piece of reflective glass. How many of those could I be completely wrong about, simply because it “feels” right? How many wrong trees could I be barking up, blissfully ignorant of the squirrel squarely perched on my back?

I wonder.

 

Interesting Problem

Here is an interesting problem that doesn’t require crazy math skills:

I participate in a daily lottery by choosing a number from 1-100 and hoping it hits. The odds of winning are 1%.

I bought a ticket, got lucky, and won today. I decide to keep playing one ticket every day until I win again, and then stop playing for good. On what day am I most likely to stop playing (by winning the lottery that day, of course)?

  1. Tomorrow
  2. The day after tomorrow
  3. 50 days from today
  4. 100 days from today
  5. There is no difference, every day is equally as likely

This is not a trick question (assume unlimited funds, standard lottery, etc.), no need to consider the unusual; you just need to understand the question.

If you chose 3) or 4), you would be incorrect. If you chose 5), congratulations, you have good statistical sense, and are probably quite sure of your answer, but you are nonetheless wrong – just less wrong. The correct answer is 1), tomorrow.

Counterintuitive? Here’s why.

Yes, every day is equally likely to hit the lottery, namely, a 1% chance. Let’s say today is Sunday. Tomorrow (Monday) I have a 1% chance to win and stop playing. The day after tomorrow (Tuesday) also carries a 1% chance to win. However, for me to stop playing on Tuesday, two things have to happen: I must win on Tuesday (1%), and on top of that, I must NOT have won on Monday (99%). So while the chance I win on Tuesday is 1%, the chance that I stop on Tuesday is not 1% but 0.99%, because I must not have already won on Monday. Each day after that, the chance that I stop on that particular day decreases accordingly, not because I’m less likely to win on that day, but because I cannot have already won on any day before then. Therefore, the most likely day for me to stop playing is tomorrow, which carries a 1% chance. Every day after that carries a chance of less than 1%.
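The lose-lose-…-win reasoning is exactly the geometric distribution, and a quick Python sketch (the names are my own) confirms that tomorrow is the most likely stopping day:

```python
p = 0.01  # daily chance of winning the lottery

def stop_prob(n: int) -> float:
    """P(I stop on day n): lose the first n - 1 days, then win on day n."""
    return (1 - p) ** (n - 1) * p

for n in (1, 2, 50, 100):
    print(n, round(stop_prob(n), 6))

# Day 1 (tomorrow) carries the highest stopping probability; every later
# day is strictly lower, since each extra day multiplies by 0.99.
best_day = max(range(1, 1001), key=stop_prob)
print(best_day)  # 1
```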

This is a good example of how our intuitions fail us. I stated clearly that it is not a trick question, but that “you just need to understand the question”, and for good reason. The question asked was “what day am I most likely to stop playing?”, which most people immediately replace with a much easier question, “what day am I most likely to win?”. There is a subtle but important difference: the hard-to-spot implied condition of previous losses. To stop playing on a given day does not just mean winning that day; more importantly, it implies an exact sequence of lose-lose-…-lose-WIN!

The last option “There is no difference, every day is equally as likely” is so appealing because it is a true statement.  The statement just happens to be irrelevant to the question.  It’s a powerful technique, a mental sleight of hand, widely used by marketers, politicians, and spouses.

Deception

This is a loose English version of my Facebook post.

This thought experiment is based on Daniel Dennett’s Library of Mendel (originally from Fechner), although he used it to illustrate something completely different.

Imagine a library that has all the possible books that could ever be written. Suppose each book is 500 pages, with 40 lines per page and 50 character spaces per line. Each page then consists of 2,000 characters (including spaces). Say there are 100 possible characters (including space and punctuation marks), which should cover upper and lower cases of English and European variations of the alphabet.

Somewhere in the library, there is a book consisting of nothing but blank pages, and another book consisting of nothing but obscenities. It is a large, but finite, library.
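To get a feel for just how large “large but finite” is, here is a short sketch using the numbers above (computing the digit count via a logarithm, since the number itself is too big to print):

```python
import math

CHARS_PER_PAGE = 40 * 50  # 40 lines of 50 character spaces = 2,000 characters
PAGES = 500
ALPHABET = 100            # possible characters, including space and punctuation

slots = CHARS_PER_PAGE * PAGES  # 1,000,000 character slots per book
# The library holds ALPHABET ** slots books; taking log10 gives the number
# of decimal digits without materializing the astronomically large integer.
digits = round(slots * math.log10(ALPHABET))
print(slots)   # 1000000
print(digits)  # 2000000 -- i.e., 10 to the 2,000,000th power books
```

A number with two million digits: finite, but beyond anything the word “library” brings to mind.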

Within this library you can find every book ever published, and their translations in all languages, including long-lost ancient ones. If the book you are looking for is longer than 500 pages, it can be found in the library, properly split and numbered into different volumes.

Fascinatingly enough, here you can find your biography, 100% accurate, not only recounting your past but also perfectly predicting everything in your future, down to the day you die. In fact, you can find it written in regular English, ebonics, limericks, or with obscenities scattered throughout.

You can also find the correct value of pi (3.14159265358979…), up to infinite precision, volume after volume. You can find it spelled out as well, like three point one four one five nine two six five and so on. Paradoxically, pi’s expansion is infinite, yet you can find it in this large but finite library.

In this library, you can find anything you want to know about the universe, from Mozart to your innermost thoughts.

Everything I have written so far is technically true. It is also completely misleading and deceptive.

  1. Choice of words. The use of “library” and “books” primes you to think of them as what you commonly encounter. In fact, the vast majority of “books” contain nothing but gibberish. The chance of you finding a volume that contains English words is astronomically small. Among these volumes, the chance of you finding a volume that contains grammatically correct sentences is also astronomically small. Among these volumes, the chance of you finding a volume that makes sense is, again, astronomically small. Among these volumes, the chance of you finding a volume that is correct is, again, astronomically small. This is very different from the concept of “book” or “library” that you are used to, where every volume is meaningful and deliberately written to convey a thought. An analogy would be me pointing to a bunch of numbers and proclaiming, “within these numbers you can find the winning combination of the next 100 lottos”. The difference being that the odds of finding the lotto numbers are better.
  2. The example of pi is also completely misleading. You need to know pi to the precision you want in order to find the volumes, not the other way around. Yes, pi is infinite, and the number of distinct volumes is finite, so how does that work? It works because sooner or later, volumes must be reused: pi’s expansion contains infinitely many 1,000,000-digit blocks, but there are only finitely many possible volumes, so by the pigeonhole principle some volume must be used again and again. Sounds crazy, but it is a mathematical certainty.
  3. Using “your biography” induces you to be emotionally invested. It uses your narcissism against yourself. After all, who doesn’t want to know their own future? The problem is, even though such a biography exists, you wouldn’t know which one is correct, even if you could find it.

To break away from this nonsense, we need to adjust the parameters and see what happens; in Dennett’s terms, to “turn the knobs on the intuition pump”. What happens when we reduce the number of pages from 500 to just one page? Well, the library becomes much smaller, and you are simply retrieving pages instead of volumes. What happens when we reduce it further, to just one line of 50 spaces? What happens when we reduce it to just one character?

One character? That’s easy. It’s just the original 100-character set. Everything is simply built from this character set.

In fact, we can further reduce it to 0 and 1, if we encode into ASCII or Unicode.

This thought experiment shows how framing can mislead one into thinking a certain way, how cherry picking special cases can paint a rosy picture, how the brain is not equipped to deal with large numbers (scope insensitivity), how easy it is to see meaning in randomness, and how getting emotionally involved can cloud one’s judgment. Politicians use these dirty tricks, as do weight loss commercials.

Sharpening one’s thinking tools, along with some understanding of psychology, can come in super handy.  Especially when you need to deceive others effectively.

Hardest Logic Puzzle

I have come across “the hardest logic puzzle” and been fascinated with it and its variants. It stems from the classic Knights and Knaves puzzle:

There are two boxes to choose from, one of which you must open. One contains a treasure, and one contains a bomb which will lead to certain death. There are two people who both know the contents of the boxes, a Knight (who always tells the truth) and a Knave (who always lies). You do not know which is which. You can only ask one person one question, and must determine which box to open based on his answer.

The classic solution is to point to the other guy and ask the question “would he say this box contains the treasure?” and open the other box if he says “yes”.

Using an embedded question, you can get a consistent and meaningful answer.
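A brute-force check in Python (my own encoding, with box 0 as the box being asked about) shows that the embedded question works no matter who you ask or where the treasure is:

```python
from itertools import product

# Enumerate every scenario: which box holds the treasure, and whether the
# person asked is the Knight. The question, about box 0, is: "would the
# OTHER person say box 0 contains the treasure?"
for treasure_box, asked_is_knight in product((0, 1), (True, False)):
    other_is_knight = not asked_is_knight
    # What the other person would actually answer about box 0:
    other_says_yes = (treasure_box == 0) if other_is_knight else (treasure_box != 0)
    # The asked person reports this truthfully (Knight) or inverts it (Knave):
    reply_yes = other_says_yes if asked_is_knight else not other_says_yes
    # Strategy: on "yes", open the other box (box 1); on "no", open box 0.
    chosen = 1 if reply_yes else 0
    assert chosen == treasure_box

print("strategy works in all four scenarios")
```

The two layers of lying (or truth-telling) always cancel into exactly one net inversion, which is why the answer is consistent regardless of whom you ask.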

Let’s try a difficult version of the hardest logic puzzle.

There are three gods (A, B, C). One will always speak the truth (T), one will always lie (L), and one is completely random (R). Completely random does not mean that sometimes he answers truthfully and sometimes lies; it means the answer itself is random. They all understand English, however each god must reply in his own language, either “ja” or “da”, which means “yes” and “no”, in no particular order. The three gods each speak a different language, and unfortunately “ja” or “da” could mean “yes” or “no” differently in each language.

You may ask three yes/no questions to accurately determine the identities of the gods. Each question must be posed to one god at a time, and the same god may be asked multiple questions, consecutively or not, which means some god may not be asked any question at all. You may not ask questions that cannot possibly be answered (e.g., Truth would not be able to answer “would you say ‘ja’ if it means ‘no’ in your language?”).

The way the unanswerable question was phrased gives a hint to how the puzzle can be solved. For this very elegant solution, go to Wikipedia.

Intuition

Intuition, simply put, is a gut feeling.  It could be based on prior knowledge, pattern recognition, an unconscious reaction, even superstition.  It is useful in making quick decisions on the spot, say, when you are alone in the jungle and hear rustling in the bushes.  But in reality, it is a lousy basis for important decisions.

Let’s look at this example.

Imagine a fictional Foobar disease, which is always fatal, not common but not overly rare either, with an overall occurrence of 0.1%.  There is a test that is exceptionally sensitive (100%), which means that if you have the disease, this test will definitely identify it.  The test also has a very low false positive rate of 1% (99% specificity).

Out of curiosity, you take the test.  It turns out positive.  Ouch.

Quick!  Based on your gut feeling, what are the chances that you have this fatal Foobar disease?

95%? 90%?

No.

The correct answer is around 9%.  The approximate calculation is as follows (for exact calculations use Bayes’ theorem):

Out of 1,000 people, only 1 will actually have the disease (0.1%).  The test, with a false positive rate of 1%, is expected to incorrectly identify about 10 of the remaining 999 people as having the disease, along with the 1 person who actually has it.  So out of the roughly 11 people identified as positive, only 1 actually has the disease.
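The same arithmetic done exactly via Bayes’ theorem, using the numbers from the text:

```python
prevalence = 0.001     # 0.1% of people have Foobar disease
sensitivity = 1.0      # the test catches every true case
false_pos_rate = 0.01  # 1% of healthy people still test positive

# Bayes: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_positive = sensitivity * prevalence + false_pos_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"{p_disease_given_positive:.1%}")  # about 9.1%
```

The denominator is dominated by false positives (about 10 of them for every true case), which is what drags the answer down to roughly 9%.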

Counterintuitive, but true.

Now try telling that to the people that just tested positive for Foobar and blew their entire life savings at the casino.

When the US Preventive Services Task Force changed the guidelines for mammogram screenings, it was based on scientific evidence.  Same thing with prostate cancer screenings (the PSA test).  The test intervals were lengthened (or the tests eliminated) because there was no evidence that they provided actual benefit in the general (not high-risk) population.  The public immediately fired back, simply because the change is highly counterintuitive: how on earth could someone oppose extra testing?  Conspiracy theories immediately surfaced and the issue soon became a political one instead of a fact-based discussion.

It is unrealistic to expect everyone to look into and fully understand the underlying reasons, not because of intellectual laziness, but because those reasons often lie outside their realm of expertise.  Sadly enough, the most vocal opinions are usually shouted out by those who understand the least.  And although often treated otherwise by the media, volume does not equal correctness, understanding, or controversy, much less consensus.  And as elitist as it may sound, I believe that knowledge is not a democracy, and public policy (especially on complex scientific issues) should be debated and guided by relevant experts, not by popular vote.

Scientists are generally the least confrontational and least vocal group, and politically have the least influence.  And let’s face it, the jargon-laden, carefully crafted, highly qualified statements that are spewed from their facial orifices don’t exactly appeal to voters.  So politically, are we doomed, in a Darwinian sense?  I’ll go out on a limb and say no, because although suboptimal, thankfully and ironically, ignorance is global.  Politicians everywhere are elected by popularity and not intelligence or expertise, and dictators do not rule because of oversized brains.  We are no worse off if everyone else is equally as bad.  At least that is my intuition.

* afterword: Putting the issue of limited resources and fairness aside, I am not opposed to extra testing, provided that the person fully understands the implications, risks, and what the test results actually mean, if anything.  I do oppose unnecessary testing, which I define as any test that will not change the course of action.  It makes no more sense to rearrange the deck furniture on the Titanic than it does to disinfect the death row inmate’s arm before giving him a lethal injection, or to order a Pap smear for a 90-year-old.