Thought Chemistry

commons.wikimedia.org/wiki/File:Polvertoren_van_Rijnberk_vliegt_de_lucht_in_-_Explosion_Pulverturm_Rheinberg_(1698).jpg

How many ideas hover dispersed in my head of which many a pair, if they should come together, could bring about the greatest of discoveries! But they lie as far apart as Goslar sulphur from East India saltpeter, and both from the dust in the charcoal piles on the Eichsfeld — which three together would make gunpowder. How long the ingredients of gunpowder existed before gunpowder did! There is no natural aqua regia. If, when thinking, we yield too freely to the natural combinations of the forms of understanding and of reason, then our concepts often stick so much to others that they can’t unite with those to which they really belong. If only there were something in that realm like a solution in chemistry, where the individual parts float about, lightly suspended, and thus can follow any current. But since this isn’t possible, we must deliberately bring things into contact with each other. We must experiment with ideas.

— G.C. Lichtenberg, Aphorisms

Almost Home

A drunk man arrives at his doorstep and tries to unlock his door. There are 10 keys on his key ring, one of which will fit the lock. Being drunk, he doesn’t approach the problem systematically; if a given key fails to work, he returns it to the ring and then draws again from all 10 possibilities. He tries this over and over until he gets the door open. Which try is most likely to open the door?

Surprisingly, the first try is most likely. The probability of choosing the right key on the first try is 1/10. Succeeding in exactly two trials requires being wrong on the first trial and right on the second, which is less likely: 9/10 × 1/10. In general, succeeding on exactly the kth try has probability (9/10)^(k-1) × 1/10, which shrinks by a factor of 9/10 with each successive trial.

“In other words, it is most likely that he will get the right key at the very first attempt, even if he is drunk,” writes Mark Chang in Paradoxology of Scientific Inference. “What a surprise!”
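This is just the geometric distribution, and it's easy to check numerically (the function below is a sketch of my own, not from Chang's book):

```python
# Probability that the drunk first opens the door on exactly the k-th try:
# he must fail on each of the first k - 1 draws (9/10 each, since he
# returns the key to the ring) and then succeed (1/10).
def p_first_success(k, n_keys=10):
    return ((n_keys - 1) / n_keys) ** (k - 1) * (1 / n_keys)

# Each term is 9/10 of the one before, so the sequence strictly decreases:
# the first try is the single most likely one, even though the
# probabilities still sum to 1 over all possible tries.
probs = [p_first_success(k) for k in range(1, 6)]
```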

A Losing Game

You and I each have a stack of coins. We agree to compare the coins atop our stacks and assign a reward according to the following rules:

  • If head-head appears, I win $9 from you.
  • If tail-tail appears, I win $1 from you.
  • If head-tail or tail-head appears, you win $5 from me.

After the first round each of us discards his top coin, revealing the next coin in the stack, and we evaluate this new outcome according to the same rules. And so on, working our way down through the stacks.

This seems fair. There are four possible outcomes, all equally likely, and the payouts appear to be weighted so that in the long run we’ll both break even. But in fact you can arrange your stack so as to win 80 cents per round on average, no matter what I do.

Let t represent the fraction of your coins that display heads. If my coins are all heads, then your gain is given by

GH = -9t + 5(1 – t) = -14t + 5.

If my coins are all tails, then your gain is

GT = +5t – 1(1 – t) = 6t – 1.

Setting GH = GT gives t = 0.3, making your expected gain GH = GT = $0.80 per round whether my coin shows heads or tails.

This result applies to an entire stack or to any intermediate segment, which means that it works even if my stack is a mix of heads and tails. If you arrange your stack so that 3/10 of the coins, randomly distributed in the stack, display heads, then in a long sequence of rounds you’ll win 80 cents per round, no matter how I arrange my own stack.

(From J.P. Marques de Sá, Chance: The Life of Games & the Game of Life, 2008.)
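The two expressions can be checked with a few lines of code; here t is, as above, the fraction of your coins showing heads (this expected-value function is my own sketch, not from the book):

```python
# Your expected gain per round, given the fraction t of your coins showing
# heads and the face of my current coin. Payoffs follow the rules above:
# head-head costs you $9, tail-tail costs you $1, a mismatch wins you $5.
def expected_gain(t, my_coin_is_heads):
    if my_coin_is_heads:
        return -9 * t + 5 * (1 - t)   # G_H = -14t + 5
    return 5 * t - 1 * (1 - t)        # G_T = 6t - 1

# At t = 0.3 the two expressions agree at $0.80, so your expected gain per
# round is $0.80 no matter how my stack mixes heads and tails.
```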

Higher Things

http://www.bnl.gov/newsroom/news.php?a=1301

In 1966 a Swedish encyclopedia publisher requested a photograph of Richard Feynman “beating a drum” to give “a human approach to a presentation of the difficult matter that theoretical physics represents.” Feynman responded:

Dear Sir,

The fact that I beat a drum has nothing to do with the fact that I do theoretical physics. Theoretical physics is a human endeavor, one of the higher developments of human beings, and the perpetual desire to prove that people who do it are human by showing that they do other things that a few other human beings do (like playing bongo drums) is insulting to me.

I am human enough to tell you to go to hell.

Yours,

RPF

Single Cases

http://commons.wikimedia.org/wiki/File:Les_Joueurs_de_cartes.JPG

If we roll a fair die an infinite number of times, the outcome 4 occurs in 1/6 of the cases. In this light we can say that the probability of rolling a 4 with this die is 1/6. But suppose that, instead of repeating the experiment forever, we roll the die only once. Now it still seems natural to say that there’s a 1/6 chance of rolling a 4, but in fact either we’ll roll a 4 … or we won’t. Can it make sense to assign a probability to a single outcome? Charles Sanders Peirce writes:

If a man had to choose between drawing a card from a pack containing twenty-five red cards and a black one, or from a pack containing twenty-five black cards and a red one, and if the drawing of a red card were destined to transport him to eternal felicity, and that of a black one to consign him to everlasting woe, it would be folly to deny that he ought to prefer the pack containing the larger proportion of red cards, although, from the nature of the risk, it could not be repeated. It is not easy to reconcile this with our analysis of the conception of chance. But suppose he should choose the red pack, and should draw the wrong card, what consolation would he have? He might say that he had acted in accordance with reason, but that would only show that his reason was absolutely worthless. And if he should choose the right card, how could he regard it as anything but a happy accident? He could not say that if he had drawn from the other pack, he might have drawn the wrong one, because an hypothetical proposition such as, ‘if A, then B,’ means nothing with reference to a single case.

Peirce’s solution to this problem is curiously humanistic. Our inferences must extend to include the interests of all races in all epochs. A soldier storms a fort knowing that he may die but that his zeal, if carried through the regiment, will win the day. The man trying to draw a red card “cannot be logical so long as he is concerned only with his own fate” but “should care equally for what was to happen in all possible cases … and would draw from the pack with the most red cards.”

“He who would not sacrifice his own soul to save the whole world, is, as it seems to me, illogical in all his inferences, collectively.”

Pursuit of Truth

http://commons.wikimedia.org/wiki/File:Deiker_Jagdbare_Tiere_1093204.jpg

Can animals reason without using language? Sextus Empiricus writes:

[Chrysippus] declares that the dog makes use of the fifth complex indemonstrable syllogism when, on arriving at a spot where three ways meet …, after smelling at the two roads by which the quarry did not pass, he rushes off at once by the third without stopping to smell. For, says the old writer, the dog implicitly reasons thus: ‘The animal went either by this road, or by that, or by the other: but it did not go by this or that, therefore he went the other way.’

So, perhaps. There’s a limit, though.
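Chrysippus's dog is applying disjunction elimination. If we take the reasoning to be just this propositional step, it can be written out formally; a sketch in Lean 4 (core language, no extra libraries):

```lean
-- The dog's syllogism: the quarry went by road A, B, or C;
-- it did not go by A, and it did not go by B; therefore it went by C.
example (A B C : Prop) (h : A ∨ B ∨ C) (hna : ¬A) (hnb : ¬B) : C :=
  h.elim (fun ha => absurd ha hna)
         (fun hbc => hbc.elim (fun hb => absurd hb hnb) (fun hc => hc))
```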

The Lakes of Wada

http://commons.wikimedia.org/wiki/File:Lakes_of_Wada.jpg

Find a square island and establish a blue lake on it, bringing blue water within a certain distance of every point on the island’s remaining dry land. Then create a red lake, bringing red water even closer to every point on the remaining land, and a green lake bringing green water still closer.

If you continue this indefinitely, irrigating the island more and more aggressively from each lake in turn, you’ll reach the perplexing state where the three lakes have the same boundary. Japanese mathematician Kunizo Yoneyama offered this example in 1917.

Jury Duty

http://commons.wikimedia.org/wiki/File:The_Jury_by_John_Morgan.jpg

From Gábor J. Székely’s Paradoxes in Probability Theory and Mathematical Statistics, via Mark Chang’s Paradoxology of Scientific Inference:

A, B, C, D, and E make up a five-member jury. They’ll decide the guilt of a prisoner by a simple majority vote. The probability that A gives the wrong verdict is 5%; for B, C, and D it’s 10%; for E it’s 20%. When the five jurors vote independently, the probability that they’ll bring in the wrong verdict is about 1%. But if E (whose judgment is poorest) abandons his autonomy and echoes the vote of A (whose judgment is best), the chance of an error rises to 1.5%.

Even more surprisingly, if B, C, D, and E all follow A, the verdict simply inherits A’s 5% error rate, five times worse than if they vote independently, even though A’s judgment is the best on the jury. Chang writes, “This paradox implies it is better to have your own opinion even if it is not as good as the leader’s opinion, in general.”
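These figures can be verified by direct enumeration. A sketch (the function name and the vote-weight device are mine: a juror who copies another is modeled by giving the leader extra votes and dropping the copycat):

```python
from itertools import product

# Error probabilities for jurors A, B, C, D, E.
P = [0.05, 0.10, 0.10, 0.10, 0.20]

def p_wrong(p_err, weights=None):
    """Probability that a weighted majority of independent voters errs."""
    weights = weights or [1] * len(p_err)
    total_votes = sum(weights)
    prob_wrong = 0.0
    for outcome in product([0, 1], repeat=len(p_err)):  # 1 = voter errs
        prob = 1.0
        for wrong, p in zip(outcome, p_err):
            prob *= p if wrong else 1 - p
        wrong_votes = sum(w for wrong, w in zip(outcome, weights) if wrong)
        if wrong_votes > total_votes / 2:
            prob_wrong += prob
    return prob_wrong

independent = p_wrong(P)                            # ~0.0099, about 1%
e_copies_a  = p_wrong(P[:4], weights=[2, 1, 1, 1])  # A casts E's vote too: 1.45%
all_copy_a  = p_wrong(P[:1], weights=[5])           # everyone echoes A: 5%
```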