Suppose a local bar has 100 regular patrons. The bar is rather small, and going there is enjoyable on a given night only if fewer than 60 people show up. This is a problem: I want to go to the bar, but I expect to enjoy it only if you don’t come. But I know that you’re thinking the same thing about me. If no one communicates in advance, how many people will tend to turn up at the bar?

In a 1994 computer experiment, Stanford University economist W. Brian Arthur assigned each patron a set of plausible predictive rules on which it might base its decision. One rule might predict that next week’s attendance will be the same as last week’s, while another might take a rounded average of the last four weeks, and so on. Each week, a patron acted on whichever of its rules was currently most accurate, going to the bar only if that rule forecast attendance under 60; afterward it demoted the rules that had predicted badly, promoted the more successful ones, and revised these ratings continually.

What he found is that the mean attendance converges to about 60, forming an “emergent ecology” that Arthur said is “almost organic in nature.” The population of active predictors splits into a 60/40 ratio (forecasts below the threshold versus above it), even though its membership keeps changing forever. “To get some understanding of how this happens, suppose that 70 percent of their predictors forecasted above 60 for a longish time. Then on average only 30 people would show up; but this would validate predictors that forecasted close to 30 and invalidate the above-60 predictors, restoring the ‘ecological’ balance among predictions, so to speak.”
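The dynamic is easy to reproduce in miniature. The sketch below is not Arthur’s actual code: the two rule families (“same as k weeks ago” and “average of the last k weeks”), the discounted scoring scheme, and all the parameters are simplifying assumptions standing in for his richer predictor set. It only illustrates the feedback loop in which agents act on their currently best rule and re-rate their rules against realized attendance.

```python
import random

N_PATRONS = 100   # regular patrons
THRESHOLD = 60    # the night is enjoyable only if attendance < 60
N_RULES = 8       # predictors per patron (an assumed, small set)
HISTORY = 12      # weeks of remembered attendance
WEEKS = 500

rng = random.Random(0)

def make_rule(rng):
    """Build one predictor: either 'same as k weeks ago' or a rounded
    average of the last k weeks. (Hypothetical rule families; the paper's
    agents draw from a richer pool.)"""
    kind = rng.choice(["mirror", "average"])
    k = rng.randint(1, 4)
    if kind == "mirror":
        return lambda h: h[-k]
    return lambda h: round(sum(h[-k:]) / k)

# Seed weeks of attendance so every rule has enough history to look at.
history = [rng.randint(0, N_PATRONS) for _ in range(HISTORY)]

# Each patron: a list of rules plus a running accuracy score per rule.
patrons = [([make_rule(rng) for _ in range(N_RULES)], [0.0] * N_RULES)
           for _ in range(N_PATRONS)]

for week in range(WEEKS):
    attendance = 0
    for rules, scores in patrons:
        best = max(range(N_RULES), key=lambda i: scores[i])
        if rules[best](history) < THRESHOLD:  # go only if you expect room
            attendance += 1
    # Re-rate every rule against the realized attendance: discounted
    # score minus this week's absolute forecast error.
    for rules, scores in patrons:
        for i, rule in enumerate(rules):
            scores[i] = 0.9 * scores[i] - abs(rule(history) - attendance)
    history.append(attendance)

print(sum(history[-100:]) / 100)  # mean attendance over the last 100 weeks
```

Note that neither extreme can persist: if everyone stayed home, the rules would soon forecast low attendance and send everyone back; if everyone went, the rules would forecast a crowd and keep everyone away. The self-defeating character of any shared forecast is what pins attendance near the threshold.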

This is heartening to see, because life is full of such murky decisions. “There is no deductively rational solution — no ‘correct’ expectational model,” Arthur writes. “From the agents’ viewpoint, the problem is ill-defined, and they are propelled into a world of induction.”

(W. Brian Arthur, “Inductive Reasoning and Bounded Rationality,” *American Economic Review* 84:2 [May 1994], 406–411.)