Monday, August 06, 2007

Even the neural net incorrectly believes it sees a pattern

You can click through for the actual paper. Abstract excerpt:

I show that while the network initially makes weak predictions (in the middle of the probability range) regardless of input, after observing randomly generated data it learns to be overconfident in the sense that when presented with other, unrelated random data it makes strong predictions. The model matches behavioral data in that it shows overconfidence growing with experience and then, eventually, declining. The model shows how overconfidence, far from being a surprising fallacy, can be seen as a natural outgrowth of statistical over-fitting in the brain.
The post that this title links to introduces papers by the up-and-coming economist E. Glen Weyl.
