A few days ago, I saw an interesting article on a site somewhat related to LessWrong. Unfortunately I didn't have time to read it, so I bookmarked it.
Then my computer crashed, I lost my latest bookmarks, and I have now spent two hours trying to find the article, without luck. Here is its idea, in a nutshell: we humans are a kind of learning machine, trying to build a model of "reality". In ML, overfitting means that by insisting too much on fitting the data, we actually get worse out-of-sample performance (because we start to fit the modeling noise and the stochastic noise). Carrying this ML idea over to the human realm, we can argue that insisting too much on consistency can be a liability rather than an asset in our model-building.
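To make the overfitting point concrete, here is a minimal sketch (my own illustration, not from the article I'm looking for): a high-degree polynomial fit to a few noisy samples of a hypothetical cubic "reality" drives the training error down while the out-of-sample error goes up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "reality": a simple cubic signal, observed with noise.
def true_signal(x):
    return 0.5 * x**3 - x

x_train = np.sort(rng.uniform(-2, 2, 20))
y_train = true_signal(x_train) + rng.normal(0, 0.5, x_train.size)

x_test = np.sort(rng.uniform(-2, 2, 200))
y_test = true_signal(x_test) + rng.normal(0, 0.5, x_test.size)

for degree in (3, 12):
    # Fit a polynomial of the given degree to the noisy training data.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")

# Typically the degree-12 fit achieves a lower training error but a higher
# test error: it has started to model the noise rather than the signal.
```

The analogy the article apparently made is that demanding too much consistency from our mental models is like cranking up the polynomial degree: the model explains past observations ever more exactly while generalizing worse.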
Does that description ring anyone's bells? If so, please link the article :)