I’m having trouble seeing how nonparametric methods can deal with regions far away from existing data points.
With very wide predictive distributions, if they are Bayesian nonparametric methods. See the 95% credible intervals (shaded pink) in Figure 2 on page 4, and in Figure 3 on page 5, of Mark Ebden’s Gaussian Processes for Regression: A Quick Introduction.
(Carl Edward Rasmussen at Cambridge and Arman Melkumyan at the University of Sydney maintain sites with more links about Gaussian processes and Bayesian nonparametric regression. Also see Bayesian neural networks, which can justifiably extrapolate sharper predictive distributions than Gaussian process priors can.)
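The widening of the credible intervals in Ebden's figures can be reproduced in miniature. The sketch below is a minimal numpy implementation of GP regression with a squared-exponential kernel; the toy training data and kernel parameters are illustrative assumptions, not anything from the references above. Near the training points the predictive standard deviation is small; far from them it reverts to the prior standard deviation.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return signal_var * np.exp(-0.5 * sq_dists / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise_var=1e-4):
    """Posterior mean and std of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    # Predictive variance: prior variance minus what the data explains.
    var = np.ones(len(x_test)) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

x_train = np.array([-1.0, 0.0, 1.0])
y_train = np.sin(x_train)
# One test point inside the data, one far outside it.
mean, std = gp_posterior(x_train, y_train, np.array([0.5, 10.0]))
# std[0] is small (interpolation); std[1] is ~1.0, the prior std,
# and mean[1] is ~0.0, the prior mean: very wide uncertainty far away.
```

This is exactly the behaviour the answer above appeals to: the model does not pretend to know anything in regions far from the data.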
[. . .] we look at two quantitative tests of Gaussian processes as an account of human function learning: reproducing the order of difficulty of learning functions of different types, and extrapolation performance. [. . .]
Predicting and explaining people’s capacity for generalization – from stimulus-response pairs to judgments about a functional relationship between variables – is the second key component of our account. This capacity is assessed in the way in which people extrapolate, making judgments about stimuli they have not encountered before. [. . .] Both people and the model extrapolate near optimally on the linear function, and reasonably accurate extrapolation also occurs for the exponential and quadratic function. However, there is a bias towards a linear slope in the extrapolation of the exponential and quadratic functions[. . .]
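The near-optimal linear extrapolation described in the excerpt falls out of a GP whose kernel contains a linear (dot-product) component, which makes the posterior mean equivalent to Bayesian linear regression. The sketch below uses a pure linear kernel as an illustrative assumption; it is not the exact kernel mixture used in the paper.

```python
import numpy as np

def linear_kernel(a, b, var_w=1.0, var_b=1.0):
    """Dot-product covariance: a GP prior over linear functions w*x + b."""
    return var_b + var_w * np.outer(a, b)

def gp_mean(x_train, y_train, x_test, kernel, noise_var=1e-6):
    """Posterior predictive mean of a zero-mean GP under the given kernel."""
    K = kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = kernel(x_train, x_test)
    return K_s.T @ np.linalg.solve(K, y_train)

# Train on y = 2x + 1 over [0, 1], then extrapolate well outside the data.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + 1.0
pred = gp_mean(x_train, y_train, np.array([5.0]), linear_kernel)
# pred[0] is close to the true value 2*5 + 1 = 11.
```

With an RBF kernel instead, the same code would revert toward the prior mean at x = 5; the choice of kernel is what encodes the inductive bias toward linear extrapolation.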
The first author, Tom Griffiths, is the director of the Computational Cognitive Science Lab at UC Berkeley, and Lucas and Williams are graduate students there. The work of the Computational Cognitive Science Lab is very close to the mission of Less Wrong:
The basic goal of our research is understanding the computational and statistical foundations of human inductive inference, and using this understanding to develop both better accounts of human behavior and better automated systems [. . .]
For inductive problems, this usually means developing models based on the principles of probability theory, and exploring how ideas from artificial intelligence, machine learning, and statistics (particularly Bayesian statistics) connect to human cognition. We test these models through experiments with human subjects[. . .]
Probabilistic models provide a way to explore many of the questions that are at the heart of cognitive science. [. . .]
See also Modeling human function learning with Gaussian processes, by Tom Griffiths, Chris Lucas, Joseph Jay Williams, and Michael Kalish, in NIPS 21 (the paper quoted above).
Griffiths’s page recommends the foundations section of the lab publication list.