Obviously humans and animals have values, at least in the sense of reward and punishment or positive and negative outcomes—does anyone think that this is of practical importance for building processes that can form accurate beliefs about the world?
Practical importance for what purpose? Whatever the purpose is, adding heuristics that optimize the learning heuristics for better fulfillment of that purpose would be fruitful for it.
It would be of practical importance to the extent that the original implementation of the learning heuristics is suboptimal, and to the extent that implementable learning-heuristic-improving heuristics can act on that suboptimality. If you are talking about autonomous agents, self-improvement is a necessity, because you need open-ended potential for further improvement. If you are talking about non-autonomous tools people write, it is often difficult to construct useful heuristic-improvement heuristics. But of course their partially optimized structure is already chosen in light of the values they are optimized for, a purpose that resides in the designers.
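As a minimal sketch of what such a heuristic-improvement heuristic could look like: below, the "learning heuristic" is gradient descent with a fixed learning rate on a toy objective, and the meta-heuristic is simple hill climbing on that rate. All of the names and numbers (the objective, run_learner, improve_heuristic) are illustrative assumptions, not a claim about any particular system.

```python
import random

def run_learner(lr, steps=100):
    """Learning heuristic: minimize f(x) = x^2 by gradient steps of size lr.
    Returns the final loss, i.e. how well the heuristic fulfilled its purpose."""
    x = 5.0
    for _ in range(steps):
        x -= lr * 2 * x  # gradient of x^2 is 2x
    return x * x

def improve_heuristic(lr=0.001, rounds=30):
    """Meta-heuristic: perturb the learning rate and keep changes that
    reduce the final loss. This only pays off to the extent the initial
    lr is suboptimal, mirroring the point above."""
    best_loss = run_learner(lr)
    for _ in range(rounds):
        candidate = lr * random.uniform(0.5, 2.0)  # random perturbation
        loss = run_learner(candidate)
        if loss < best_loss:  # hill climbing: keep only improvements
            lr, best_loss = candidate, loss
    return lr, best_loss

print(improve_heuristic())  # lr climbs toward the optimum; loss shrinks
```

If the initial learning rate already happened to be near-optimal, the meta-loop would accomplish essentially nothing, which is the sense in which the value of self-improvement depends on the suboptimality of the original implementation.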