Both really. How much time should we dedicate to making our map fit the territory before we start sacrificing optimality? Spend too long trying to improve epistemic rationality and you begin to sacrifice your ability to get to work on actual goal-seeking.
On the other hand, if you don’t spend enough time improving your map, you may pursue your goals inefficiently or ineffectively.
We’re still thinking of ways to quantify this trade-off. Largely it depends on the specific goal, the map/territory in question, and the person.
Anybody else have some ideas?
In AI, this is known as the exploration/exploitation problem. You could try Googling “Multi-armed bandit” for an extremely theoretical view.
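To make the exploration/exploitation trade-off concrete, here’s a minimal epsilon-greedy bandit sketch in Python. The three arms, their payoffs, and the epsilon value are all made up for illustration; real bandit algorithms (UCB, Thompson sampling, etc.) are more sophisticated, but the tension is the same: keep pulling the best-looking arm, or pull an uncertain one to learn more.

```python
import random

def epsilon_greedy(true_means, steps=1000, epsilon=0.1):
    """With probability epsilon explore a random arm; otherwise exploit
    the arm with the best payoff estimate so far."""
    estimates = [0.0] * len(true_means)  # running payoff estimate per arm
    counts = [0] * len(true_means)       # pulls per arm
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(true_means))                        # explore
        else:
            arm = max(range(len(true_means)), key=lambda i: estimates[i])  # exploit
        reward = random.gauss(true_means[arm], 1.0)  # noisy payoff from that arm
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # update running mean
        total += reward
    return estimates, total

# Three hypothetical strategies whose true payoffs (0.2, 0.5, 0.9) we don't know in advance:
print(epsilon_greedy([0.2, 0.5, 0.9]))
```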
My biggest recommendation is to do a breadth-first search, using Fermi calculations for value of information. If people would be interested, I could maybe write a guide on how to do this more concretely?
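To give a flavor of what I mean by a Fermi calculation for value of information, here’s a toy back-of-the-envelope check: is an hour of extra research worth it before acting? Every number is a made-up placeholder you’d swap for your own rough guesses.

```python
# All numbers are illustrative placeholders, not recommendations.
p_wrong_now = 0.3            # rough chance my current plan is the wrong one
cost_if_wrong = 2000         # rough cost (say, in dollars) of acting on the wrong plan
p_research_catches_it = 0.5  # chance an hour of research would catch the mistake
cost_of_research = 50        # value of the hour spent researching instead of acting

expected_value_of_info = p_wrong_now * p_research_catches_it * cost_if_wrong
print(expected_value_of_info > cost_of_research)  # 300 > 50, so the research looks worth it here
```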
First, understand the domain of the problem so you can identify potential downsides. Is this area Black Swan-prone? Does this resemble Newcomb’s problem at all? What (do I think) is the shape of the risk here?
For most things people need to do in daily life, we might just weigh the cost of further optimization against the cost of remaining ignorant and being wrong as a result of that ignorance. It can be good to be aware of the biases that Prospect Theory describes: am I passing up a reasonable shot at winning big because I’m so afraid of losing pennies?
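As a toy illustration of that last question (numbers invented): a 10% shot at a big gain versus a near-certain tiny loss can have clearly positive expected value, yet loss aversion makes it tempting to pass up.

```python
# Invented numbers: a gamble with a small chance of a big win and a large chance of a tiny loss.
p_win, big_win = 0.10, 500    # 10% chance of gaining 500
p_loss, small_loss = 0.90, 5  # 90% chance of losing 5

expected_value = p_win * big_win - p_loss * small_loss
print(expected_value)  # 45.5: positive, even though the loss is far more likely than the win
```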