Ah, if you’re looking for newer stuff, Stanovich’s work has been really useful for drawing fine-grained distinctions between rationality and intelligence, and together with Evans he did great work advancing dual-process theory. There is much more work to be done, of course. Perhaps others can help out with other useful citations?
Reminder: the downvote button is not a disagreement button. Knock it off, whoever you are.
Thanks, appreciate it!
Unfortunately, it still doesn’t answer the question I actually asked. I know damn well I am not a perfectly rational robot. What I’m asking is: what does the cog-sci say about how I can check that my current conscious desires or goals accord with what is actually, objectively, good for me? “You are not a perfect goal-seeking robot” merely restates my problem in declarative form; it doesn’t answer it.
Is this a question that cog-sci can answer?
Let us assume that I decide I really like a new type of food. It may be apples; it may be cream doughnuts. If I ask “is eating this good for me?”, then surely that is a question for a dietitian rather than a cognitive psychologist?
Assume that you must construct the “actually good” from my actual mind and its actual terminal preferences.
How about introducing the possibility of shifting terminal preferences?
Am I to be restricted to your current knowledge, and to the deductions you have made from information available to you, or can I introduce principles of, for example, physics or information theory or even dietary science not currently present in your mind?
I do believe this answers the other question I asked of you, regarding how far ahead to place the future-you who would endorse or not endorse a given course of action; I understand from this comment that the future-you to choose is the limit of future-you at time t as t approaches infinity. This implies that one possible answer to your question would be to imagine yourself at age ninety, and consider what you would most or least appreciate having done at that age. (When I try this, I find that exercise and a healthy diet become very important; I do not wish to be old and frail at ninety. Old may not be avoidable, but frail certainly is...)
You are utterly unlimited in introducing additional knowledge. It just has to be true, is all. Introducing the dietary science on whether I should eat tuna, salmon, hummus, meat, egg, or just plain salad and bread for lunch is entirely allowed, despite my currently going by a heuristic of “tuna sandwiches with veggies on them are really tasty and reasonably healthful.”
This is roughly my line of reasoning as well. What I find interesting is that:
A) People refuse to employ this simple and elegant line of reasoning when figuring out what to do, as if a decision-making criterion must be nonnatural.
B) Actually making the prediction is very hard, and what we practically end up doing is using heuristics that roughly guarantee, “I will not regret this decision too much, unless I gain sufficient additional knowledge to override almost everything I currently know.”
Hm, I wonder about orienting toward the 90-year-old self. When I model myself at 90, I would want to know that I lived a life I consider fulfilling, and that may involve exercise and a healthy diet, but also good social connections and the knowledge that I made a positive impact on the world, for example through Intentional Insights. Ideally, I would continue to live beyond 90, though, and that may involve cryonics or maybe even a friendly AI helping us all live forever—go MIRI!
Uhhh… sounds good to me. Well, sounds like the standard LW party-line to me, but it’s also all actually good. Sometimes the simple answer is the right one, after all.
Hmmm. This makes finding the correct answer very tricky, since in order to be completely correct I have to factor in the entirety of, well, everything that is true.
The best I’d be able to practically manage is heuristics.
Other people are more alien than… well, than most people realise. I often find data to support this hypothesis.
I don’t think it’s possible to do better than heuristics; the only question is how good your heuristics are. And your heuristics depend on your knowledge: learning more, whether through formal education or practical experience, will help refine them.
Hmmm… which is a pretty good reason for further education.