An agent optimized to humanity’s CEV would instantly recognize that trying to skip ahead would be incredibly harmful to our present psychology; without dreams—however irrational—we don’t tend to develop well in terms of CEV. If all of our values break down over time, a superintelligent agent optimized for our CEV will plan for the day our dreams are broken, and may be able to give us a helping hand and a pat on the back to let us know that there are still reasons to live.
This sounds like the same manner of fallacy associated with determinism: ignorance of the future being derived from the past through the present, rather than by a timeless external “Determinator.”
I think you’re vastly underestimating the magnitude of that “helping hand.”
By way of analogy… a superintelligent agent optimized for (or, more to the point, optimizing for) solar system colonization might well conclude that establishing human colonies on Mars is incredibly harmful to our present physiology, since without oxygen we don’t tend to develop well in terms of breathing. It might then develop techniques to alter our lungs, or alter the environment of Mars in such a way that our lungs can function better there (e.g., oxygenate it).
An agent optimizing for something that relates to our psychology, rather than our physiology, might similarly develop techniques to alter our minds, or alter our environment in such a way that our minds can function better.
I think you’re vastly underestimating the magnitude of my understanding.
In the context of something so shocking as having our naive childhood dreams broken, is there some superintelligent solution that’s supposed to be more advanced than consoling you in your moment of grief? To be completely honest, I wouldn’t expect a humanity-CEV agent even to bother trying to console us; we can do that for each other, and it knows this well in advance. It’s got bigger problems to worry about.
Do you mean to suggest that a superintelligent agent wouldn’t be able to foresee or provide solutions to some problem that we are capable of dreaming up today?
You’ll have to forgive me, but I’m not seeing what it is about my comment that gives you reason to think I’m misunderstanding anything here. Do you expect that an agent optimized to humanity’s CEV is going to use suboptimal strategies for some reason? Will it give us a helping interstellar spaceship when all it really needed to effectively solve whatever spaceflight-unrelated microproblem exists in our psychology at that moment was a simple pat on the back?
is there some superintelligent solution that’s supposed to be more advanced than consoling you in your moment of grief?
Yes.
Do you mean to suggest that a superintelligent agent wouldn’t be able to foresee or provide solutions to some problem that we are capable of dreaming up today?
No.
Do you expect that an agent optimized to humanity’s CEV is going to use suboptimal strategies for some reason?
No.
Will it give us a helping interstellar spaceship when all it really needed to effectively solve whatever spaceflight-unrelated microproblem exists in our psychology at that moment was a simple pat on the back?
No.
Fair enough.