I think you’re vastly underestimating the magnitude of that “helping hand.”
By way of analogy… a superintelligent agent optimized for (or, more to the point, optimizing for) solar system colonization might well conclude that establishing human colonies on Mars is incredibly harmful to our present physiology, since without oxygen we don’t tend to do very well in terms of breathing. It might then develop techniques to alter our lungs, or alter the environment of Mars in such a way that our lungs can function better there (e.g., oxygenate it).
An agent optimizing for something that relates to our psychology, rather than our physiology, might similarly develop techniques to alter our minds, or alter our environment in such a way that our minds can function better.
I think you’re vastly underestimating the magnitude of my understanding.
In the context of something so shocking as having our naive childhood dreams broken, is there some superintelligent solution that’s supposed to be more advanced than consoling you in your moment of grief? To be completely honest, I wouldn’t expect a humanity CEV agent to even bother trying to console us; we can do that for each other, and it knows this well in advance. It’s got bigger problems to worry about.
Do you mean to suggest that a superintelligent agent wouldn’t be able to foresee or provide solutions to some problem that we are capable of dreaming up today?
You’ll have to forgive me, but I’m not seeing what it is about my comment that gives you reason to think I’m misunderstanding anything here. Do you expect an agent optimized for humanity’s CEV to use suboptimal strategies for some reason? Will it give a helping interstellar spaceship when all it really needed to do to solve some spaceflight-unrelated microproblem in our psychology was a simple pat on the back?
is there some superintelligent solution that’s supposed to be more advanced than consoling you in your moment of grief?
Yes.
Do you mean to suggest that a superintelligent agent wouldn’t be able to foresee or provide solutions to some problem that we are capable of dreaming up today?
No.
Do you expect an agent optimized for humanity’s CEV to use suboptimal strategies for some reason?
No.
Will it give a helping interstellar spaceship when all it really needed to do to solve some spaceflight-unrelated microproblem in our psychology was a simple pat on the back?
No.
Fair enough.