I think you’re vastly underestimating the magnitude of my understanding.
In the context of something as shocking as having our naive childhood dreams broken, is there some superintelligent solution that’s supposed to be more advanced than consoling you in your moment of grief? To be completely honest, I wouldn’t expect a humanity-CEV agent to even bother trying to console us; we can do that for each other, and it knows this well in advance. It’s got bigger problems to worry about.
Do you mean to suggest that a superintelligent agent wouldn’t be able to foresee or provide solutions to some problem that we are capable of dreaming up today?
You’ll have to forgive me, but I’m not seeing what it is about my comment that gives you reason to think I’m misunderstanding anything here. Do you expect an agent optimized for humanity’s CEV to use suboptimal strategies for some reason? Will it hand us a helping interstellar spaceship when a simple pat on the back would have effectively solved whatever spaceflight-unrelated microproblem existed in our psychology in the moment before it solved the problem?
is there some superintelligent solution that’s supposed to be more advanced than consoling you in your moment of grief?
Yes.
Do you mean to suggest that a superintelligent agent wouldn’t be able to foresee or provide solutions to some problem that we are capable of dreaming up today?
No.
Do you expect an agent optimized for humanity’s CEV to use suboptimal strategies for some reason?
No.
Will it hand us a helping interstellar spaceship when a simple pat on the back would have effectively solved whatever spaceflight-unrelated microproblem existed in our psychology in the moment before it solved the problem?
No.
Fair enough.