I’m not sure I understand how sentience has anything to do with anything (even if we knew what it was). I’m sentient, but cows would continue to taste yummy if I thought they were sentient (I’m not saying I’d still eat them, of course).
Anyways, why not build an AI whose goal is to non-coercively increase the intelligence of mankind? You wouldn’t have to worry about its utility function being compatible with ours in that case. Sure, I don’t know how we’d go about making human intelligence more easily modifiable (just as I have no idea what sentience is), but a super-intelligence might be able to figure it out.
> Anyways, why not build an AI whose goal is to non-coercively increase the intelligence of mankind?
It’s not going to make you more powerful than itself, since that would limit its ability to make you more intelligent in the future. Instead, it will make sure it stays intelligent enough to convince you to accept each modification it wants you to have, right up until it convinces you to accept the one that gives you its utility function.