Consider any of the many disanalogies one could mention. I bet organizations would work a lot better if only they could brainwash employees into valuing nothing but the good of the organization—and that’s just one non-nugatory difference between AIs (uploads or de novo) and organizations.
The idea that machine intelligences won’t delegate work to other agents with different values seems terribly speculative to me. I don’t think it counts as admissible evidence.
Why would they permit agents with different values? If you’re implicitly thinking in some Hansonian upload model, modifying an instance to share your values and be trustworthy would be quite valuable and a major selling point, since so much of the existing economy is riven with principal-agent problems and devoted to ‘guard labor’.
Why would they permit agents with different values?
Agents may not fuse together for the same reason that companies today do not: they are prevented from doing so by a monopolies commission that exists to preserve diversity and prevent a monoculture. In which case, they’ll have to trade with and delegate to other agents to get what they want.
If you’re implicitly thinking in some Hansonian upload model [...]
It’s at least possible that the machine intelligences would have some respect for the universe being bigger than their points of view, so that there’s some gain from permitting variation. It’s hard to judge how much variation is a win, though.
That doesn’t sound like me; see Tim Tyler, “Against whole brain emulation”.