I’m thinking relative to what would happen if we tried to hard-code the AI with a utility function such as hedonistic utilitarianism. That would be much, much worse than the NSA. The worst thing that would happen with the NSA is an aristocratic galactic police state. Right? Tell me how you disagree.
In the space of possible futures, it is much better than e.g. tiling the universe with orgasmium. So much better, in fact, that in the grand scheme of things it counts as OK.
No, I really don’t think so.
The NSA does invest money into building artificial intelligence. Having a powerful NSA might increase the chances of UFAIs.
To quote Orwell: “If you want a vision of the future, imagine a boot stamping on a human face—forever.”
That’s not an “OK future”.
I evaluate an “OK future” on an absolute scale, not a relative one.
Relative scales lead you there.