Haha, but seriously. The NSA probably meets the technical definition of friendliness, right? If it was given ultimate power, we would have an OK future.
I’m thinking relative to what would happen if we tried to hard-code the AI with a utility function like hedonistic utilitarianism. That would be much, much worse than the NSA. The worst thing that would happen with the NSA is an aristocratic galactic police state. Right? Tell me how you disagree.
In the space of possible futures, it is much better than e.g. tiling the universe with orgasmium. So much better, in fact, that in the grand scheme of things it counts as OK.
Then why haven’t they?
Because they are friendly?
Seriously, they probably do believe in upholding the law and sticking to their original mission, at least to some extent.
/facepalm
> Haha, but seriously. The NSA probably meets the technical definition of friendliness, right? If it was given ultimate power, we would have an OK future.
No, I really don’t think so.
> I’m thinking relative to what would happen if we tried to hard-code the AI with a utility function like hedonistic utilitarianism. That would be much, much worse than the NSA. The worst thing that would happen with the NSA is an aristocratic galactic police state. Right? Tell me how you disagree.
The NSA does invest money in building artificial intelligence. Having a powerful NSA might increase the chances of UFAIs.
To quote Orwell: “If you want a vision of the future, imagine a boot stamping on a human face—forever.”
That’s not an “OK future”.
> In the space of possible futures, it is much better than e.g. tiling the universe with orgasmium. So much better, in fact, that in the grand scheme of things it counts as OK.
I evaluate an “OK future” on an absolute scale, not relative.
Relative scales are what lead you to futures like that.
It would resemble declaring war.
https://xkcd.com/792/ might explain it. ;)