I think Scott’s argument is totally reasonable and well-stated, and I agree with his conclusion. So it was pretty dismaying to see how many of his commenters are dismissing the argument completely, making arguments that were demolished in Eliezer’s Overcoming Bias sequences.
Some familiar arguments I saw in the comments:
Intelligence, like, isn’t even real, man.
If a machine is smarter than humans, it has every right to destroy us.
This is weird, obviously you are in a cult.
Machines can’t be sentient, therefore AI is impossible for some reason.
AIs can’t possibly get out of the box, we would just pull the plug.
Who are we to impose our values on an AI? That’s like something a mean dad would do.
There are also better arguments, like:
“We wouldn’t build a god AI and put it in charge of the world”
“We would make some sort of attempt at installing safety overrides”
“Tool AI is safer, easier to build, and easier to make safe, and wouldn’t need goals to be aligned with ours”
“We’ll be making ourselves smarter in parallel”