In fact, a superintelligent AI would easily see that the Pebble people are talking about prime numbers even if they couldn’t see that themselves. So as long as they programmed the AI to make “correct” heaps, it certainly would not make heaps of 8, 9, or 1957 pebbles. If anything, this supports my position: if you program an AI that can actually communicate with human beings, you will naturally program it with a similar morality, without even trying.
Apart from that, this post seems to support TGGP’s position. Even if there is some computation (i.e. primeness) that actually underlies the Pebble people’s judgments, there is no particular reason to use that computation instead of some other. So if a random AI were programmed to purposely make non-prime heaps, there would be no objective problem with this. Allan Crossman’s claim that “it’s positively dangerous to believe in an objective account of morality” is therefore itself a subjective statement. It’s dangerous in comparison to your subjective idea of which heaps are correct, yes, but objectively there is nothing dangerous about non-prime heaps. So there’s no harm in programming an AI without regard for Friendliness. If something matters, it will find it, and if nothing matters, well then nothing matters, not even being made into paperclips.
You are smart enough to tell that 8 pebbles is incorrect. Knowing that, will you dedicate your life to sorting pebbles into prime-numbered piles, or are you going to worry about humans? How can the pebble-sorters be so sure that they won’t get an AI like you?
Nobody’s arguing that a superintelligent AI won’t know what we want. The problem is that it might not care.