I’m increasingly baffled as to why AI is always brought into discussions of metaethics. Societies of rational agents need ethics to regulate their conduct. Our AIs aren’t sophisticated enough to live in their own societies. A wireheading AI isn’t even going to be able to survive “in the wild”. If you could build an artificial society of AIs, then the question of whether they spontaneously evolved ethics would be a very interesting and relevant datum. But AIs as we know them aren’t good models for the kinds of entities to which morality is relevant. And Clippy is a particularly exceptional example of an AI. So why do people keep saying “Ah, but Clippy...”...?
Well, in this case it’s because the post I was responding to mentioned Clippy a couple of times, so I thought it’d be worthwhile to mention how the little bugger fits into the overall picture of value stability. It’s indeed somewhat tangential to the main point I was trying to make; paperclippers don’t have anything to do with value drift (they’re an example of a different failure mode in artificial ethics) and they’re unlikely to evolve from a changing value system.
Key word here being “societies”. That is, not singletons. A lot of the discussion on metaethics here is implicitly aimed at FAI.
Sorry... did you mean FAI is about societies, or FAI is about singletons?
But if ethics does emerge as an organisational principle in societies, that’s all you need for FAI. You don’t even need to worry about one sociopathic AI turning unfriendly, because the majority will be able to restrain it.
The idea is that FAI is about singletons, because the first one to foom wins.
ETA: also, rational agents may be ethical in societies, but there’s no advantage to being an ethical singleton.
UFAI is about singletons. If you have an AI society whose members compare notes and share information—which is instrumentally useful for them anyway—you reduce the probability of a singleton fooming.
Any agent that fooms becomes a singleton. Thus, it doesn’t matter if they acted nice while in a society; all that matters is whether they act nice as a singleton.
I don’t get it: any agent that fooms becomes superintelligent. Its values don’t necessarily change at all, nor does its connection to its society.
An agent in a society is unable to force its values on the society; it needs to cooperate with the rest of society. A singleton is able to force its values on the rest of society.