I’m having trouble recalling many instances of people here using “AI Alignment” in a way that would be best described as “making an AI that builds utopia and stuff”. Maybe Coherent Extrapolated Volition comes close.
My general understanding is that when people here talk about AI Alignment, they are talking about something closer to what you call “making an AI that does what we mean when we say ‘minimize rate of cancer’ (that is, actually curing cancer in a reasonable and non-solar-system-disassembling way)”.
On a somewhat related point, I’d say that “making an AI that does what we mean when we say ‘minimize rate of cancer’ (that is, actually curing cancer in a reasonable and non-solar-system-disassembling way)” is entirely encapsulated under “making an AI that builds utopia and stuff”, since it is very unlikely that an AI builds a utopia while misunderstanding its intended goal that badly.
You would likely enjoy reading through this (short) post: Clarifying inner alignment terminology, and I expect it would help you get a better understanding of what people mean when they are discussing AI Alignment.
Another resource you might enjoy is reading through the AI tag and its subtags: https://www.lesswrong.com/tag/ai
PS: In the future, I’d probably make posts like this in the Open Thread.
By “making an AI that builds utopia and stuff” I mean an AI that, rather than simply obeying the intent of its prompters, goes and actively improves the world in the optimal way: an AI which has fully worked out Fun Theory and goes around filling the universe with pleasure and beauty and freedom and love and complexity in such a way that no other way would be more Fun.
That would be described well by the CEV link above.