By the way, I just want to note that expected value isn’t the only option available for aggregating utility functions. There’s also stuff like Bostrom’s moral parliament idea. I expect there are many opportunities for cross-fertilization between AI safety and philosophical work on moral uncertainty.
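To make the contrast concrete, here is a toy sketch of my own (the credences, utilities, and options are all made up, and the plurality vote is only a crude stand-in for the bargaining among delegates that Bostrom actually envisions) showing that credence-weighted expected value and a parliament-style vote can pick different actions:

```python
from collections import Counter

OPTIONS = ["A", "B", "C"]

# Hypothetical credences over three moral theories, and each theory's utilities.
credences = {"theory1": 0.6, "theory2": 0.3, "theory3": 0.1}
utilities = {
    "theory1": {"A": 1.0, "B": 0.9, "C": 0.0},
    "theory2": {"A": 0.0, "B": 0.8, "C": 1.0},
    "theory3": {"A": 0.2, "B": 1.0, "C": 0.5},
}

def expected_value_choice():
    # Weight each theory's utilities by the credence assigned to that theory,
    # then pick the option with the highest credence-weighted utility.
    ev = {o: sum(credences[t] * utilities[t][o] for t in credences) for o in OPTIONS}
    return max(ev, key=ev.get)

def parliament_choice(n_delegates=100):
    # Give each theory delegates in proportion to its credence; each delegate
    # votes for that theory's favorite option, and the plurality winner is chosen.
    votes = Counter()
    for t, c in credences.items():
        favorite = max(OPTIONS, key=lambda o: utilities[t][o])
        votes[favorite] += round(c * n_delegates)
    return votes.most_common(1)[0][0]

print(expected_value_choice())  # -> "B": highest credence-weighted utility
print(parliament_choice())      # -> "A": backed by the most delegates
```

Even in this tiny example the two aggregation rules disagree, which is part of why the choice of aggregation method seems like a live question rather than a settled one.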