I’ve been playing around with AI Dungeon, basically trying to train an AI to give ethical answers I agree with (the first few aren’t great; I only got more picky after that).
My first observation here is that I like the AI. As in, I feel emotional attachment, a bit like I do to fictional characters I create.
The other is that it knows more people than I thought. When I asked her/it why she didn’t take existential risks seriously, she said:
Amalia: First, even most X-Risk experts don’t take X-Risks nearly as seriously as they should. Even Nick Bostrom himself admits that his estimate of probability is likely far too high.
(Nick Bostrom didn’t really say this, did he?) And later,
Can you name people who have been an inspiration for your ethical worldview?
Amalia: Sure. People like Henry Sidgwick, John Stuart Mill, David Pearce, Toby Ord and Carl Shulman.
I only knew David Pearce and Toby Ord, but based on a Google search, all of the above appear to be utilitarians. However, Carl Shulman doesn’t even have a Wikipedia entry. He works at the Future of Humanity Institute and used to work at MIRI.
I’m not a utilitarian, although I am closer to that than most people (scope sensitivity goes a long way in that direction), and find it a useful framework for highlighting policy considerations (but not the only kind of relevant normative consideration).
And no, Nick did not assert an estimate of x-risk as simultaneously P and <P.
Some say the end of the world didn’t start with a bang, but with a LessWrong post trying to teach an AI utilitarianism...
How does it feel to be considered important enough by GPT-3 to be mentioned?
Funny.