Note: I think there are a bunch of additional reasons for doom, surrounding “civilizational adequacy / organizational competence / societal dynamics”. Eliezer briefly alluded to these, but AFAICT he’s mostly focused on lethality that comes “early”, and then didn’t address them much. My model of Andrew Critch has a bunch of concerns about doom that show up later, because there are a bunch of additional challenges you have to solve if AI doesn’t dramatically win/lose early on (i.e. multi/multi dynamics and how they spiral out of control).
I know a bunch of people whose hope funnels through “We’ll be able to carefully iterate on slightly-smarter-than-human-intelligences, build schemes to play them against each other, leverage them to make some progress on alignment that we can use to build slightly-more-advanced-safer-systems”. (Let’s call this the “Careful Bootstrap plan”)
I do actually feel nonzero optimism about that plan, but when I talk to people who are optimistic about it, I notice a missing mood about the kind of difficulty involved here.
I’ll attempt to write up some concrete things here later, but wanted to note this for now.
I agree with this line of thought regarding iterative development of proto-AGI via careful bootstrapping. Humans will be inadequate for monitoring the progress of its skills. Hopefully, we’ll have a slew of diagnostic, narrowly-scoped neural networks whose sole purpose is to tease out relevant details of the proto-superhuman intellect. What I can’t wrap my head around is whether super- (or sub-) human-level intelligence requires consciousness. If consciousness is required, is the world worse or better for it? Is an agent with the rich experience of fears, hopes, and joys more or less likely to be built? Do reward functions reliably grow into feelings, which lead to emotional experiences? If they do, then perhaps an evolving intelligence wouldn’t always be as alien as we currently imagine it.