To briefly hop in and say something that may be useful: I had a reaction pretty similar to Eliezer’s comment, and I don’t see continuity, or “things will be weird before getting extremely weird”, as a crux. (I don’t know why you think he does, and I don’t know what he thinks, but I would guess he doesn’t think it’s a crux either.)
I’ve been part of, or read, enough debates with Eliezer to have some guesses about how the argument would go, so I made the move of skipping several steps of double-crux to the area where I suspect the actual cruxes lie.
I think exploring the whole debate-tree or argument map would be quite long, so I’ll just try to gesture at how some of these things are connected, in my map.
- pivotal acts vs. pivotal processes
-- my take is that people’s stance on the feasibility of pivotal acts vs. processes partially depends on continuity assumptions. What do you believe about pivotal acts?
- assuming continuity, do you expect existing non-human agents to move important parts of their cognition to AI substrates?
-- if yes, do you expect large-scale regulations around that?
--- if yes, will it also be partially automated?
- different route: assuming continuity, do you expect a lot of alignment work to be done partially by AI systems, inside places like OpenAI?
-- if, at the same time, this is a huge topic for society as a whole, academia, and politics, would you expect the rest of the world not to try to influence this?
- different route: assuming continuity, do you expect a lot of “how different entities in the world coordinate” to be done partially by AI systems?
-- if yes, do you assume the technical features of the system matter? E.g., multi-agent deliberation dynamics?
- assuming the world notices AI safety as a problem (it has, much more so since this post was written)
-- do you expect a large amount of academia’s and industry’s attention and resources to be spent on AI alignment?
--- would you expect this to be somehow related to the technical problems and how we understand them?
--- e.g., do you think it makes no difference to the technical problem whether 300 or 30k people work on it?
---- if it makes a difference, does it matter how the attention is allocated?
Not sure if the double-crux between us would rest on the same cruxes, but I’m happy to try :)