Maybe you already know this, and also I’m not Eliezer, but my impression is that Eliezer would fall towards the Moloch end here, with an argument like:
- It is theoretically possible to make a pure-consequentialist-optimizing AI
- …Therefore, somebody will do so sooner or later (cf. various references to “stop Facebook AI Research from destroying the world six months later” here)
- …Unless we exit the “semi-anarchic default condition” via a “pivotal act”
- …Or unless a coalition of humans and/or non-pure-consequentialist-optimizing-AIs could defend against pure-consequentialist-optimizing-AIs, but (on Eliezer’s view) that ain’t never gonna happen because pure-consequentialist-optimizing-AIs are super duper powerful.
(For my part, I have a lot of sympathy for this kind of argument, although I’m not quite as confident about the fourth bullet point as Eliezer is, and I also have various other disagreements with his view around the edges (example).)