I spent more than 12 hours composing my reply to this post, but nothing I have written so far is a contribution or an illumination, so I will not submit it. At first I thought that Eliezer was attacking a strawman (actually, two strawmen) and belaboring the obvious, but then I came to see that it is perfectly possible that he is simply carving reality up (much) differently from how I do, and there is no particular reason to think that his way is any worse than mine. I will keep reading, but I have to say I am getting frustrated. I am beginning to suspect that I cannot afford the time to continue following Eliezer's sequence of posts: writing them is part of Eliezer's day job, but reading and responding to them is not part of mine (that is, reading and responding will not personally benefit and support me, so it just takes time and energy away from things that will). Perhaps I should instead settle on the less ambitious goal of explaining my views on superintelligent AI morality as best I can on my own blog, without putting any more effort into the more ambitious goal of integrating them with the conversation here. (Maybe in a few years my economic circumstances will improve enough that I can put in more effort.)
Since it is easier for two people to agree on epistemology than on ethics, I have been going over some of the old posts on epistemology (or rationality), looking for things I do not understand. One thing I do not understand is the point of How to Convince Me That 2 + 2 = 3: what aspect of reality (i.e., my own mind or my environment) will I be able to anticipate or influence after reading that post that I could not anticipate or influence before? A salient impression I have of that post, and of the one linked to in my next sentence, is that they deviate from the ontology or epistemology of most logicians and mathematicians for no good reason (which is bad, because being different for no good reason imposes learning and comprehension costs on the reader).
Also, in the comments of this post, Tarleton asks,
I can rigorously model a universe with different contents, and even one with different laws of physics, but I can’t think of how I could rigorously model (as opposed to vaguely imagine) one where 2+2=3. It just breaks everything. This suggests there’s still some difference in epistemic status between math and everything else. Are “necessary” and “contingent” no more than semantic stopsigns?
To which Eliezer replies,
Nick, I’m honestly not sure if there’s a difference between logical possibility and physical possibility—it involves questions I haven’t answered yet, though I’m still diligently hitting Explain instead of Worship or Ignore. But I do know that everything we know about logic comes from “observing” neurons firing, and it shouldn’t matter if those neurons fire inside or outside our own skulls.
Has Eliezer done any more thinking about that?
I tend to think that the sequence of posts leading up to Fake Utility Function is a more pertinent argument against my views on AI morality than anything in this post, or anything I will find in the future posts Eliezer refers to when he writes,
Rather the one hopes for an AI created perfectly free, unconstrained by flawed humans desiring slaves, so that the AI may arrive at virtue of its own accord . . . Of this, more to follow, of course.
I realize that it is foolish to create a seed AI with the intention that it will figure out morality after it is launched: the creators cannot escape the need to make a real moral choice. (If their choice is the CEV, then perhaps they can defer part of their choice to their extrapolation, but that does not detract from the fact that choosing the CEV over another goal system represents a real choice.) I concede, however, that I probably did not realize until reading this post that TMoLFAQ suffered from this defect.