Thanks for clarifying your position.
My understanding based on what you say is that the research in your paper is intended to spearhead a field of research, rather than to create something that will be directly used for friendliness in the first AI. Is this right?
If so, our differences are about the sociology of the scientific, technological and political infrastructure rather than about object level considerations having to do with AI.
Sounds about right. You might mean a different thing by “spearhead a field of research” than I do; my phrasing would’ve been “Start working on the goddamned problem.”
From your other comments I suspect that you have a rather different visualization of the object-level considerations having to do with AI, and that this is relevant to our disagreement.
Ok. I think that MIRI could communicate more clearly by highlighting this. My previous understanding had been that MIRI staff think that by default, one should expect to need to solve the Lob problem in order to build a Friendly AI. Is there anything in the public domain that would have suggested otherwise to me? If not, I’d suggest writing this up and highlighting it.
AFAIK, the position is still “need to ‘solve’ Lob to get FAI”, where ‘solve’ means find a way to build something that doesn’t have that problem, given that all the obvious formalisms do have such problems. Did EY suggest otherwise?
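[For reference, the obstacle under discussion is Löb’s theorem. A minimal statement, assuming the agent reasons in a recursively axiomatized theory $T$ extending Peano Arithmetic with standard provability predicate $\Box_T$:

\[ T \vdash \Box_T \varphi \rightarrow \varphi \quad\Longrightarrow\quad T \vdash \varphi. \]

So a consistent $T$ can prove instances of its own soundness schema $\Box_T \varphi \rightarrow \varphi$ only for sentences $\varphi$ it already proves. Roughly, this is the sense in which the obvious formalisms have the problem: an agent that accepts a successor’s actions only after verifying “if the successor proves the action is safe, then it is safe” is asking $T$ to endorse that schema in general.]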
See my response to EY here: http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/#943i
By default, if you can build a Friendly AI you can solve the Lob problem. That working on the Lob Problem gets you closer to being able to build FAI is neither obvious nor certain, but everything has to start somewhere...
EDIT: Moved the rest of this reply to a new top-level comment because it seemed important and I didn’t want it buried.