Ok. I think that MIRI could communicate more clearly by highlighting this. My previous understanding had been that MIRI staff think that by default, one should expect to need to solve the Löb problem in order to build a Friendly AI. Is there anything in the public domain that would have suggested otherwise to me? If not, I’d suggest writing this up and highlighting it.
AFAIK, the position is still “need to ‘solve’ Löb to get FAI”, where ‘solve’ means find a way to build something that doesn’t have that problem, given that all the obvious formalisms do have such problems. Did EY suggest otherwise?
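(Aside, for readers skimming the thread: the “Löb problem” above is the Löbian obstacle discussed in the Tiling Agents write-up linked below. A minimal sketch, on the standard modal reading where $\Box$ abbreviates the provability predicate of a consistent theory $T$ such as PA: Löb’s theorem says that if $T \vdash \Box P \rightarrow P$, then $T \vdash P$; internalized, $T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P$. So $T$ can only endorse “if I prove $P$, then $P$” for statements it already proves outright. That is why the obvious formalisms run into trouble: an agent that acts only on $T$-proofs of safety cannot in general prove that a successor reasoning in $T$, or anything stronger, will act safely, since that would require exactly this kind of blanket self-trust.)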
See my response to EY here.
By default, if you can build a Friendly AI you can solve the Löb problem. That working on the Löb problem gets you closer to being able to build FAI is neither obvious nor certain, but everything has to start somewhere...
EDIT: Moved the rest of this reply to a new top-level comment because it seemed important and I didn’t want it buried.
http://lesswrong.com/lw/hmt/tiling_agents_for_selfmodifying_ai_opfai_2/#943i