Based on Eliezer’s recent comments, my impression is that Eliezer is not making such a case, and is rather making a case for the paper being of sociological/motivational value.
No, that’s not what I’ve been saying at all.
I’m sorry if this seems rude in some sense, but I need to inquire after your domain knowledge at this point. What is your level of mathematical literacy and do you have any previous acquaintance with AI problems? It may be that, if we’re to proceed on this disagreement, MIRI should try to get an eminent authority in the field to briefly confirm basic, widespread, and correct ideas about the relevance of doing math to AI, rather than us trying to convince you of that via object-level arguments that might not be making any sense to you.
By ‘the relevance of math to AI’ I don’t mean mathematical logic, I mean the relevance of trying to reduce an intuitive concept to a crisp form. In this case, like it says in the paper and like it says in the LW post, FOL is being used not because it’s an appropriate representational fit to the environment… though as I write this, I realize that may sound like random jargon on your end… but because FOL has a lot of standard machinery for self-reflection of which we could then take advantage, like the notion of Gödel numbering or ZF proving that every model entails every tautology… which probably doesn’t mean anything to you either. But then I’m not sure how to proceed; if something can’t be settled by object-level arguments then we probably have to find an authority trusted by you, who knows about the (straightforward, common) idea of ‘crispness is relevant to AI’ and can quickly skim the paper and confirm ‘this work crispifies something about self-modification that wasn’t as crisp before’ and testify that to you. This sounds like a fair bit of work, but I expect we’ll be trying to get some large names to skim the paper anyway, albeit possibly not the Early Draft for that.
I need to inquire after your domain knowledge at this point. What is your level of mathematical literacy and do you have any previous acquaintance with AI problems?
Quick Googling suggests that someone named “Jonah Sinick” is a mathematician working in number theory. It appears to be the same person.
I really wish Jonah had mentioned that some number of comments ago; there are a lot of arguments I don’t even try to use unless I know I’m talking to mathematical literati.
What is your level of mathematical literacy and do you have any previous acquaintance with AI problems?
I have a PhD in pure math, I know the basic theory of computation and of computational complexity, but I don’t have deep knowledge of these domains, and I have no acquaintance with AI problems.
It may be that, if we’re to proceed on this disagreement, MIRI should try to get an eminent authority in the field to briefly confirm basic, widespread, and correct ideas about the relevance of doing math to AI, rather than us trying to convince you of that via object-level arguments that might not be making any sense to you.
Yes, this could be what’s most efficient. But my sense is that our disagreement is at a non-technical level rather than at a technical level.
My interpretation of
The paper is meant to be interpreted within an agenda of “Begin tackling the conceptual challenge of describing a stably self-reproducing decision criterion by inventing a simple formalism and confronting a crisp difficulty”; not as “We think this Godelian difficulty will block AI”, nor “This formalism would be good for an actual AI”, nor “A bounded probabilistic self-modifying agent would be like this, only scaled up and with some probabilistic and bounded parts tacked on”.
was that you were asserting only very weak confidence in the relevance of the paper to AI safety, and that you were saying “Our purpose in writing this was to do something that could conceivably have something to do with AI safety, so that people take notice and start doing more work on AI safety.” Thinking it over, I realize that you might instead have meant “We believe that this paper is an important first step on a technical level.” Can you clarify here?
If the latter interpretation is right, I’d return to my question about why the operationalization is a good one, which I feel that you still haven’t addressed, and which I see as crucial.
It’s mentioned explicitly at the beginning of his post Mathematicians and the Prevention of Recessions and strongly implied in The Paucity of Elites Online; also, the website listed under his username and karma score is http://www.mathisbeauty.org.
Ok, I look forward to better understanding :-)