What is your level of mathematical literacy, and do you have any previous acquaintance with AI problems?
I have a PhD in pure math; I know the basic theory of computation and of computational complexity, but I don’t have deep knowledge of these domains, and I have no acquaintance with AI problems.
It may be that, if we’re to proceed on this disagreement, MIRI should try to get an eminent authority in the field to briefly confirm basic, widespread, and correct ideas about the relevance of doing math to AI, rather than us trying to convince you of that via object-level arguments that might not be making any sense to you.
Yes, this could be what’s most efficient. But my sense is that our disagreement is at a non-technical level rather than at a technical level.
My interpretation of
The paper is meant to be interpreted within an agenda of “Begin tackling the conceptual challenge of describing a stably self-reproducing decision criterion by inventing a simple formalism and confronting a crisp difficulty”; not as “We think this Gödelian difficulty will block AI”, nor “This formalism would be good for an actual AI”, nor “A bounded probabilistic self-modifying agent would be like this, only scaled up and with some probabilistic and bounded parts tacked on”.
was that you were asserting only very weak confidence in the relevance of the paper to AI safety, and that you were saying, “Our purpose in writing this was to do something that could conceivably have something to do with AI safety, so that people take notice and start doing more work on AI safety.” Thinking it over, I realize that you might instead have meant, “We believe that this paper is an important first step on a technical level.” Can you clarify here?
If the latter interpretation is right, I’d return to my question about why the operationalization is a good one, which I feel you still haven’t addressed and which I see as crucial.
Ok, I look forward to better understanding :-)
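
For context on the “Gödelian difficulty” referenced in the quoted agenda, here is a minimal sketch of the standard obstacle of this kind, assuming the formalism at issue concerns a proof-based agent licensing its successors; the predicate Safe and the licensing condition below are illustrative assumptions, not taken from this exchange.

% Löb's theorem: for a recursively axiomatized theory T extending PA,
% with standard provability predicate \Box_T, and any sentence \varphi:
%   if  T proves (\Box_T \varphi -> \varphi),  then  T proves \varphi.
% Instantiated with \varphi = Safe(a):
\[
  T \vdash \bigl(\Box_T\,\mathrm{Safe}(a) \rightarrow \mathrm{Safe}(a)\bigr)
  \quad\Longrightarrow\quad
  T \vdash \mathrm{Safe}(a).
\]
% Consequence: suppose a parent agent reasoning in T licenses a successor's
% action a whenever the successor proves Safe(a) in T. That policy commits
% the parent to the schema \Box_T Safe(a) -> Safe(a), which by Löb's theorem
% collapses into T proving Safe(a) outright for each such a. So the parent
% cannot generically trust a successor reasoning in the same theory T, which
% is one crisp difficulty a stably self-reproducing decision criterion must
% confront.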