BTW, I spent a large fraction of the first few months of 2013 weighing FAI research vs. other options before arriving at MIRI’s 2013 strategy (which focuses heavily on FAI research). So it’s not as though I think FAI research is obviously the superior path, and it’s also not as though we haven’t thought through all these different options, and gotten feedback from dozens of people about those options, and so on.
My comments were addressed at Eliezer’s paper specifically, rather than MIRI’s general strategy, or your own views.
Also note that MIRI did, in fact, decide to focus on (1) spreading rationality, and (2) building a community of people who care about rationality, the far future, and x-risk, before turning to FAI research: see (in chronological order) the Singularity Summit, Less Wrong, and CFAR.
Sure – what I’m thinking about is cost-effectiveness at the margin.
Before jumping over to that topic, I wonder: do you now largely accept the case Eliezer made for this latest paper as an important first step on an important sub-problem of the Friendly AI problem? And if not, why not?
Based on Eliezer’s recent comments, my impression is that Eliezer is not making such a case, and is rather making a case for the paper being of sociological/motivational value. Is your understanding different?
Based on Eliezer’s recent comments, my impression is that Eliezer is not making such a case, and is rather making a case for the paper being of sociological/motivational value.
No, that’s not what I’ve been saying at all.
I’m sorry if this seems rude in some sense, but I need to inquire after your domain knowledge at this point. What is your level of mathematical literacy and do you have any previous acquaintance with AI problems? It may be that, if we’re to proceed on this disagreement, MIRI should try to get an eminent authority in the field to briefly confirm basic, widespread, and correct ideas about the relevance of doing math to AI, rather than us trying to convince you of that via object-level arguments that might not be making any sense to you.
By ‘the relevance of math to AI’ I don’t mean mathematical logic, I mean the relevance of trying to reduce an intuitive concept to a crisp form. In this case, like it says in the paper and like it says in the LW post, FOL is being used not because it’s an appropriate representational fit to the environment… though as I write this, I realize that may sound like random jargon on your end… but because FOL has a lot of standard machinery for self-reflection of which we could then take advantage, like the notion of Gödel numbering or ZF proving that every model entails every tautology… which probably doesn’t mean anything to you either. But then I’m not sure how to proceed; if something can’t be settled by object-level arguments then we probably have to find an authority trusted by you, who knows about the (straightforward, common) idea of ‘crispness is relevant to AI’ and can quickly skim the paper and confirm ‘this work crispifies something about self-modification that wasn’t as crisp before’ and testify that to you. This sounds like a fair bit of work, but I expect we’ll be trying to get some large names to skim the paper anyway, albeit possibly not the Early Draft for that.
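(A minimal sketch of the Löbian obstacle being gestured at here, in standard provability-logic notation rather than the paper’s own formalism:
Write $\Box_T\varphi$ for “theory $T$ proves $\varphi$”, expressed inside $T$ via a Gödel numbering of $T$’s own sentences.
Desired reflection schema for a self-trusting agent: $T \vdash \Box_T\varphi \rightarrow \varphi$ for every sentence $\varphi$, i.e. “whatever my successor proves is actually true.”
Löb’s theorem: if $T \vdash \Box_T\varphi \rightarrow \varphi$, then $T \vdash \varphi$.
So a consistent $T$ can endorse such instances only for sentences it already proves; adopting the schema wholesale makes $T$ prove everything. That is the crisp difficulty about self-modification which the paper confronts.)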
I need to inquire after your domain knowledge at this point. What is your level of mathematical literacy and do you have any previous acquaintance with AI problems?
Quick Googling suggests someone named “Jonah Sinick” is a mathematician working in number theory. It appears to be the same person.
I really wish Jonah had mentioned that some number of comments ago; there are a lot of arguments I don’t even try to use unless I know I’m talking to someone mathematically literate.
It’s mentioned explicitly at the beginning of his post Mathematicians and the Prevention of Recessions, strongly implied in The Paucity of Elites Online, and the website listed under his username and karma score is http://www.mathisbeauty.org.
Ok, I look forward to better understanding :-)
What is your level of mathematical literacy and do you have any previous acquaintance with AI problems?
I have a PhD in pure math; I know the basic theory of computation and of computational complexity, but I don’t have deep knowledge of these domains, and I have no acquaintance with AI problems.
It may be that, if we’re to proceed on this disagreement, MIRI should try to get an eminent authority in the field to briefly confirm basic, widespread, and correct ideas about the relevance of doing math to AI, rather than us trying to convince you of that via object-level arguments that might not be making any sense to you.
Yes, this could be what’s most efficient. But my sense is that our disagreement is at a non-technical level rather than at a technical level.
My interpretation of
The paper is meant to be interpreted within an agenda of “Begin tackling the conceptual challenge of describing a stably self-reproducing decision criterion by inventing a simple formalism and confronting a crisp difficulty”; not as “We think this Gödelian difficulty will block AI”, nor “This formalism would be good for an actual AI”, nor “A bounded probabilistic self-modifying agent would be like this, only scaled up and with some probabilistic and bounded parts tacked on”.
was that you were asserting only very weak confidence in the relevance of the paper to AI safety, and that you were saying “Our purpose in writing this was to do something that could conceivably have something to do with AI safety, so that people take notice and start doing more work on AI safety.” Thinking it over, I realize that you might have meant “We believe that this paper is an important first step on a technical level.” Can you clarify here?
If the latter interpretation is right, I’d recur to my question about why the operationalization is a good one, which I feel that you still haven’t addressed, and which I see as crucial.
Why do you think that
Begin tackling the conceptual challenge of describing a stably self-reproducing decision criterion by inventing a simple formalism and confronting a crisp difficulty
is cost-effective relative to other options on the table?
...
BTW, I spent a large fraction of the first few months of 2013 weighing FAI research vs. other options before arriving at MIRI’s 2013 strategy (which focuses heavily on FAI research). So it’s not as though I think FAI research is obviously the superior path, and it’s also not as though we haven’t thought through all these different options, and gotten feedback from dozens of people about those options, and so on.
My comments were addressed at Eliezer’s paper specifically, rather than MIRI’s general strategy, or your own views.
Do you not see that what Luke wrote was a direct response to your question?
There are really two parts to the justification for working on this paper: 1) Direct FAI research is a good thing to do now. 2) This is a good problem to work on within FAI research. Luke’s comment gives context explaining why MIRI is focusing on direct FAI research, in support of 1. And it’s clear from what you list as other options that you weren’t asking about 2.
It sounds like what you want is for this problem to be compared on its own to every other possible intervention. In theory that would be the rational thing to do to ensure you were always doing the most cost-effective work on the margin. But that only makes sense if it’s computationally practical to do that evaluation at every step.
What MIRI has chosen to do instead is to invest some time up front coming up with a strategic plan, and then follow through on that. This seems entirely reasonable to me.
If the probability is too small, then it isn’t worth it. The activities that I mention plausibly reduce astronomical waste to a nontrivial degree. Arguing that you can do better than them requires an argument that establishes the expected impact of MIRI Friendly AI research on AI safety above a nontrivial threshold.
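(One way to make that threshold explicit, as a rough expected-value comparison not spelled out in the thread: let $V$ be the value of averting astronomical waste, $q$ the probability that one of the alternative activities secures it, and $p$ the corresponding probability for MIRI’s Friendly AI research. The alternatives are worth roughly $qV$ in expectation, so the research is the better marginal use of resources only if $pV \gtrsim qV$, i.e. only if $p$ is not far below $q$; hence the demand for an argument that $p$ clears a nontrivial threshold.)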
Do you not see that what Luke wrote was a direct response to your question?
Which question?
Luke’s comment gives context explaining why MIRI is focusing on direct FAI research, in support of 1.
Sure, I acknowledge this.
It sounds like what you want is for this problem to be compared on its own to every other possible intervention. In theory that would be the rational thing to do to ensure you were always doing the most cost-effective work on the margin. But that only makes sense if it’s computationally practical to do that evaluation at every step.
I don’t think that it’s computationally intractable to come up with better alternatives. Indeed, I think that there are a number of concrete alternatives that are better.
What MIRI has chosen to do instead is to invest some time up front coming up with a strategic plan, and then follow through on that. This seems entirely reasonable to me.
I wasn’t disputing this. I was questioning the relevance of MIRI’s current research to AI safety, not saying that MIRI’s decision process is unreasonable.
The one I quoted: “Why do you think that … is cost-effective relative to other options on the table?”
Yes, you have a valid question about whether this Löb problem is relevant to AI safety.
What I found frustrating as a reader was that you asked why Eliezer was focusing on this problem as opposed to other options such as spreading rationality, building human capital, etc. Then, when Luke responded with an explanation that MIRI had chosen to focus on FAI research rather than those other types of work, you said, in effect, “no, I’m not asking about MIRI’s strategy or Luke’s views, I’m asking about this paper.” But the reason Eliezer is working on this paper is because of MIRI’s strategy!
So that just struck me as sort of rude and/or missing the point of what Luke was trying to tell you. My apologies if I’ve been unnecessarily uncharitable in interpreting your comments.
What I found frustrating as a reader was that you asked why Eliezer was focusing on this problem as opposed to other options such as spreading rationality, building human capital, etc. Then, when Luke responded with an explanation that MIRI had chosen to focus on FAI research rather than those other types of work, you said, in effect, “no, I’m not asking about MIRI’s strategy or Luke’s views, I’m asking about this paper.” But the reason Eliezer is working on this paper is because of MIRI’s strategy!
I read Luke’s comment differently, based on the preliminary “BTW.” My interpretation was that his purpose in making the comment was to give a tangentially related contextual remark rather than to answer my question. (I wasn’t at all bothered by this – I’m just explaining why I didn’t respond to it as if it were intended to address my question.)
Ah, thanks for the clarification.
The way I’m using these words, my “this latest paper as an important first step on an important sub-problem of the Friendly AI problem” is equivalent to Eliezer’s “begin tackling the conceptual challenge of describing a stably self-reproducing decision criterion by inventing a simple formalism and confronting a crisp difficulty.”
Ok. I disagree that the paper is an important first step.
Because Eliezer is making an appeal based on psychological and sociological considerations, spelling out my reasoning requires discussing what sorts of efforts are likely to impact the scientific community, and whether one can expect such research to occur by default. That in turn requires discussing psychology, sociology, and economics, partly as they bear on whether the world’s elites will navigate the creation of AI just fine.
I’ve described a little bit of my reasoning, and will be elaborating on it in detail in future posts.
I look forward to it! Our models of how the scientific community works may be substantially different. To take just one particularly relevant example, consider what the field of machine ethics would look like without the Yudkowskian line.
I agree that Eliezer has substantially altered the field of machine ethics. My view here is very much contingent on the belief that elites will navigate the creation of AI just fine, which, even if true, is highly nonobvious.