"Begin tackling the conceptual challenge of describing a stably self-reproducing decision criterion by inventing a simple formalism and confronting a crisp difficulty"
Why do you think that this is cost-effective relative to other options on the table?
...
BTW, I spent a large fraction of the first few months of 2013 weighing FAI research vs. other options before arriving at MIRI’s 2013 strategy (which focuses heavily on FAI research). So it’s not as though I think FAI research is obviously the superior path, and it’s also not as though we haven’t thought through all these different options, and gotten feedback from dozens of people about those options, and so on.
My comments were addressed at Eliezer’s paper specifically, rather than MIRI’s general strategy, or your own views.
Do you not see that what Luke wrote was a direct response to your question?
There are really two parts to the justification for working on this paper: 1) Direct FAI research is a good thing to do now. 2) This is a good problem to work on within FAI research. Luke's comment gives context explaining why MIRI is focusing on direct FAI research, in support of 1. And it's clear from what you list as other options that you weren't asking about 2.
It sounds like what you want is for this problem to be compared on its own to every other possible intervention. In theory that would be the rational thing to do to ensure you were always doing the most cost-effective work on the margin. But that only makes sense if it’s computationally practical to do that evaluation at every step.
What MIRI has chosen to do instead is to invest some time up front coming up with a strategic plan, and then follow through on that. This seems entirely reasonable to me.
If the probability is too small, then it isn't worth it. The activities that I mention plausibly reduce astronomical waste to a nontrivial degree. Arguing that you can do better than them requires an argument establishing that the expected impact of MIRI's Friendly AI research on AI safety exceeds a nontrivial threshold.
Do you not see that what Luke wrote was a direct response to your question?
Which question?
Luke’s comment gives context explaining why MIRI is focusing on direct FAI research, in support of 1.
Sure, I acknowledge this.
It sounds like what you want is for this problem to be compared on its own to every other possible intervention. In theory that would be the rational thing to do to ensure you were always doing the most cost-effective work on the margin. But that only makes sense if it’s computationally practical to do that evaluation at every step.
I don’t think that it’s computationally intractable to come up with better alternatives. Indeed, I think that there are a number of concrete alternatives that are better.
What MIRI has chosen to do instead is to invest some time up front coming up with a strategic plan, and then follow through on that. This seems entirely reasonable to me.
I wasn’t disputing this. I was questioning the relevance of MIRI’s current research to AI safety, not saying that MIRI’s decision process is unreasonable.
The one I quoted: “Why do you think that … is cost-effective relative to other options on the table?”
Yes, you have a valid question about whether this Löb problem is relevant to AI safety.
What I found frustrating as a reader was that you asked why Eliezer was focusing on this problem as opposed to other options such as spreading rationality, building human capital, etc. Then when Luke responded with an explanation that MIRI had chosen to focus on FAI research, rather than those other types of work, you say, no I’m not asking about MIRI’s strategy or Luke’s views, I’m asking about this paper. But the reason Eliezer is working on this paper is because of MIRI’s strategy!
So that just struck me as sort of rude and/or missing the point of what Luke was trying to tell you. My apologies if I’ve been unnecessarily uncharitable in interpreting your comments.
What I found frustrating as a reader was that you asked why Eliezer was focusing on this problem as opposed to other options such as spreading rationality, building human capital, etc. Then when Luke responded with an explanation that MIRI had chosen to focus on FAI research, rather than those other types of work, you say, no I’m not asking about MIRI’s strategy or Luke’s views, I’m asking about this paper. But the reason Eliezer is working on this paper is because of MIRI’s strategy!
I read Luke's comment differently, based on the preliminary "BTW." My interpretation was that his purpose in making the comment was to give a tangentially related contextual remark rather than to answer my question. (I wasn't at all bothered by this – I'm just explaining why I didn't respond to it as if it were intended to address my question.)
...
Ah, thanks for the clarification.