If the probability is too small, then it isn't worth it. The activities that I mention plausibly reduce astronomical waste to a nontrivial degree. Arguing that you can do better than them requires an argument establishing that the expected impact of MIRI's Friendly AI research on AI safety exceeds a nontrivial threshold.
Do you not see that what Luke wrote was a direct response to your question?
Which question?
Luke’s comment gives context explaining why MIRI is focusing on direct FAI research, in support of 1.
Sure, I acknowledge this.
It sounds like what you want is for this problem to be compared on its own to every other possible intervention. In theory, that would be the rational thing to do to ensure you were always doing the most cost-effective work on the margin. But it only makes sense if it's computationally practical to carry out that evaluation at every step.
I don’t think that it’s computationally intractable to come up with better alternatives. Indeed, I think that there are a number of concrete alternatives that are better.
What MIRI has chosen to do instead is to invest some time up front coming up with a strategic plan, and then follow through on it. This seems entirely reasonable to me.
I wasn’t disputing this. I was questioning the relevance of MIRI’s current research to AI safety, not saying that MIRI’s decision process is unreasonable.
The one I quoted: “Why do you think that … is cost-effective relative to other options on the table?”
Yes, you have a valid question about whether this Löb problem is relevant to AI safety.
What I found frustrating as a reader was that you asked why Eliezer was focusing on this problem as opposed to other options such as spreading rationality, building human capital, etc. Then, when Luke responded with an explanation that MIRI had chosen to focus on FAI research rather than those other types of work, you said, no, I'm not asking about MIRI's strategy or Luke's views, I'm asking about this paper. But the reason Eliezer is working on this paper is because of MIRI's strategy!
So that just struck me as sort of rude and/or missing the point of what Luke was trying to tell you. My apologies if I’ve been unnecessarily uncharitable in interpreting your comments.
I read Luke's comment differently, based on the preliminary "BTW." My interpretation was that his purpose in making the comment was to give a tangentially related contextual remark rather than to answer my question. (I wasn't at all bothered by this – I'm just explaining why I didn't respond to it as if it were intended to address my question.)
Ah, thanks for the clarification.