I wouldn’t characterize this as something that MIRI wants.
I guess we should have clarified this in the LW post, but I specifically asked Katja to make this LW post, in preparation for a project proposal blog post to be written later. So, MIRI wants this in the sense that I want it, at least.
Are you associated with MIRI?
Edit: I didn’t read further down, where the answer is made clear. Sorry, ignore this.
Are you saying this is something which MIRI considers actively bad, or are you just pointing out that it is something which is not helpful for MIRI?
While I don’t see the benefit of this exercise, I also don’t see any harm, since for any idea we come up with here, someone else would very likely have come up with it before if it were actionable for humans.
It seemed pretty obvious to me that the point of making such a list was to plan defenses.
Then you should reduce your confidence in what you consider obvious.
It seemed pretty obvious to me that MIRI thinks defenses cannot be made, whether or not such a list exists, and wants easier ways to convince people that defenses cannot be made. Thus the part that said: “We would especially like suggestions which are plausible given technology that normal scientists would expect in the next 15 years. So limited involvement of advanced nanotechnology and quantum computers would be appreciated.”
Yes. I assume this is why she’s collecting these ideas.
Katja doesn’t speak for all of MIRI when she says above what “MIRI is interested in”.
In general, MIRI isn’t in favor of soliciting storytelling about the singularity. It’s a waste of time and, by focusing attention on highly salient but ultimately unlikely scenarios, gives people a false sense that they understand things better than they do.
OP: >>So MIRI is interested in making a better list of possible concrete routes to AI taking over the world. And for this, we ask your assistance.
Louie: >>Katja doesn’t speak for all of MIRI when she says above what “MIRI is interested in”.
These two statements contradict each other. If it’s true that Katja doesn’t speak for all of MIRI on this issue, perhaps MIRI has a PR problem and needs to issue guidance on how representatives of the organization present public requests. When reading the parent post, I concluded that MIRI leadership was on board with this scenario-gathering exercise.
EDIT: Just read your profile and I realize you actually represent a portion of MIRI leadership. Recommend that Katja edit the parent post to reflect MIRI’s actual position on this request.
Agreed. I am confused about what is going on here w.r.t. what MIRI wants or believes.
Louie, there appears to be a significant divergence between our models of AI’s power curve; my model puts p=.3 on the AI’s intelligence falling somewhere in or below the human range, and p=.6 on that sort of AI having to work on a tight deadline before humans kill it. In that case, improvements on the margin can make a difference. It’s not nearly as good as preventing a UFAI from existing or preventing it from getting Internet access, but I believe later defenses can be built with resources that do not funge.
This is quibbling over semantics, but I would count “don’t let the AI get to the point of existing and having an Internet-connected computer” as a valid defense. Additional defenses after that are likely to be underwhelming, but defense-in-depth is certainly desirable.