Yes. I assume this is why she’s collecting these ideas.
Katja doesn’t speak for all of MIRI when she says above what “MIRI is interested in”.
In general, MIRI isn't in favor of soliciting storytelling about the singularity. It's a waste of time and gives people a false sense that they understand things better than they do, by incorrectly focusing their attention on highly salient but ultimately unlikely scenarios.
OP: >>So MIRI is interested in making a better list of possible concrete routes to AI taking over the world. And for this, we ask your assistance.
Louie: >>Katja doesn’t speak for all of MIRI when she says above what “MIRI is interested in”.
These two statements contradict each other. If it's true that Katja doesn't speak for all of MIRI on this issue, perhaps MIRI has a PR problem and needs to give its representatives guidance on how to present public requests on the organization's behalf. When I read the parent post, I concluded that MIRI leadership was on board with this scenario-gathering exercise.
EDIT: I just read your profile and realize you actually represent a portion of MIRI leadership. I recommend that Katja edit the parent post to reflect MIRI's actual position on this request.
Agreed. I am confused about what is going on here w.r.t. what MIRI wants or believes.
Louie, there appears to be a significant divergence between our models of AI’s power curve; my model puts p=.3 on the AI’s intelligence falling somewhere in or below the human range, and p=.6 on that sort of AI having to work on a tight deadline before humans kill it. In that case, improvements on the margin can make a difference. It’s not nearly as good as preventing a UFAI from existing or preventing it from getting Internet access, but I believe later defenses can be built with resources that do not funge.
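For concreteness, here is a minimal sketch of the arithmetic those estimates imply, assuming the p=.6 figure is meant as conditional on the p=.3 one (the comment doesn't say so explicitly):

```python
# Sketch of the joint probability implied by the estimates above.
# Assumption: the .6 estimate is conditional on the .3 estimate.

p_human_range_or_below = 0.3  # P(AI's intelligence falls in or below the human range)
p_tight_deadline_given = 0.6  # P(such an AI must act on a tight deadline | human-range AI)

# Regime in which marginal, later-stage defenses are claimed to make a difference.
p_marginal_defenses_matter = p_human_range_or_below * p_tight_deadline_given

print(p_marginal_defenses_matter)  # 0.18
```

On that reading, roughly an 18% chance is being assigned to the regime where later defenses pay off.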