This is why I’m crossing my fingers for a ‘survivable disaster’ - an AI that merely kills a lot of people instead of everyone. Maybe then people would take it seriously.
Spreading awareness of the problem is a difficult and important challenge that ordinary people can actually tackle, and that’s what I want to try.
Maybe some type of oppositional game could help in this regard?
Along the same lines as the AI Box experiment: we have one group playing engineers “trying to build the worst-case AI,” starting right at this moment. Not a hypothetical “worst case,” but one taken from this moment in time, as if you were an engineer trying to facilitate the worst AI possible.
The Worst Casers propose one “step” forward in engineering. Then some sort of Reality Checking team (maybe just a general crowd vote?) tries to disprove the feasibility of the step, given the conditions that exist in the scenario so far. Anyone else can submit a “worse-Worst Case” if it is easier, faster, or larger in magnitude than the standing one.
Over time the goal is to crowdsource the shortest credible path to the worst possible outcome, which, if done very well, might actually reach the realm of colloquial communicability.
I’ve coded editable logic trees like this as web apps before, so if the idea makes sense I could make one public while I work on it.
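For concreteness, here’s a minimal sketch of how such a scenario tree might be modeled, in TypeScript since it’d live in a web app. All names, the 0–10 voting scheme, and the feasibility threshold are my own assumptions, not a spec:

```typescript
// A minimal sketch of the scenario tree (hypothetical names throughout).
// Each node is one proposed engineering "step"; children are competing
// "worse-Worst Case" branches submitted as alternatives.

interface ScenarioStep {
  id: string;
  author: string;
  description: string;        // the proposed worst-case engineering step
  feasibilityVotes: number[]; // Reality Checkers' ratings, e.g. 0-10
  children: ScenarioStep[];   // alternative next steps
}

// A step "stands" only if the crowd rates it feasible on average.
function isStanding(step: ScenarioStep, threshold = 5): boolean {
  if (step.feasibilityVotes.length === 0) return false;
  const avg =
    step.feasibilityVotes.reduce((sum, v) => sum + v, 0) /
    step.feasibilityVotes.length;
  return avg >= threshold;
}

// The "shortest credible path": walk only standing branches and return
// the shallowest chain of steps from the root to a terminal node.
function shortestCrediblePath(root: ScenarioStep): ScenarioStep[] | null {
  if (!isStanding(root)) return null;
  const standingChildren = root.children.filter((c) => isStanding(c));
  if (standingChildren.length === 0) return [root];
  let best: ScenarioStep[] | null = null;
  for (const child of standingChildren) {
    const path = shortestCrediblePath(child);
    if (path && (best === null || path.length < best.length)) {
      best = path;
    }
  }
  return best ? [root, ...best] : [root];
}
```

One design choice worth noting: competing “worse-Worst Cases” just become sibling branches rather than overwriting the standing step, so disproven or outcompeted paths stay visible in the tree instead of being lost.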
Another possibility is to get Steven Spielberg to make a movie, but force him to have Yud as the scriptwriter.
Based on a few of his recent tweets, I’m hoping for a serious way to turn Elon Musk back in the direction he used to be facing and get him to publicly go hard on the importance of the field of alignment. It’d be too much to hope for him to actually fund any researchers, though. Maybe someone else.