It is becoming increasingly clear that OpenAI is not appropriately prioritizing safety over advancing capabilities research.
This was the default outcome.
Without repercussions for terrible decisions, decision makers have no skin in the game.
It’s an article of faith for some people that skin in the game makes a difference, but I’ve never seen why.
I mean, many of the “decision makers” on these particular issues already believe that their actual, personal, biological skins are at stake, along with those of everybody else they know. And yet...
Anyone and everyone involved in Open Phil’s 2017 recommendation of a $30 million grant to OpenAI shouldn’t be allowed anywhere near AI safety decision-making in the future.
Thinking “seven years from now, a significant number of independent players in a relatively large and diverse field might somehow band together to exclude me” seems very distant from the way I’ve seen actual humans make decisions.
Perhaps, but “seven years from now my reputation in my industry will drop markedly on the basis of this decision” seems to me like a normal human thing that happens all the time.