The intro to this video about a bridge collapse does a good job of explaining why my hope for AI governance as a long-term solution is low. I do think aiming to hold things back for a few years is a good idea, but sooner or later, things fall through the cracks.
There’s a crack the speaker touches on that I’d like to know more about: none of the people able to understand that the safety reports meant “FIX THIS NOW OR IT WILL FALL DOWN” had the authority to direct the money to fix it. I’m thinking moral mazes rather than cracks.
Applying this to AI safety, do any of the organisations racing towards AGI have anyone authorised to shut a project down on account of a safety concern?
https://youtu.be/4mn0mC0cbi8?si=xZVX-FXn4dqUugpX