In other words, to maximize the chance for aligned AI, we must first make an aligned society.
“An aligned society” sounds like a worthy goal, but I’m not sure who “we” refers to: which specific people could take which specific actions toward that end.
I think proposals like this would benefit from specifying the minimum viable “we” needed for the proposal to work.