cuts off some nuance — I'd call this the projection of the collective intelligence agenda onto the AI safety frame of "eliminate the risk of very bad things happening," which I think is an incomplete way of looking at how to impact the future
in particular, I tend to spend more time thinking about future worlds that are more like the current one: messy and confusing, with very terrible and very good things happening simultaneously. a lot of the impact of collective intelligence tech (for good or ill) will be in determining the parameters of that world