I don’t think it does go both ways—there’s a very real asymmetry here. The AGIs we’re worried about will have PLENTY of human examples and training data, while humans have very little experience with AI.

That’s because we haven’t been trying to create safely distinct virtual environments. I don’t know how hard they are to make, but it seems like at least a scalable use of funding.