It goes both ways. We would be truly alien to an AGI trained in a reasonably different virtual environment.
Even if both humans and an AGI start off equally alien to each other, one might be able to understand the other faster. We might reasonably worry that an AGI could understand us, and therefore get inside our OODA loop, well before we could understand it and get inside its OODA loop.
I don’t think it does go both ways; there’s a very real asymmetry here. The AGIs we’re worried about will have PLENTY of human examples and training data, while humans have very little experience with AI.
That’s because we haven’t been trying to create safely different virtual environments. I don’t know how hard they are to make, but it seems like at least a scalable use of funding.