If they actually simulate humans, it seems like legacy humans might get outcompeted by simulated humans. I'm not sure that's worse than what humans expected without technological transcendence (ordinary death, being replaced by children and, eventually, by conquering civilizations, etc.). This assumes the LLMs that simulate humans well are moral patients (see anti-zombie arguments).
It's still not as good as could be achieved in principle. It seems like having the equivalent of "legal principles" used as training feedback could help, along with direct human feedback. Maybe the system gets subverted eventually, but the problem of humans being replaced by em-like AIs is mostly a short-term one of current humans being unhappy about that.
I did mention LLMs as myopic agents.