Some instantiations of the first problem (how to prevent "aligned" AIs from unintentionally corrupting human values?) seem to me to be among the most easily imaginable paths to existential risk, e.g. almost all people spending their lives in addictive VR. I'm not sure it is really neglected, though.