Given an aligned AGI, to what extent would people be okay with letting it modify us? Examples of such modifications include (feel free to add to the list):
Curing aging/illnesses
Significantly altering our biological form
Converting us to digital life forms
Reducing/Removing the capacity to suffer
Giving everyone instant jhanas/stream entry/etc.
Altering our desires to make them easier to satisfy
Increasing our intelligence (although this might be an alignment risk?)
Decreasing our intelligence
Refactoring our brains entirely
What exact parts of being “human” do we want to preserve?
These are interesting questions that modern philosophers have been pondering. Stampy has an answer on forcing people to change faster than they would like, and we are working on adding more answers that attempt to guess what an (aligned) superintelligence might do.