This doesn’t solve the problem of aligning the ASI with the goal described above; it only replaces “aligning AGI with human values” with “aligning AGI to run instances of our universe”. Still, this seems to ease the problem by giving it a simpler objective: “predict the next step of the sentient part of the universe, in a loop”. (Finally, I don’t know exactly how, but we might also exploit the fact that the laws of physics are, to my knowledge, constant and unchangeable.)
Yeah, this would be half of my central concern. It just doesn’t seem particularly easier to specify ideas like “run a simulation, but only worry about getting the sentient parts right” than it does to specify “fulfill human values.” And then once we’ve got that far, I do think it’s significantly more valuable to go for human-aligned AI than to roll the dice again.