If the universe computes things that are not computational continuations of the human condition (which might include resolution of our moral quandaries, if that is in the cards), then it is, with respect to optimizing objective g, wasting the perfectly good computational depth already achieved by humanity. So driving computation that is not somehow reflective of where humanity was already going is undesirable; the computational work that is favored is work that makes the most of what humanity was up to anyway.
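To make the "wasting depth" point a bit more concrete, here is an illustrative gloss of my own in terms of conditional logical depth; neither the notation nor this particular formula is part of the definition of objective g above. Write h for the record of what humanity has computed so far and y for a candidate future of the universe. The depth of y that actually builds on humanity's work can be sketched as

depth(y) − depth(y | h),

where depth(y | h) is the logical depth of y when h is supplied as an auxiliary input. A future that ignores humanity has depth(y | h) ≈ depth(y), so this quantity is near zero no matter how deep y is in absolute terms, while a future that is a genuine computational continuation of the human condition inherits the depth already paid for in h.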
To the extent that human moral progress in a complex society is a difficult computational problem (and there is plenty of evidence that it is), it is exactly the sort of thing that objective g would favor.
If humanity's moral progress has a stable conclusion (i.e., humanity at some point halts, or settles into a harmonious infinite loop that no longer increases in depth), then objective g will not help us hit that mark. But in that case it should be computationally feasible to derive a better objective function.
To those who are unsatisfied with objective g as a solution to Problem 2, I pose the problem: is there a way to modify objective g so that it prioritizes morally better futures? If not, I maintain that objective g is still pretty good.