It doesn’t make decisions, since the process of selecting a “critical point” is not specified, only some informal heuristics for doing so.
Uh huh. Well, that seems kind of appropriate for a resource-limited agent. The more of the universe you consider when making a decision, the harder the decision becomes, and so the more powerful the agent has to be to make it at all.
Yudkowsky’s idea has agents hunting through all of spacetime for decision processes which are correlated with theirs, which is enormously more expensive, and seems much less likely to lead to any decisions actually being made in real time. The DBDT version of that would be to put the “critical point” at the beginning of time.
However, a means of cutting down the work required to make a decision seems to be an interesting and potentially useful idea to me. If an agent can safely ignore much of the universe when making a decision, it is worth being aware of that, and indeed necessary if we want to build a practical system.
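To make that concrete, here is a minimal sketch of the pruning idea, under my own assumptions rather than anything specified by DBDT: the world model is a causal graph, the “critical point” is a chosen node, and the agent only evaluates the part of the world reachable from that node. The graph, node names, and reachability criterion here are all illustrative.

```python
from collections import deque

def relevant_nodes(graph, critical_point):
    """Collect every node causally downstream of the chosen critical point.

    `graph` maps each node to the nodes it can influence. Everything
    unreachable from the critical point is simply ignored when the
    decision is evaluated, which is where the cost saving comes from.
    """
    seen = {critical_point}
    queue = deque([critical_point])
    while queue:
        node = queue.popleft()
        for successor in graph.get(node, ()):
            if successor not in seen:
                seen.add(successor)
                queue.append(successor)
    return seen

# Toy world model (hypothetical): almost everything is downstream of
# "big_bang", but only a small part is downstream of the critical point.
world = {
    "big_bang": ["galaxy_far_away", "critical_point"],
    "galaxy_far_away": [],
    "critical_point": ["my_action"],
    "my_action": ["payoff"],
}

print(relevant_nodes(world, "critical_point"))  # {'critical_point', 'my_action', 'payoff'}
print(relevant_nodes(world, "big_bang"))        # the whole world
```

The last line illustrates the contrast drawn above: putting the critical point at the beginning of time makes the relevant set the entire world model, which is the expensive regime; a late critical point leaves only a small subgraph to consider.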