I think the main reason is simple: it’s hard to build a transparent, reliable agent without a decision theory. And since we’re talking about a superintelligent agent, you don’t want to get this wrong. CDT and EDT are both known to fail on certain problems (CDT two-boxes in Newcomb’s problem, and EDT falters on the smoking lesion problem), so it would be very helpful to find a “correct” decision theory. You might be able to get around this by letting an AI self-improve, but it would be nice to have one less thing to worry about, especially because how the AI improves is itself a decision.
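To make the CDT failure concrete, here’s a minimal sketch of Newcomb’s problem under the usual illustrative assumptions (the $1,000 / $1,000,000 payoffs and the 0.99 predictor accuracy are the standard textbook numbers, not anything claimed above): EDT conditions the box contents on the choice and so one-boxes, while CDT treats the contents as causally fixed and two-boxes no matter what, walking away with far less in expectation.

```python
# Newcomb's problem, standard setup: an almost-perfect predictor fills an
# opaque box with $1,000,000 only if it predicts you will take just that box;
# a transparent box always contains $1,000.

ACCURACY = 0.99  # assumed predictor accuracy (illustrative)

def edt_value(action: str) -> float:
    """EDT: condition the opaque box's contents on the action taken."""
    if action == "one-box":
        # The predictor very likely foresaw one-boxing and filled the box.
        return ACCURACY * 1_000_000
    # The predictor very likely foresaw two-boxing and left the box empty.
    return 1_000 + (1 - ACCURACY) * 1_000_000

def cdt_value(action: str, p_box_filled: float) -> float:
    """CDT: the contents are causally fixed before the choice is made."""
    base = p_box_filled * 1_000_000
    return base if action == "one-box" else base + 1_000

# CDT two-boxes for every fixed probability that the box is filled...
for p in (0.0, 0.5, 1.0):
    assert cdt_value("two-box", p) > cdt_value("one-box", p)

# ...yet EDT's one-boxer expects far more money than the two-boxer.
print(edt_value("one-box"), edt_value("two-box"))  # 990000.0 vs 11000.0
```

Whichever verdict you think is right, the point is that the two theories give conflicting answers here, which is exactly why locking one of them into a powerful agent is risky.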