I agree that with temporal discounting, my argument may not be valid in all cases. However, depending on the discount rate, increasing computing power/intelligence may still raise the expected value enough to justify that increase for a long time. In the case of the squiggle maximizer, turning the whole visible universe into squiggles beats turning Earth into squiggles by such a huge factor that even a high discount rate would justify postponing actually making any squiggles, at least for a while. So in cases with high discount rates, it largely depends on how big the AI predicts the intelligence gain will be.
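To make that intuition concrete, here is a minimal back-of-the-envelope sketch; the payoff ratio and yearly discount factor are made-up illustrative numbers, not claims about any actual AI. It just shows that with a large enough payoff ratio, even harsh discounting leaves a long break-even horizon for postponement.

```python
# Illustrative only: made-up numbers for the squiggle-maximizer intuition.
# V_now     = value of turning Earth into squiggles immediately (normalized to 1)
# V_wait(T) = gamma**T * payoff_ratio, the discounted value of turning the
#             visible universe into squiggles after T years of self-improvement.
# Waiting beats acting now as long as gamma**T * payoff_ratio > 1.

import math

payoff_ratio = 1e22   # assumed ratio: universe vs. Earth (order-of-magnitude guess)
gamma = 0.5           # assumed harsh discount factor: value halves every year

# Break-even horizon: the longest postponement that still beats acting now.
T_break_even = math.log(payoff_ratio) / -math.log(gamma)
print(f"Waiting pays off for up to ~{T_break_even:.0f} years")  # ~73 years
```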
A different question is whether a discount rate in a value function would be such a good idea from a human perspective. Just imagine the consequences of discounting the values of “happiness” or “freedom”. Climate change is in large part a result of (unconsciously/implicitly) discounting the future IMO.