Although it’s possible that I just abused the one minimization-like part you accidentally left in there and there is some relatively simple patch that I’m not seeing.
I meant “resources used” in the sense of “resources directed towards this goal,” rather than “resources drawn from the metropolitan utility company.” If the streetlamps play the stock market and accumulate a bunch of money, spending that money will still decrease their utility, so unless they can spend it in a way that improves the illumination cost-effectively, they won’t.
Now, defining “resources directed towards this goal” in a way that’s machine-understandable is a hard problem. But if we already have an AI that thinks causally, such that it can actually make these plans and enact them, then it seems to me like that problem has already been solved.
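To make the streetlamp example concrete, here is a minimal Python sketch of the kind of resource-penalized utility being described. The function name `penalized_utility`, the penalty rate, and the specific numbers are all hypothetical illustrations, not anything specified in the discussion; the point is only that spending accumulated money is charged against utility unless it buys illumination cost-effectively.

```python
# Toy sketch (all names and numbers hypothetical) of a resource-penalized
# utility: the agent is rewarded for illumination but charged for every unit
# of resources directed towards the goal, not for resources merely possessed.

def penalized_utility(illumination, resources_directed, penalty_rate=1.0):
    """Utility = value of illumination minus a charge on resources
    directed towards the goal."""
    return illumination - penalty_rate * resources_directed

# Two candidate plans for the streetlamp agent:
# (a) spend $100 of stock-market winnings on lamps, gaining 200 units of light
# (b) hoard the money and keep the current 60 units of light
plan_spend = penalized_utility(illumination=200, resources_directed=100)
plan_hoard = penalized_utility(illumination=60, resources_directed=0)

# The agent spends only when the illumination gained exceeds the resource
# charge; here 200 - 100 > 60 - 0, so spending wins.
best = max([("spend", plan_spend), ("hoard", plan_hoard)], key=lambda p: p[1])
print(best)  # ('spend', 100.0)
```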
Hm, all right, fair enough. That actually sounds plausible, assuming we can be sure that the AI appropriately takes account of something vaguely along the lines of “all resources that will be used in relation to this problem”, including, for example, the resources used by a copy of itself that does not care about resources used and obfuscates its activities from the original, which will probably be doable at that point.