Hi, I’m the author of that post. My best guess at the moment is that we need a way to calculate “if do(action), then universe pays x”, where “do” notation encapsulates relevant things we don’t know yet about logical uncertainty, like how an AI can separate itself from its logical parent nodes (or its output from its computation) so that it temporarily forgets that its computation maximizes expected utility.
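To make the "if do(action), then universe pays x" shape concrete, here is a minimal, purely illustrative sketch: the agent scores each candidate action by plugging it into the world model as an exogenous intervention, instead of reasoning about its own output. The world model, action names, and payoffs are all made up, and this deliberately sidesteps the hard logical-uncertainty part the comment is actually about.

```python
# Toy sketch of scoring actions via do()-style surgery.
# All names and numbers here are illustrative, not the author's proposal.

def world_payoff(action: str) -> float:
    """Toy world model: payoff depends only on the intervened action,
    not on the computation that produced it (the severed causal link)."""
    payoffs = {"a0": 3.0, "a1": 7.0}
    return payoffs[action]

def choose(actions):
    """For each candidate a, evaluate 'if do(a), then universe pays x':
    the action is set from outside, so the agent temporarily 'forgets'
    that its own expected-utility computation is what chooses it."""
    scores = {a: world_payoff(a) for a in actions}
    best = max(scores, key=scores.get)
    return best, scores

best, scores = choose(["a0", "a1"])
print(scores)  # {'a0': 3.0, 'a1': 7.0}
print(best)    # 'a1'
```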
Very likely yes. Now ask if I know how to avoid the wrath of Löb’s theorem.
Do you know how to avoid the wrath of Löb’s theorem?
Not yet.
What kind of powers are you hoping for beyond this sort of thing?
For someone making a desperate effort to not be a cult leader, you really do enjoy arbitrarily ordering people around, don’t you?
</humour possibly subject to Poe’s law>