The focus on a physical reference point here seems misguided to me. More conceptually well-founded, perhaps, would be a search for a logical reference point: using your existence in some form, at some level of abstraction, and your reasoning about that logical reference point, both as research into and as evidence about attractors in agentspace, via typical acausal means.
Vladimir Nesov’s decision theory mailing list comments on the role of observational uncertainty in ambient-like decision theories seem relevant. Not that I mean to imply he wouldn’t think what I’m saying here is complete nonsense.
In one of my imaginable-ideal-barely-possible worlds, Eliezer’s current choice of “thing to point your seed AI at and say ‘that’s where you’ll find morality content’” was tentatively determined to be what it currently nominally is (instead of tempting alternatives like “the thing that makes you think that your proposed initial dynamic is the best one” or “the thing that causes you to care about doing things like perfecting things like the choice of initial dynamic” or something) only after he did a year straight of meditation on something like the lines of reasoning I suggest above, honed to something like perfection-given-boundedness (e.g. something like the best you could reasonably expect to get at poker given that most of your energy has to go into retaining your top-5 FIDE chess rating while writing a bestselling popular science book).