until the humans start bothering to pay much attention to their infinitely many different contexts and correlated algorithms, each of which might split differently and mess up the calculations.
This reminds me of the experience of screwing things up in ordinary situations: I flinch away from the thought of doing it wrong, but that flinch itself allocates attention to the wrong procedure, which makes me more likely to fail. “Losing the faith.” In the context of your quote, this might mean paying attention to algorithms that aren’t correlated with you, which gives them more control over your actions.
What if “your” decisions are relevant (“significant”) to some “other” agent that doesn’t share “your” “actual” “values”?
Another alternative is to treat relative significance itself as value, so that doing the right thing would mean increasing relative significance. A few months ago I was daydreaming about how you could think of reinforcement as value, and this seems like a similar line of thought.