Go meta. If that doesn’t work, go meta. If it does work, go meta. (This is especially useful for ethics but applies everywhere.)
Ah, the LW approach. I would argue exactly the opposite: look for examples of successful decision heuristics and emulate those. Check consequentialism only when your rules of thumb disagree.
A more nuanced view of going meta might be the Hansonian method of collecting a large number of puzzles and going meta only to find explanations that leave the fewest mysteries and make the greatest number of accurate predictions. The exhortation to wait until you have a large collection of mysteries that may share common threads seems essential to the way he thinks.
This depends largely on how many cycles you have to burn before you have to make a “moral decision”. If you are in a dark alleyway and someone is walking towards you brandishing a knife, then it probably isn’t a good time to “go meta” (unless climbing up the fire escape is “going meta”).
Personally, my “self” would not be called upon to solve that decision problem; the decision would be made by only semi-self-like cognitive processes. There may be better examples.
Don’t get so caught up going meta that you lose sight of the object level.