Deontological rules are based almost directly on empirical experience, while utilitarian claims rest on very complex chains of argument.
“If you’ve truly understood the reason and the rhythm behind ethics, then one major sign is that, augmented by this newfound knowledge, you don’t do those things that previously seemed like ethical transgressions. Only now you know why.”
In other words, your theory should describe the facts well. Suppose we know that 90% of people who decided to do X with the best intentions ended up as villains. If, in that situation, it still seems to you that if YOU did X without moral preparation you definitely would not go over to the dark side of the force, that means your theory does not explain the data well. But if your everyday consequentialist morality produces results that match the consequences of the deontological rules 90% of the time, then I solemnly declare your consequentialist morality practically perfect: the remaining 10% of discrepancies are errors of the deontology, and I advise you to trust your consequentialist morality.
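To make the arithmetic concrete, here is a minimal sketch of the update you would have to perform before concluding that you are the exception. Everything besides the 90% base rate is an illustrative assumption, not data from the essay:

```python
# Hedged sketch: all numbers except the 90% base rate are illustrative assumptions.
# Question: given that 90% of well-intentioned people who did X became villains,
# how much does "but I feel like the exception" actually tell you?

p_villain_given_x = 0.90     # observed base rate among people who did X
p_feel_given_villain = 0.95  # assumption: future villains also felt exceptional beforehand
p_feel_given_ok = 0.95       # assumption: so did the 10% who stayed clean

# Bayes: P(villain | did X, feels exceptional)
numerator = p_feel_given_villain * p_villain_given_x
denominator = numerator + p_feel_given_ok * (1 - p_villain_given_x)
posterior = numerator / denominator

print(f"P(villain | did X and felt exceptional) = {posterior:.2f}")
# ~0.90: the feeling carries almost no evidential weight, because it is
# equally common on both branches (the likelihood ratio is 1).
```

The numbers make the point of the paragraph: if feeling exceptional is just as common among the people who fell as among the people who didn't, it cannot move you off the 90% base rate.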
“They don’t mention the problem of running on corrupted hardware. They don’t mention the idea that lies have to be recursively protected from all the truths and all the truthfinding techniques that threaten them. They don’t mention that honest ways have a simplicity that dishonest ways often lack. They don’t talk about black-swan bets. They don’t talk about the terrible nakedness of discarding the last defense you have against yourself, and trying to survive on raw calculation.”
In this paragraph, Eliezer is in effect saying: "the world is more complicated than it seems, and we do not fully understand it; therefore complex theories work worse than they appear to, so trust the deontological rules (the simple theories)."
And one more thing: when you do break a deontological rule, it seems wise to record it as: "yes, I broke a deontological rule. I don't see exactly where I went wrong, but either way this is Bayesian evidence that I was wrong." That evidence may be decisive and change the result of the reflection, or it may not.
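A minimal sketch of that update, with the prior and both likelihoods assumed purely for illustration:

```python
# Hedged sketch: the prior and likelihoods below are illustrative assumptions.
# Observation E: "I just broke a deontological rule."
# Hypothesis H: "my consequentialist reflection has gone wrong."

prior_wrong = 0.30           # assumption: prior that the reflection is mistaken
p_break_given_wrong = 0.80   # assumption: mistaken reflection usually ends in rule-breaking
p_break_given_right = 0.10   # assumption: sound reflection rarely requires it

bayes_factor = p_break_given_wrong / p_break_given_right  # 8:1 toward "I was wrong"

prior_odds = prior_wrong / (1 - prior_wrong)
posterior_odds = prior_odds * bayes_factor
posterior_wrong = posterior_odds / (1 + posterior_odds)

print(f"Bayes factor: {bayes_factor:.0f}:1")
print(f"P(I was wrong | I broke the rule) = {posterior_wrong:.2f}")
# With these numbers the update is large (~0.30 -> ~0.77) and may well be
# decisive; with a weaker likelihood ratio it may not be -- exactly the
# "may be decisive, or may not" point above.
```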