This led me to start thinking about whether we want to pursue “the moral theoretical truth,” should such a truth exist, or whether we want to find the most applicable and practical set of rules, such that reasonably instrumentally rational (human) agents could figure out what is best in any given situation.
Both? The latter needs to be judged by how closely it approximates the former. There are lots of moral rules that are easy to implement but not useful, e.g. “don’t do anything ever.” There’s a tradeoff that needs to be navigated between ease of implementation and accuracy of approximation to the Real Thing.
So, figure out the theoretically correct action, and then approximate it to the best of your ability?
If you figured out the theoretically correct action, you wouldn’t need to approximate it. I mean figure out the theoretically correct moral theory, then approximate it to the best of your ability. You’re not approximating the output of an algorithm, you’re approximating an algorithm (e.g. because the correct algorithm requires too much data, or time, or rationality...).
That’s a great way of saying it. Thanks a lot!
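To make the “approximate the algorithm, not its output” point concrete, here is a minimal, purely illustrative sketch in Python. Everything in it (the toy actions, the scoring function, the function names) is my own assumption and not part of the discussion above: a “theoretically correct” procedure that exhaustively evaluates every possible future gets approximated by a cheaper greedy procedure that pursues the same objective with less data and time, rather than by somehow memorizing the exhaustive procedure’s answers.

```python
from itertools import product

# Hypothetical toy setup: three possible actions at each step.
ACTIONS = ["help", "wait", "defer"]


def true_value(history):
    # Hypothetical stand-in for "the theoretically correct moral theory":
    # it scores an entire sequence of actions. In the real case this is
    # the part a bounded agent may never be able to evaluate exactly.
    return sum(4 if action == "help" else 1 for action in history)


def exhaustive_best_action(horizon=8):
    # The "correct algorithm": evaluate every possible future of the given
    # length and pick the first action of the best one. The cost grows as
    # len(ACTIONS) ** horizon, so it quickly becomes infeasible to run.
    best_future = max(product(ACTIONS, repeat=horizon), key=true_value)
    return best_future[0]


def greedy_best_action():
    # An approximation of the *algorithm* itself: same objective function,
    # but it only looks one step ahead. Cheap enough for a bounded agent
    # to actually run, and judged by how closely its choices track the
    # exhaustive version's choices.
    return max(ACTIONS, key=lambda action: true_value([action]))


if __name__ == "__main__":
    # On small problems we can still compare the approximation against the
    # exhaustive answer, which is how the approximation gets evaluated.
    print(exhaustive_best_action(horizon=8), greedy_best_action())
```

In this toy case both procedures happen to agree; the point of the sketch is only that the greedy version approximates the decision procedure (because the full one needs too much time), not that it stores or imitates the full version’s outputs.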