I agree, though I haven’t seen many people proposing that. See also So8res’ Decision theory does not imply that we get to have nice things, which comes at the issue from the opposite direction (it opens by addressing people who invalidly assume too much from LDT cooperation).
For our morals, though, I do think there’s an open question of which pieces we’d feel better replacing with the more formal understanding, because there isn’t a sharp distinction between our utility function and our decision theory: some values trump others once we have better tools. That said, I agree that replacing all the altruism components goes many steps further than the best solution in that regard.