Well, if it turned out that something like “maximize suffering of intelligent agents” were written into the fabric of the universe, I think we’d have to conclude that we were inherently immoral agents.
The same evidence that persuades you that we don’t want to maximize suffering in real life is evidence that no such principle is written into the universe, I guess.
Side note: I’ve never seen anyone try to defend the position that we should be maximizing suffering, whereas I’ve seen all sorts of eloquent and mutually contradictory defenses of more, um, traditional ethical frameworks.