Was Space Hitler awesome? Yes. Was Space Hitler good? No. If you say “morality is what is awesome,” then you are either explicitly signing on to a morality in which the thing to be maximized is the glorious actions of supermen, not the petty happiness of the masses, or you are misusing the word “awesome.”
This doesn’t seem to pose any kind of contradiction or problem for the “Morality is Awesome” statement, though I agree with you about the rest of your comment.
Is Space Hitler awesome? Yes. Is saving everyone from Space Hitler such that no harm is done to anyone even more awesome? Hell yes.
Remember, we’re dealing with a potentially infinite search space of yet-unknown properties, with a superintelligence attempting to maximize total awesomeness within that space. You’re going to find lots of Ninja-Robot-Pirate-BountyHunter-Jedi-Superheroes fighting off the hordes of Evil-Nazi-Mutant-Zombie-Alien-Viking-Spider-Henchmen, and winning.
And what’s more awesome than a Ninja-Robot-Pirate-BountyHunter-Jedi-Superhero? Being one. And what’s more awesome than being a Ninja-Robot-Pirate-BountyHunter-Jedi-Superhero? Being a billion of them!
Suppose a disaster could be prevented by foresight, or narrowly averted by heroic action. Which one is more awesome? Which one is better?
TVTropes link: Really?
Preventing disaster by foresight is more likely to work than narrowly averting it by heroic action, so when you decide to take one approach over the other, the awesomeness of foresight working gets multiplied by a larger probability than the awesomeness of heroic action working. That advantage of the more reliable action belongs in decision theory, not in your utility function. Your utility function just says whether one approach is sufficiently more awesome than the other to overcome its decision-theoretic disadvantage, and that depends on the probabilities and awesomeness in the specific situation.
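To make that comparison concrete, here is a minimal sketch of the probability-weighted calculation the paragraph above describes; the symbols and numbers are illustrative assumptions, not anything from the original thread:

\[
\mathbb{E}[A_{\mathrm{foresight}}] = p_f \cdot A_f,
\qquad
\mathbb{E}[A_{\mathrm{heroic}}] = p_h \cdot A_h
\]

Heroic action wins the decision only when \(p_h A_h > p_f A_f\), i.e. when \(A_h / A_f > p_f / p_h\). If, say, foresight works with probability 0.9 and heroic aversion with probability 0.3, then heroic aversion must be more than 0.9 / 0.3 = 3 times as awesome before your utility function’s verdict overcomes its decision-theoretic disadvantage.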
My numerous words are defeated by your single link. This analogy is irrelevant, but illustrates your point well.
Anyway, that’s pretty much all I had to say. The initial argument I was responding to sounded weak, but your arguments now seem much stronger. They do, after all, single-handedly defeat an army of Ninja-Robot-… of those.