To see this more clearly, you can replace the question “Is this action good or bad?” with “Would an omniscient, moral person choose to take this action?”, and you can instantly see that the answer can only be “yes” (good) or “no” (bad).
By that definition, almost all actions are bad.
True. I’m not sure why that matters, though. It seems trivially obvious to me that a random action selected out of the set of all possible actions would have an overwhelming probability of being bad. But most agents don’t select actions randomly, so that doesn’t seem to be a problem. After all, the key aspect of intelligence is that it allows you to hit extremely tiny targets in configuration space; the fact that most configurations of particles don’t give you a car doesn’t prevent human engineers from making cars. Why would the fact that most actions are bad prevent you from choosing a good one?
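(A minimal sketch of that “tiny targets in configuration space” point, assuming a toy Python simulation is a fair stand-in for the argument: the 60-bit target, the sample count, and the greedy search below are arbitrary illustrative choices, not anything from the exchange itself. It contrasts how rarely uniform random sampling hits a tiny target with how quickly a directed search reaches the same target.)

```python
import random

# Toy illustration (an assumption of this sketch, not part of the dialogue):
# in a huge space of possible "actions", almost none hit a tiny target,
# yet a simple directed search finds that target easily.
N = 60
TARGET = [1] * N                      # the tiny target: all bits set

def score(bits):
    """Count how many bits already match the target (higher is better)."""
    return sum(a == b for a, b in zip(bits, TARGET))

# 1) Random selection: the chance of hitting the target in one draw is 2**-60,
#    so uniform sampling essentially never finds it.
random_hits = sum(
    score([random.randint(0, 1) for _ in range(N)]) == N
    for _ in range(100_000)
)

# 2) Directed search (greedy bit-flipping): propose single-bit flips and keep
#    only the ones that improve the score; this reaches the target quickly.
state = [random.randint(0, 1) for _ in range(N)]
proposals = 0
while score(state) < N:
    i = random.randrange(N)
    candidate = state[:i] + [1 - state[i]] + state[i + 1:]
    if score(candidate) > score(state):
        state = candidate
    proposals += 1

print(f"random sampling hits in 100,000 draws: {random_hits}")       # almost surely 0
print(f"greedy search reached the target after {proposals} proposals")
```

The same asymmetry is what the car analogy gestures at: the rarity of good configurations only matters to a process that samples blindly, not to one that searches.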
Also, why the heck do you think there exist words for “better” and “worse”?
Those are relative terms, meant to compare one action to another. That doesn’t mean you can’t classify an action as “good” or “bad”; for instance, if I decided to randomly select and kill 10 people today, that would be an unequivocally bad action, even if it would theoretically be “worse” if I decided to kill 11 people instead of 10. The difference between the two is like the difference between asking “Is this number bigger than that number?” and “Is this number positive or negative?”.