I have to admit, that was sloppily phrased. However, you do seem to be defining “OK” as equivalent to “actively good,” whereas I’m using something more like “acceptable”.
Well, I’d accept strictly neutral (neither actively evil nor actively good) as OK as well. It seems that your definition of OK includes the possibility of active evil, as long as the amount of active evil is below a certain threshold.
It seems that we’re in agreement here; whether or not something is “OK” is determined by the definitions we assign to “OK”, and not by any part of the model under consideration.
The threshold being whether I can be bothered to stop it. As I said, it was sloppy terminology—I should have said something like “worth less than the effort of telling someone to stop” or some other minuscule cost you would be unwilling to pay. Since any intervention, in real life, has a cost, albeit sometimes a small one, this seems like an important distinction.