I am trying to hint at the possibility that our methods, while mathematically justified, might lead to unexpected side effects when applied by computationally bounded agents under extreme circumstances, as some of our thought experiments indicate.
...if you already know that the ‘rational thing’ is a mistake then it isn’t the rational thing.
Our methods are the best we have, and they work perfectly well on most problems we encounter. I am saying that we should discount some of the associated utility implications when we encounter edge cases. Ignoring the implications would be irrational, but taking them at face value wouldn't be wise either.