There’s always a nonzero chance that any action will cause an infinitely bad outcome. Also an infinitely good one.
Then how can you put error bounds on your estimate of your utility function?
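As a rough sketch of why that breaks the arithmetic (the symbols ε and the finite outcomes p_i, u_i are just my own illustrative notation, not anything from the original argument): once outcomes of unbounded magnitude carry any nonzero probability, the expectation itself stops being well defined, so there is nothing left to put an error bound around.

% Toy expected-utility calculation where two tail outcomes are
% assigned utilities of +infinity and -infinity with probabilities
% epsilon_+ and epsilon_- (both assumed nonzero).
\[
  \mathbb{E}[U(a)]
  = \epsilon_{+}\cdot(+\infty) \;+\; \epsilon_{-}\cdot(-\infty)
  \;+\; \sum_{i} p_i\, u_i
\]
% The first two terms form the indeterminate expression
% (+infinity) + (-infinity), so E[U(a)] is undefined and no
% confidence interval on an estimate of it is meaningful.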
If you say “I want to do the bestest for the mostest, so that’s what I’ll try to do,” then that’s a fine goal. But when you say “The reason I killed 500 million people was that, according to my calculations, it would do more good than harm, but I have absolutely no way to tell how correct my calculations are,” then maybe something is wrong?