I agree with you that unbounded utility functions are awful, but Eliezer made a more nuanced point in his post on scope insensitivity than you give him credit for. Suppose there are about 100 billion birds, and every year, about 10 million birds drown. Unless your utility function is very chaotic, it will be locally close to linear, so the difference in utility between 9,800,000 birds drowning this year and 10,000,000 birds drowning this year will be much larger than the difference in utility between 9,998,000 birds drowning this year and 10,000,000 birds drowning this year. Furthermore, even if you did have some weird threshold in your utility function (say, you care a lot about whether fewer than 9,999,000 birds drown this year, but not much about how far from that threshold you end up), you don’t know exactly how many birds will drown this year, so saving a much larger number of birds still gives you a much higher chance of crossing your threshold. Thus being willing to spend about the same amount of resources to save 2,000 birds as to save 200,000 birds doesn’t make any sense. None of this relies on your utility function being globally linear with respect to the number of surviving birds.
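To make the threshold point concrete, here is a toy calculation with numbers I made up for illustration (they are not from Eliezer’s post): even if utility were a pure step function at 9,999,000 drownings, uncertainty about the baseline drowning count means the chance of ending up below the threshold grows roughly in proportion to the number of birds saved.

```python
# A minimal sketch of the threshold argument, with made-up numbers.
# Assumptions (mine, purely illustrative): baseline drownings are normally
# distributed around 10,000,000 with a standard deviation of 300,000, and
# utility is a pure step function: 1 if fewer than 9,999,000 birds drown
# this year, 0 otherwise.
from statistics import NormalDist

baseline = NormalDist(mu=10_000_000, sigma=300_000)
threshold = 9_999_000

def p_below_threshold(birds_saved: int) -> float:
    # Saving N birds shifts the drowning count down by N, so
    # P(drownings < threshold) = P(baseline < threshold + N).
    return baseline.cdf(threshold + birds_saved)

for saved in (0, 2_000, 200_000):
    gain = p_below_threshold(saved) - p_below_threshold(0)
    print(f"saving {saved:>7,} birds raises P(below threshold) by {gain:.4f}")
```

With these made-up numbers, saving 200,000 birds improves the odds of crossing the threshold by close to 100 times as much as saving 2,000 birds does, which is the near-linear behavior described above.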
Although utility functions can also be used to describe ethical systems, they are primarily designed to model the preferences of individual agents, and I think your comments about moral philosophy are mostly irrelevant to that use.
few people would support conclusions such as “it’s worth spending all our resources to prevent a 0.001% chance that 1e100 human lives will be created and tortured.”
A 0.001% chance of 10^100 humans being created just to be tortured would actually freak me out. Unless you were being literal about “all” of our resources, I think you should use a smaller probability.
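For what it’s worth, here is the arithmetic behind that reaction, assuming for illustration that disutility scales linearly with the number of tortured lives:

```python
# Back-of-the-envelope expected value for the quoted scenario, assuming
# (purely for illustration) that disutility is linear in the number of
# tortured lives.
p = 1e-5          # a 0.001% chance
lives = 1e100     # 10^100 human lives created and tortured
print(f"expected tortured lives: {p * lives:.0e}")  # 1e+95
```

Even at a 0.001% probability, that works out to 10^95 lives in expectation.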