Well, those examples would have a lot of “okay, we can’t calculate utility here, so we’ll use a principle” and far less faith in direct utilitarianism.
With torture vs dust specks, see, it arrives at a counterintuitive conclusion, but it is not proof-grade reasoning by any means. Who knows, maybe the correct algorithm for evaluating torture vs dust specks has to weigh the torture at BusyBeaver(10) and the dust specks at BusyBeaver(9), or something equally outrageously huge (after all, thought, which is what torture screws with, is Turing-complete). 3^^^3 is not a very big number. There are numbers that are big like you wouldn’t believe.
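To make the up-arrow point concrete, here is a minimal sketch of Knuth’s up-arrow notation (the function name and structure are my own illustration, not anything from the original post). 3^^^3 means 3↑↑↑3, which already explodes past anything computable in practice, while BusyBeaver grows faster than any computable function at all, so it eventually dwarfs any up-arrow tower:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a ↑^n b. One arrow (n=1) is plain exponentiation;
    each extra arrow iterates the previous operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# Only tiny inputs are feasible:
print(up_arrow(3, 2, 3))  # 3↑↑3 = 3^(3^3) = 7625597484987
print(up_arrow(2, 3, 3))  # 2↑↑↑3 = 2↑↑4 = 65536
```

3↑↑↑3 itself is 3↑↑(3↑↑3), a power tower of 3s over 7.6 trillion levels tall, so no call like `up_arrow(3, 3, 3)` will ever terminate on real hardware; the point is only that BusyBeaver outgrows even this hierarchy.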
edit: also, I think even vastly superhuman entities wouldn’t be very good at consequence evaluation, especially from an uncertain start state. In any case, some sort of morality oracle would have to be able to, at the very least, take in the full specs of a human brain and then spit out an understanding of how to trade off that individual’s extreme pain against that individual’s dust speck (a task which may well end up requiring ultra-long computations, BusyBeaver(1E10) style. Forget the puny up arrow). That’s an enormously huge problem which the torture-choosers obviously a: haven’t solved and b: didn’t even comprehend would be needed. Which brings us to the final point: the utilitarians are the people who haven’t the slightest clue what it might take to make a utilitarian decision, but are unaware of that deficiency. edit: and also, I would likely take a 1/3^^^3 chance of torture over a dust speck. Why? Because a dust speck may result in an accident leading to decades of torturous existence. The dust speck’s own value is still not comparable; it only bothers me because it creates the risk.
edit: note, the busy beaver reference is just an example. Before you can operate additively on dust specks and pain, and start doing utilitarian math there, you have to at least understand how it is that an algorithm can feel pain, and what pain exactly is, in reductionist terms.
and also, I would likely take a 1/3^^^3 chance of torture over a dust speck. Why? Because a dust speck may result in an accident leading to decades of torturous existence
IIRC, in the original torture vs specks post EY specified that none of the dust specks would have any long-term consequence.
I know. Just wanted to point out where the personal preference (easily demonstrable when people e.g. neglect to take inconvenient safety measures) for a small chance of torture over a definite dust speck comes from.