Eliezer, I’m curious about your reaction to Nick’s statement in the paper about infinities and ethics you linked to, namely “But it is definitely not reasonable to assume that we do not live in a canonically infinite cosmos; and that is all we need here. Any ethical theory that fails to cope with this empirical contingency must be rejected.”
In my line of work, this works out to: “If you don’t know for certain that you don’t live in an infinite cosmos, don’t build a Friendly AI that will kersplode if you do.” So yes, I must professionally agree with Nick, though as an infinite set atheist, my life would be a lot simpler if I didn’t have to.
If you have an alternating series which is conditionally but not absolutely convergent, the Riemann series theorem says that reordering its terms can change the result, or force divergence. So you can’t pull a series of bets apart into two series, and expect their sums to equal the sum of the original. But the fact that you assumed you could…
I didn’t assume I could. I was complaining about the fact that I couldn’t.
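For readers who want to see the rearrangement point concretely, here is a minimal numerical sketch; it is my own illustration using the alternating harmonic series, not any series of bets from this exchange. Summed in its usual order the series approaches ln 2, but a rearrangement that takes two positive terms for each negative term approaches (3/2) ln 2 instead.

```python
# Illustrative sketch only: the alternating harmonic series
# 1 - 1/2 + 1/3 - 1/4 + ... converges conditionally to ln 2, but a
# rearrangement taking two positive terms per negative term converges
# to (3/2) ln 2 -- the Riemann series theorem in action.
import math

N = 200_000  # number of terms in each partial sum

# Standard ordering: +1/1, -1/2, +1/3, -1/4, ...
standard = sum((-1) ** (k + 1) / k for k in range(1, N + 1))

# Rearranged ordering: two positive (odd-denominator) terms, then one
# negative (even-denominator) term, repeated.
rearranged = 0.0
pos, neg = 1, 2  # next odd and even denominators to use
for _ in range(N // 3):
    rearranged += 1 / pos + 1 / (pos + 2) - 1 / neg
    pos += 4
    neg += 2

print(f"standard ordering   ~ {standard:.4f}  (ln 2       = {math.log(2):.4f})")
print(f"rearranged ordering ~ {rearranged:.4f}  (3/2 * ln 2 = {1.5 * math.log(2):.4f})")
```

Same terms, different order, different sum: that is all the Riemann series theorem needs, and it is exactly why the bets can’t be pulled apart into two series.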
Neel, please justify the statement that a Bayesian must always choose the same option in the infinite case that would dominate if the options were finite. Arntzenius et al. refute this, in what seems to me a fairly rigorous fashion, but you said that they “accepted the Dutch book”—you didn’t say how. You keep insisting that a Bayesian must do this or that, and you are not justifying it, and it seems to me to be simply wrong.
I can have an unbounded utility function, live in an infinite universe, and simply not accept all of McGee’s bets. I could just accept the first 3^^^3 of them, and do pretty well.
I then fail to maximize, yes, but there is no optimal plan in this case, any more than there’s a largest integer. See also Arntzenius et al.’s case of Satan’s Apple.
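To make the “no optimal plan” structure concrete, here is a toy sketch; it is my own illustration, not McGee’s actual bets or Arntzenius et al.’s exact setup. Each accepted item adds a util, but accepting all infinitely many of them is catastrophic, so every finite plan is strictly dominated by a longer finite plan and nothing counts as the maximizing choice.

```python
# Toy illustration only -- my construction, not McGee's bets or Arntzenius
# et al.'s Satan's Apple in detail. Each accepted item is worth one util,
# but accepting all infinitely many of them is catastrophic, so every
# finite plan is strictly dominated by a longer one: there is no optimal
# plan, just as there is no largest integer.

def utility_of_accepting_first(n: int) -> float:
    """Utility of accepting exactly the first n items, for finite n."""
    return float(n)  # each item adds one util in this toy setup

UTILITY_OF_ACCEPTING_ALL = float("-inf")  # the catastrophic limiting case

for n in (1, 10, 1000, 10**6):
    print(f"accept the first {n} items: utility {utility_of_accepting_first(n)}")
# Every finite n is beaten by n + 1, and accepting everything is worst of
# all, so "pick the utility-maximizing plan" picks out nothing.
```

Stopping after 3^^^3 items “fails to maximize” only in the sense that every plan does.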
If you just go on insisting that I, as a Bayesian, am committed to certain actions, without justifying your statement or explaining the flaw in Arntzenius’s math, then I may have to simply write you off as unresponsive, and hopefully as persuading no one except yourself. If another commenter indicates that they agree with you, I will accept that your arguments are not so obviously unfinished as they appear. I do have to triage my time here, and it seems to me that you are not responding to my arguments, except to flatly assert that a Bayesian must do certain things which neither I nor Arntzenius et al. think a Bayesian must do.