Ah, I get it now, you believe that all life is necessarily a net negative. That existing is less of a good than dying is of a bad.
I disagree, and I suspect almost everyone else here does too. You’ll have to provide some justification for that belief if you wish us to adopt it.
I’m not sure I disagree, but I’m also not sure that dying is a necessity. We don’t understand physics yet, much less consciousness; it’s too early to treat death as a certainty, which means I have a significantly nonzero confidence that life is an infinite good.
Doesn’t that make most expected utility calculations make no sense?
A problem with the math, not with reality.
There are all kinds of mathematical tricks for dealing with infinite quantities. Renormalization is something you’d be familiar with from physics; from my own CS background, I’ve got asymptotic analysis (which can’t see the fine details, but can easily handle the large ones). Even something as simple as taking the derivative of your utility function would often be enough to tell which alternative is best.
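To make that concrete, here’s a toy sketch (entirely my own illustration, with made-up per-step utility functions, nobody’s actual decision procedure): two futures whose cumulative utilities both diverge to infinity, compared by the growth rate of their partial sums rather than by the undefined totals.

```python
# Toy comparison of two futures whose total utility is infinite in the limit.
# The per-step utility functions below are invented purely for illustration.

def partial_sums(rate_fn, horizon):
    """Cumulative utility up to each step, for per-step utility rate_fn(t)."""
    total = 0.0
    sums = []
    for t in range(1, horizon + 1):
        total += rate_fn(t)
        sums.append(total)
    return sums

future_a = lambda t: 1.0            # cumulative utility grows linearly
future_b = lambda t: 2.0 + 0.1 * t  # cumulative utility grows quadratically

horizon = 1000
a = partial_sums(future_a, horizon)
b = partial_sums(future_b, horizon)

# "Take the derivative": compare how fast each alternative is adding utility.
print(b[-1] - b[-2] > a[-1] - a[-2])  # True: b dominates at the margin
# And b also dominates at every finite horizon, even though both totals diverge.
print(b[-1] > a[-1])                  # True
```

The totals never settle down, but the marginal comparison does - and that’s all you need to rank the alternatives.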
I’ve also got a significantly nonzero confidence of infinite negative utility, mind you. Life isn’t all roses.
We already donate based on the assumption that superhuman AI is possible, and that it is right to base our decisions on its extrapolated utility and that of a possible galactic civilisation. Why can’t we make decisions based on a more evidence-based economic and physical assumption, namely a universe that is unable to sustain a galactic civilisation for most of its lifespan, and the extrapolated suffering that follows from that prediction?
Well, first off…
What kind of decisions were you planning to take? You surely wouldn’t want to make a “friendly AI” that’s hardcoded to wipe out humanity; you’d expect it to come to the conclusion that that’s the best option by itself, based on CEV. I’d want it to explain its reasoning in detail, but I might even go along with that.
My argument is that it’s too early to take any decisions at all. We’re still in the data collection phase, and the state of reality is such that I wouldn’t trust anything but a superintelligence to be right about the consequences of our various options anyway.
We can decide that such a superintelligence is right to create, yes. But having decided that, it makes an awful lot of sense to punt most other decisions over to it.
True, I have to read up on CEV and see whether there is a possibility that a friendly AI could decide to kill us all to reduce suffering in the long term.
The whole idea in the OP stems from the kind of negative utilitarianism that suggests it is not worth torturing 100 people infinitely to make billions happy. So I thought to extrapolate this and ask: what if we figure out that in the long run most entities will be suffering?
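Roughly, the intuition formalised as a toy comparison (the numbers are invented purely for illustration, and “strong” negative utilitarianism here just means only suffering counts):

```python
# Toy contrast of classical vs. strong negative utilitarianism on the
# "torture a few to make billions happy" trade. All numbers are made up.

tortured = 100
happy = 3_000_000_000

u_tortured = -1000.0  # hypothetical disutility per tortured person
u_happy = 1.0         # hypothetical utility per happy person

# Classical utilitarian total: happiness is allowed to outweigh suffering.
classical = tortured * u_tortured + happy * u_happy

# Strong negative utilitarian total: only the suffering term counts,
# so no amount of happiness compensates for the torture.
negative = tortured * u_tortured + happy * min(u_happy, 0.0)

print(classical > 0)  # True: classical utilitarianism approves the trade
print(negative < 0)   # True: negative utilitarianism rejects it
```

If most future entities end up suffering, the second objective turns sharply negative no matter how many happy ones there are - which is the extrapolation I’m worried about.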
Negative utilitarianism is… interesting, but I’m pretty sure it implies an immediate requirement to collectively commit suicide no matter what (unless continued existence, inevitably(?) ended by death, is somehow less bad than suicide, which seems unlikely) - am I wrong?
That’s not at all similar to your scenario, which holds the much more reasonable assumption that the future might be a net negative even while counting the positives.