We can decide that such a superintelligence is right to create, yes. But having decided that, it makes an awful lot of sense to punt most other decisions over to it.
True, I have to read up on CEV and see whether there is a possibility that a friendly AI could decide to kill us all to reduce suffering in the long term.
The whole idea in the OP stems from the kind of negative utilitarianism that suggests it is not worth torturing 100 people infinitely to make billions happy. So I thought to extrapolate this and ask: what if we figure out that in the long run most entities will be suffering?
Negative utilitarianism is... interesting, but I’m pretty sure it entails an immediate requirement to collectively commit suicide no matter what (unless continued existence, inevitably(?) ended by death, is somehow less bad than suicide, which seems unlikely) - am I wrong?
That’s not at all similar to your scenario, which holds the much more reasonable assumption that the future might be a net negative even while counting the positives.