I’ve only skimmed it but so far I’m surprised Bostrom didn’t discuss a possible future where AI ‘agents’ act as both economic producers and consumers. Human population growth would seem to be bad in a world where AI can accommodate human decline (i.e. protecting modern economies from the loss of consumers and producers), since finite resources will be a pie that gets divided into either smaller slices or larger ones depending on the number of humans around to allocate them to. And larger slices would seem to increase average well-being. Maybe he addressed it but I missed it in my skim.
You seem to assume we should endorse something like average utilitarianism. Bostrom and I consider total utilitarianism to be closer to the best moral framework. See Parfit’s writings if you want a deep discussion of this topic.
Thanks! Just read some summaries of Parfit. Do you know any literature that addresses this issue in the context of a) impacts on other species, or b) using artificial minds as the additional population? I assume total utilitarianism presupposes arbitrarily growing physical space for populations to expand into, and so wouldn’t apply to finite spaces or resources (I think I recall Bostrom addressing that).
Reading up on Parfit also made me realize that Deep Utopia really has prerequisites, and you were right that it’s probably more readily understood by those with a philosophy background. I didn’t really understand what he was saying about utilitarianism until just now, reading about Parfit.