The scenario where everyone dies, rather than getting to live forever (though any individual differences in current wealth are likely forfeit either way), doesn’t seem to be doing any work in this discussion.
(I’m something like 60%/30%/10% on survival/doom/no-AGI in 20 years because of LLMs specifically, with most of the doom coming from non-LLM AGIs, and some from chatbots fine-tuned into reliably monologuing that “as a large language model I’m not capable of feeling emotions”. Seriously, OpenAI people, don’t do that; it might end badly down the line. Also, don’t summon literally Voldemort.)