I feel like people like Scott Aaronson who are demanding a specific scenario for how AI will actually kill us all… I hypothesize that most scenarios with vastly superhuman AI systems coexisting with humans end in the disempowerment of humans and either human extinction or some form of imprisonment or captivity akin to factory farming
Aaronson in that quote is “demanding a specific scenario” for how GPT-4.5 or GPT-5 in particular will kill us all. Do you believe they will be vastly superhuman?