There’s this, which doesn’t seem to depend on utilitarian or transhumanist arguments:
Ajeya Cotra’s Biological Anchors report estimates a 10% chance of transformative AI by 2031, and a 50% chance by 2052. Others (e.g. Eliezer Yudkowsky) think it might happen even sooner.
Let me rephrase this in a deliberately inflammatory way: if you’re under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even [just] your children. You and everyone you know.