I think the chance that something which doesn’t immediately kill humanity, and isn’t actively trying to kill humanity, polishes us off for good is, at the very least, pretty low.
Humans have survived as hunter-gatherers for a million years. We’ve thrived in every climate under the sun. We’re not just going to roll over and die because civilisation has collapsed.
Not that this is much of a comfort if 99% of humanity dies.
You can totally have something that is trying to kill humanity in this framework, though. Imagine something in the style of chaos-GPT, locally agentic & competent enough to use state-of-the-art AI biotech tools to synthesize dangerous viruses or compounds to release into the atmosphere. (Note that in this example the critical part is the narrow-AI biotech tools, not the chaos-agent.)
You don’t need solutions to embedded agency, goal-content integrity & the like to build this. It is easier to build and sits earlier in the tech-tree than crisp maximizers. It won’t be stable enough to coherently take over the lightcone, just coherent enough to fold some proteins and print them.
But why would anyone do such a stupid thing?