It doesn’t matter how careful you are about AI if there are a million other civilizations in the universe and some non-trivial fraction of them aren’t being as careful as they should be.
A UFAI is unlikely to stop at the home planet of the civilization that created it. Rather, you’d expect such a thing to keep converting the remainder of the universe into computronium to store the integer for its fitness function, or some similar doomsday scenario.
AI doesn’t work as a filter, because it’s the kind of disaster that keeps spreading: we’d expect to see large parts of the sky going dark as the stars get turned into pictures of smiling faces or computronium.
Which argues either for AI risk not being so risky, or for an early filter that leaves few civilizations in the first place.
That is why I am against premature SETI. But also, if AI nanobots spread at near light speed, you can’t see black spots in the sky: the expansion front arrives almost on the heels of the light that would reveal it, leaving essentially no window in which darkened regions are visible before they reach you.
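A back-of-the-envelope sketch of that last point, under the simplifying assumption of a spherical front expanding at constant speed v from a source D light-years away: the earliest visible sign of darkening is light from the source region itself, arriving at t = D/c, while the front arrives at t = D/v, so the warning window is D(1/v − 1/c), which shrinks toward zero as v approaches c. The distance of 1000 light-years and the speeds below are illustrative, not claims about any actual expansion.

```python
# Warning window between first seeing a darkening front and being hit by it.
# Simplifying assumptions: a sphere expanding at constant speed v (as a
# fraction of c) from a source D light-years away; units with c = 1 ly/yr.

def warning_years(distance_ly: float, v_fraction_of_c: float) -> float:
    """Years between the first visible darkening (light from the source,
    arriving at t = D/c) and the front's own arrival (t = D/v)."""
    return distance_ly * (1.0 / v_fraction_of_c - 1.0)

for v in (0.5, 0.9, 0.99, 0.999):
    print(f"v = {v:5.3f}c, D = 1000 ly -> {warning_years(1000, v):7.1f} years of warning")

# Sample output:
# v = 0.500c, D = 1000 ly -> 1000.0 years of warning
# v = 0.900c, D = 1000 ly ->  111.1 years of warning
# v = 0.990c, D = 1000 ly ->   10.1 years of warning
# v = 0.999c, D = 1000 ly ->    1.0 years of warning
```

At half lightspeed a civilization 1000 light-years out gets a millennium of visible darkening before the front arrives; at 0.999c it gets about a year, which is effectively no warning on astronomical timescales.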