It’s a feature of human cognition that we can make what feel like good arguments for anything.
I would tend to agree; confirmation bias is profound in humans.
you instead asked “hey, what would AI doom predict about the world, and are those predictions coming true?”
Are you suggesting a process such as: assume a future → predict what that future would require → compare the predictions with reality?
Rather than: observe reality → draw conclusions → predict the future