I do have other issues with the security mindset, but that is an important issue I had.
Turning to this part though, I think I might see where I disagree:
Reversing biases of public perception isn’t much use for sorting out correctness of arguments.
It’s not just public perception; the researchers themselves are biased toward believing that danger exists or will happen. Critically, since this bias is asymmetrical, it has more implications for doomy people than for optimistic people.
It’s why I’m a priori a bit skeptical of AI doom, and it’s also why it’s consistent to believe that the real probability of doom is very low, almost arbitrarily low, even while people think the probability of doom is quite high: you don’t pay attention to the non-doom outcomes or the things that went right, only to the things that went wrong.
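As a toy illustration (my own sketch, not anything from the original exchange — all the numbers here are made up for demonstration), asymmetric attention alone can make a rare outcome look common. If failures are always noticed but successes rarely are, the frequency of failure among *noticed* events is far higher than the true rate:

```python
import random

random.seed(0)

# Hypothetical parameters: the true rate of bad outcomes is low, but the
# observer notices every failure and only a small fraction of successes.
TRUE_P_BAD = 0.02      # real probability an event goes wrong
P_NOTICE_BAD = 1.0     # failures are always noticed
P_NOTICE_GOOD = 0.05   # successes are rarely noticed

noticed = []
for _ in range(100_000):
    bad = random.random() < TRUE_P_BAD
    p_notice = P_NOTICE_BAD if bad else P_NOTICE_GOOD
    if random.random() < p_notice:
        noticed.append(bad)

# The perceived rate is the share of bad outcomes among remembered events,
# which is roughly 0.02 / (0.02 + 0.98 * 0.05) ~= 0.29, not 0.02.
perceived = sum(noticed) / len(noticed)
print(f"true rate: {TRUE_P_BAD}, perceived rate: {perceived:.2f}")
```

The point is only that a very low true probability and a high perceived probability are mutually consistent once attention is filtered this way.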
The researchers are not the arguments. You are discussing correctness of researchers.
Yes, that’s true, but I have more evidence than that; in particular, I have evidence that directly argues against the proposition of AI doom and against a lot of the common arguments for it.
The researchers aren’t the arguments, but the properties of the researchers looking into the arguments, especially the way they’re biased, do provide some evidence for certain propositions.