The development of advanced AI increases the risk of human extinction (by a non-trivial amount, e.g. 1%)
This is where I call BS. Even the best-calibrated people are not accurate at the margins. They probably cannot tell 1% from 0.1%. The rest of us can’t reliably tell 1% from 0.00001% or from 10%. If you are in doubt, ask those who self-calibrate all the time and are good at it (Eliezer? Scott? Anna? gwern?) how accurate their 1% predictions are.
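As a rough back-of-envelope (a simple normal approximation, with purely illustrative numbers): distinguishing a true 1% hit rate from a true 0.1% hit rate from your track record alone takes on the order of hundreds of resolved predictions at that level.

```python
# Back-of-envelope: how many resolved predictions before the gap between two
# candidate hit rates exceeds ~2 standard errors of the observed frequency?
# (Normal approximation; variance taken at the higher of the two rates.)
import math

def n_needed(p_hi, p_lo, z=2.0):
    gap = p_hi - p_lo
    sd = math.sqrt(p_hi * (1 - p_hi))   # per-prediction standard deviation
    return math.ceil((z * sd / gap) ** 2)

print(n_needed(0.10, 0.01))   # telling 10% from 1%: ~45 resolved predictions
print(n_needed(0.01, 0.001))  # telling 1% from 0.1%: ~490 resolved predictions
```

Few people have a track record of hundreds of resolved 1%-level predictions, which is the sense in which the tails are essentially uncheckable.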
Also notice your motivated cognition. You are not trying to figure out whether your views are justified, but how to convince those ignorant others that your views are correct.
I think capybaralet meant ≥1%.
I don’t think your last paragraph is fair; doing outreach / advocacy, and discussing it, is not particularly related to motivated cognition. You don’t know how much time capybaralet has spent trying to figure out whether their views are justified; you’re not going to get a whole life story in an 800-word blog post.
There is such a thing as talking to an ideological opponent who has spent no time thinking about a topic and has a dumb opinion that could not survive 5 seconds of careful thought. We should still be good listeners, not be condescending, etc., because that’s just the right way to talk to people; but realistically we’re probably not going to learn anything new (about this specific topic) from such a person, let alone change our own minds (assuming we’ve already deeply engaged with both sides of the issue).
On the other hand, when talking to an ideological opponent who has spent a lot of time thinking about an issue, we may indeed learn something or change our mind, and I’m all for being genuinely open-minded and seeking out and thinking hard about such opinions. But I think that’s not the main topic of this little blog post.
No, my goal is to:
1. Identify a small set of beliefs to focus discussions around.
2. Figure out how to make the case for these beliefs quickly, clearly, persuasively, and honestly.
And yes, I did mean >1%, but I just put that number there to give people a sense of what I mean, since “non-trivial” can mean very different things to different people.
That number was presented as an example (“e.g.”) - but more importantly, every number in the range you offer here would argue for more AI alignment research! What we need to establish, naively, is that the probability of facing a choice between ‘intergalactic civilization’ and ‘extinction of humanity within a century’ is not super-exponentially low. That seems easy enough if we can show that nothing in the claim contradicts established knowledge.
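To make that concrete with deliberately made-up stakes (the 10^20 figure below is just a placeholder for an intergalactic future, not a number from anyone’s analysis), every probability in the 0.00001%-to-10% range leaves an enormous expected stake:

```python
# Illustrative arithmetic only -- the 1e20 "stake" is a placeholder, not a
# figure from the discussion above. The point: every probability in the quoted
# 0.00001%-to-10% range leaves an enormous expected stake, which is the sense
# in which all of them argue for more alignment research.
FUTURE_LIVES_AT_STAKE = 1e20  # hypothetical scale of an intergalactic future

for p in (1e-7, 1e-3, 1e-2, 1e-1):  # 0.00001%, 0.1%, 1%, 10%
    print(f"P(extinction via AI) = {p:.0e}  ->  expected lives at stake = {p * FUTURE_LIVES_AT_STAKE:.1e}")
```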
I would argue the probability that this choice exists is far in excess of 50%. As examples of background information supporting this: Bayesianism implies that “narrow AI” designs should be compatible on some level; we know the human brain resulted from a series of kludges; and the superior number of neurons in an elephant’s brain is not strictly required for taking over the world. However, that argument is not logically necessary.
(Technically you’d have to deal with Pascal’s Mugging. However, I like Hansonian adjustment as a solution, and e.g. I doubt an adult civilization would deceive its people about the nature of the world.)
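For what it’s worth, here is a minimal sketch of one way to cash out that Hansonian adjustment, the “leverage penalty” of capping the prior probability of being in a position to affect N lives at roughly 1/N; the credence numbers are illustrative assumptions, not anything argued for above:

```python
# A rough sketch of the leverage-penalty reading of the Hansonian adjustment:
# cap the prior probability of being in a position to determine the fate of
# N beings at about 1/N. The mugger's expected payoff then stops growing once
# the cap binds, however large an N they claim. Numbers are purely illustrative.
def mugger_expected_payoff(claimed_lives, naive_credence=1e-3):
    penalized = min(naive_credence, 1.0 / claimed_lives)  # leverage-penalized credence
    return penalized * claimed_lives

for n in (1e2, 1e6, 1e12, 1e100):
    print(f"claimed lives = {n:.0e}  ->  expected payoff <= {mugger_expected_payoff(n):.2g}")
```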