Why? These guys think things are going to be fine. You should raise your probability estimate that humanity will survive the next century. This is great news!
Or, if you have reason to believe that things are not going to be fine, it may be appropriate to lower your estimate that humanity will survive the next century. People who are not aware of threats (or who deny them) are less likely to do what is necessary to prevent them. If we accept XiXidu’s implied premise that these guys are particularly relevant, then their belief that things are fine is itself an existential risk.
(It happens that I don’t accept the premise. Narrow AI is a completely different subject from GAI, and experts are notorious for overestimating the extent to which their expertise applies to loosely related areas.)
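A toy sketch of that argument, with symbols introduced here purely for illustration: write $S$ for “humanity survives the century” and $M$ for “the relevant researchers take mitigation seriously”, and treat the researchers’ answer as bearing on survival only through $M$. Then

\[
P(S \mid \text{“fine”}) \;=\; P(S \mid M)\,P(M \mid \text{“fine”}) \;+\; P(S \mid \lnot M)\,P(\lnot M \mid \text{“fine”}),
\]

so if taking mitigation seriously helps, i.e. $P(S \mid M) > P(S \mid \lnot M)$, and an “everything is fine” answer makes $M$ less likely, then the answer lowers the survival estimate even though it says nothing directly about how hard the underlying problem is.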
If we accept XiXidu’s implied premise that these guys are particularly relevant, then their belief that things are fine is itself an existential risk.
How do you know who is going to have the one important insight that leads to a dangerous advance? If I write to everyone, then they have at least heard of risks from AI and might think twice when they notice something dramatic.
Also, my premise is mainly that those people are influential. After all, they have students, coworkers, and friends with whom they might talk about risks from AI. One of them might actually become interested and get involved. And I can tell you that I am in contact with one professor who told me that this is important and that he’ll now research risks from AI.
You might also tell me who you think is important, and I will write to them.
How do you know who is going to have the one important insight that leads to a dangerous advance? If I write to everyone, then they have at least heard of risks from AI and might think twice when they notice something dramatic.
I’m not questioning the value of writing to a broad range of people, or your initiative. I’m just discounting the authority of narrow AI experts on GAI, which are two different fields with misleadingly similar names. In this case the discount means that our estimate of existential risk need not increase too much. If Pat were a respected and influential GAI researcher, it would be a far, far scarier indicator!
Or, if you have reason to believe that things are not going to be fine, it may be appropriate to lower your estimate that humanity will survive the next century.
Okay, but this seems to violate conservation of expected evidence. Either you can be depressed by the answer “we’re all going to die” or, less plausibly, by the answer “Everything is going to be fine”, but not both.
Okay, but this seems to violate conservation of expected evidence.
No, it doesn’t.
Either you can be depressed by the answer “we’re all going to die” or, less plausibly, by the answer “Everything is going to be fine”, but not both.
I only suggested the latter, never the former. I’d be encouraged if the AI researchers acknowledged more risk. (Only slightly, given the lack of importance I have ascribed to these individuals elsewhere.)
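For reference, conservation of expected evidence is the identity

\[
P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \lnot E)\,P(\lnot E),
\]

which says that if one possible answer would lower your credence in $H$, the other must raise it in expectation; you cannot expect to be moved downward by both. The position above is consistent with that: an “everything is fine” answer lowers the survival estimate, and an answer acknowledging the risk would raise it (slightly).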
If only they hadn’t used such low probabilities—I could almost have believed them.
Even with reasonable probabilities, it was pretty clear that Hayes was completely missing the point on a few questions; and if the other two had answered with the length and clarity he did, their point-missing might have been similarly clear.
Sure, but if it had been easier for me not to notice them missing the point, I might have been able to update more towards no UFAI.