Maybe 50%? I’m not sure. I do know that if there were an asteroid nearby with the same probability of impacting Earth, I’d be running up to people, shaking them, and shouting “WHY AREN’T WE BUILDING MORE ASTEROID DEFLECTORS?! WHAT’S WRONG WITH YOU PEOPLE?!”
That would be more than enough to justify devoting a big chunk of the world’s resources to friendly AI research, given the associated utility. But you can’t just make up completely unfounded conjectures, claim that we don’t have evidence either way (50% chance), and then argue that because the utility associated with a negative outcome is huge we should take it seriously. That reasoning will ultimately make you privilege random high-utility outcomes over theories based on empirical evidence.
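For what it’s worth, here is a minimal sketch of the arithmetic that objection points at. The probabilities and utilities below are made-up illustrative assumptions, not anyone’s actual estimates:

```python
# Toy expected-utility comparison. All numbers are invented, purely to
# illustrate how made-up high-stakes conjectures can dominate.

conjectures = {
    # name: (probability assigned, utility at stake if true)
    "well-evidenced theory":            (0.95, 1_000),
    "unfounded doomsday conjecture":    (0.50, 10**15),  # "no evidence either way"
    "another random huge-payoff claim": (0.50, 10**15),
}

for name, (p, utility) in conjectures.items():
    print(f"{name}: expected utility = {p * utility:,.0f}")

# The made-up conjectures swamp the well-evidenced theory purely because
# someone attached an enormous utility to them; that is the
# "privileging random high-utility outcomes" worry in a nutshell.
```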
This seems like a fully general counterargument. “Sure, the evidence for evolution sounds convincing; but how do you know it’s actually true and you aren’t just being tricked?”
You can’t really compare that. The arguments for evolution are pretty easy to understand and the evidence is overwhelming. But Eliezer Yudkowsky could tell me anything about AI and I would have no way to tell if he was right or not even wrong.
After that, yeah, I do intend to donate a lot to SIAI—albeit, as I said before, I don’t claim I’ll be anywhere near perfect.
I see. That makes me take you much more seriously.
But Eliezer Yudkowsky could tell me anything about AI and I would have no way to tell if he was right or not even wrong.
You know, one upside of logic is that, if someone tells you proposition x is true, gives you the data, and shows their steps of reasoning, you can tell whether they’re lying or not. I’m not a hundred percent on board with Yudkowsky’s AI risk views, but I can at least tell that his line of reasoning is correct as far as it goes. He may be making some unjustified assumptions about AI architecture, but he’s not wrong about there being a threat. If he’s making a mistake of logic, it’s not one I can find. A big, big chunk of mindspace is hostile-by-default.
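To make the “you can check the steps” point concrete, here is a minimal sketch, using an invented toy argument, of how validity (as opposed to the truth of the assumptions) can be checked mechanically:

```python
from itertools import product

# Toy propositional argument, invented for illustration:
#   P1: if a powerful AI has misaligned goals, it is a threat
#   P2: a powerful AI with misaligned goals exists
#   C:  there is a threat
variables = ["powerful_ai", "goals_misaligned", "threat"]

def premise_1(v):
    return (not (v["powerful_ai"] and v["goals_misaligned"])) or v["threat"]

def premise_2(v):
    return v["powerful_ai"] and v["goals_misaligned"]

def conclusion(v):
    return v["threat"]

# Valid iff every truth assignment satisfying the premises also satisfies
# the conclusion; this says nothing about whether the premises are true.
valid = all(
    conclusion(v)
    for assignment in product([True, False], repeat=len(variables))
    for v in [dict(zip(variables, assignment))]
    if premise_1(v) and premise_2(v)
)
print("argument is valid:", valid)  # prints: argument is valid: True
```

That separation is the point: the chain of reasoning can be checked even while the premises (the assumptions about AI architecture) remain up for debate.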