But Eliezer Yudkowsky could tell me anything about AI and I would have no way to tell whether he was right, wrong, or not even wrong.
You know, one upside of logic is that, if someone tells you proposition x is true, gives you the data, and shows their steps of reasoning, you can tell whether they’re lying or not. I’m not a hundred percent onboard with Yudkowsky’s AI risk views, but I can at least tell that his line of reasoning is correct as far as it goes. He may be making some unjustified assumptions about AI architecture, but he’s not wrong about there being a threat. If he’s making a mistake of logic, it’s not one I can find. A big, big chunk of mindspace is hostile-by-default.