Show them evidence that is inconsistent with their world view?
That a piece of evidence is consistent or inconsistent with their world view relies on arguments. Remember, standard practice among pundits is to observe evidence, then fit it to their theory, rather than using theory to predict evidence, observing evidence, and then updating. If someone is in the first mode, where’s the step where they notice that they made a wrong prediction?
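To make that contrast concrete, here is a toy sketch (made-up numbers, purely illustrative) of the "theory predicts, evidence arrives, belief updates" loop that the pundit workflow skips:

```python
# Toy sketch of "theory -> prediction -> evidence -> update" (hypothetical numbers).
# Two theories assign different probabilities to an observable event before it happens;
# after the observation, Bayes' rule shifts belief toward the theory that predicted better.

def bayes_update(prior_a, likelihood_a, likelihood_b):
    """Posterior probability of theory A after the predicted event is observed."""
    prior_b = 1.0 - prior_a
    evidence = prior_a * likelihood_a + prior_b * likelihood_b
    return prior_a * likelihood_a / evidence

# Before the evidence, both theories are taken equally seriously.
prior_a = 0.5
# Theory A predicted the event with probability 0.8, theory B with 0.2.
posterior_a = bayes_update(prior_a, likelihood_a=0.8, likelihood_b=0.2)
print(f"P(theory A | event) = {posterior_a:.2f}")  # 0.80

# The pundit failure mode has no analogue of this step: the "prediction" is only
# written down after the evidence is in, so nothing can ever score badly.
```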
Show them how with your view of the world they can predict the world better. Otherwise you are expecting people to get on board with an abstract philosophical argument, which I think people are inured against.
Relatedly, that predictive accuracy is the thing to optimize for relies on arguments.
Pundits are probably not worth bothering with. But I think there are hardcore engineers that would be useful to convince.
I think that Andrew Ng probably optimizes for predictive accuracy (at least he has to whilst creating machine learning systems).
This was his answer when asked whether AI is an existential threat here. I don't know why he objects to this line of thought, but the things I suggested above would be useful in his case.
AI has made tremendous progress, and I’m wildly optimistic about building a better society that is embedded up and down with machine intelligence. But AI today is still very limited. Almost all the economic and social value of deep learning is still through supervised learning, which is limited by the amount of suitably formatted (i.e., labeled) data. Even though AI is helping hundreds of millions of people already, and is well poised to help hundreds of millions more, I don’t see any realistic path to AI threatening humanity.
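For concreteness, here is a minimal, generic sketch (synthetic data, scikit-learn assumed as the library, nothing Ng-specific) of what "optimizing for predictive accuracy" cashes out to in the supervised-learning work he is describing:

```python
# Minimal supervised-learning loop: the only thing being optimized is
# accuracy on held-out labeled data. Synthetic stand-in for a
# "suitably formatted (i.e., labeled)" dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# A practitioner keeps whichever model scores best here; a theory that
# cannot move this number gives them nothing to act on day to day.
```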
If the theories from MIRI about AI can help him make better machine learning systems, I think he would take note.
I think the fact that the famous people the public associates with AI now are not the same people as the ones warning about the dangers is a red flag for people.
But I think there are hardcore engineers that would be useful to convince.
Sure, because it would be nice if there were 0 instead of 2 prominent ML experts who were unconvinced. But 2 people is not a consensus, and the actual difference of opinion between Ng, LeCun, and everyone else is very small, mostly dealing with emphasis instead of content.
From a survey linked from that article (which the article cherry-picks a single number from… sigh), it looks like there is a disconnect between theorists and practitioners, with theorists more likely to believe in hard takeoff (theorists give a 15% chance that we will get superintelligence within 2 years of human-level intelligence, practitioners only 5%).
I think you would have found nuclear physicists quickly assigning a higher probability to chain reactions once a realistic pathway that released 2 neutrons per fission was shown.
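To spell out why that one number mattered, a toy calculation (just arithmetic, not physics, and my own illustrative figures): with a multiplication factor above 1 the process grows exponentially, below 1 it fizzles, and the hard-takeoff argument has the same shape.

```python
# Toy illustration: each generation multiplies the population by k.
def population(k, generations, start=1.0):
    n = start
    for _ in range(generations):
        n *= k
    return n

for k in (0.9, 1.0, 2.0):
    print(f"k={k}: after 20 generations -> {population(k, 20):,.1f}")
# k=0.9 -> ~0.1 (fizzles), k=1.0 -> 1.0 (steady), k=2.0 -> 1,048,576.0 (explodes)
```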
mostly dealing with emphasis instead of content.
MIRI/FHI has captured the market for worrying about AI. If they are worrying about the wrong things, that could be pretty bad.