The social sciences are sciences; AI predictions are mainly speculative thinking by people who just put on their thinking caps and think really really hard about the future (see some of the examples in http://lesswrong.com/lw/e79/ai_timeline_prediction_data/).
Are you saying that these predictions are unscientific because they are based on untestable models? Or because the models are testable for “small” predictions, but the AI predictions based on them are wild extrapolations beyond the models’ validity?
Most predictions don’t use models; most models aren’t tested; and AI predictions based on tested models are generally wild extrapolations.
It does sound pretty bad if that’s the case. My suspicion is that the models are there, just implicit and poor-quality. Maybe trying to explicate, compare and critique them would be worthwhile.