Most such ‘experts’ have knowledge and metis in how to do engineering with machine learning, not in predicting the outcomes of future scientific insights that may or may not happen, especially when asked questions like ‘is this research going to cause an event whose measured impacts will be larger in scope than the Industrial Revolution?’. I don’t believe that there are relevant experts here, nor that I should straightforwardly defer to a body of people whose expertise lies in a related-but-distinct topic.
Large institutions of people often harbor many epistemic corruptions; they can easily be borked in substantive ways that sometimes give me grounds to believe they’re mistaken and untrustworthy on some question. I am not saying this is true for all questions, but my sense is that most ML people are operating in far mode when thinking about the future of AGI, and that a lot of the strings and floats they output when prompted are not very related to reality.