The most general lesson is perhaps on the complexity of language and the danger of using human-understandable
informal concepts in the field of AI. The Dartmouth group seemed convinced that because they informally understood
certain concepts and could begin to capture some of this understanding in a formal model, then it must be possible to
capture all this understanding in a formal model. In this, they were wrong.
I think this is a great lesson to draw. Another lesson is that the Dartmouth folks either hadn’t noticed, or thought they could get around, the fact that much of what they were trying to do is covered by statistics, and statistics is hard. As it turned out, there is no royal road to learning from data.
Here’s my attempt to translate these lessons for folks who worry about foom:
(a) Taboo informal discussions of powerful AI and/or its implications. If you can’t discuss it in mathematical terms, it’s probably not worth discussing.
(b) Pay attention to where related fields are stuck. If, for example, coordination problems are hard, or getting existing optimization processes (corporations, governments, etc.) to do what we want is hard, that is food for thought about getting a constructed optimization process to do what we want.
I’m not sure how (a) follows from the lesson, though. Analysing the impact of a new technology seems mostly distinct from the research needed to develop it.
For example, suppose somebody looked at progress in chemistry and declared that soon the dreams of alchemy will be realized and we’d be able to easily synthesize any element we wanted out of any other. I’d call this a similar error to the one made by the Dartmouth group, but I don’t think it then follows that we can’t discuss what the impacts would be of being able to easily synthesize any element out of any other.
It might be good advice nonetheless, but I don’t think it follows from the lesson.
To that list of lessons I’d add: “initial progress in a field does not give a good baseline for estimating ultimate success”.