Another aspect of where I’m coming from is that there should be a high standard of proof for claiming that something is an important technical problem in future AI development, because it seems so hard to predict what will and won’t be relevant for distant future technologies.
On the other hand, trying to solve many things that each have a significant probability of being important, so that you’re likely to eventually solve something that actually is important, seems like a better idea than doing nothing because you can’t prove that any particular sub-problem matters.
I agree with this principle, but I think my claims are consistent with it: doing stuff other than “technical problems in the future of AI” is an alternative worth considering.