What changed your mind?
Starting to seriously think about FAI and studying more rigorous system modeling techniques/theories changed my mind. There seems to be very little overlap between wild intuitions of ad-hoc AGI and technical challenges of careful inference/simulation or philosophical issues with formalizing decision theories for intelligence on overdrive.
Some of the intuitions from thinking about ad-hoc AGI seem to carry over, but they remain just that: intuitions. An understanding of approaches to more careful modeling, even when applicable only to “toy” problems, gives deeper insight than knowledge of a dozen “real projects”. Intuitions gained from ad-hoc work do apply, but only as naive, clumsy caricatures.