In my opinion, the most relevant article was from Drew McDermott, and I’m surprised that such an emphasis on analyzing the computational complexity of approaches to ‘friendliness’ and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singularity.
I’m thinking of specific concepts from Yudkowsky and others in the singularity/FAI crowd that seem uncontroversial at first glance but become unconvincing when analyzed in the light of computational complexity. One example is the space of possible minds, an assumption propping up many of the arguments about the negative consequences of careless AI engineering. Seen from the perspective of computability, that possibility space does represent the landscape of theoretically possible intelligent agents, and at first glance, those sensitive and wise enough to care about where in that landscape most successful AI engineering projects will end up are alarmed at the needle in the haystack that is our target for a positive outcome. But if you put on your computational complexity hat and analyze not just the particular algorithms representing AI systems themselves, but the engineering processes that work toward outputting those agents/systems, a very different landscape takes shape, one that drastically constrains the space of possible minds that are a.) of a cognitive class comparable with humans, and b.) reachable by a feasible engineering approach on a timescale T < the heat death of our universe. (I’m including the evolution of natural history on Earth within that set of engineering processes that output intelligence.)
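To make that constraint concrete, here is a rough back-of-the-envelope sketch. The specific figures are my own illustrative assumptions, not anything from the article: even with absurdly generous bounds on how fast an engineering or evolutionary process can evaluate candidate designs, and on how long it has to run, it can only ever sample a vanishing fraction of the space of programs large enough to encode a human-comparable mind.

```python
import math

# Back-of-the-envelope sketch with illustrative numbers of my own choosing
# (none of these figures come from the article being discussed).

program_bits = 10**9          # assume a human-comparable mind takes ~1 gigabit to specify
log10_search_space = program_bits * math.log10(2)   # log10 of the number of such programs

evals_per_second = 1e50       # an absurdly generous evaluation rate for any search process
seconds_available = 1e100     # comfortably longer than the time to the heat death of the universe
log10_candidates_tried = math.log10(evals_per_second) + math.log10(seconds_available)

# Fraction of the design space that any bounded engineering or evolutionary
# process could ever sample, in log10 terms -- effectively zero.
log10_fraction = log10_candidates_tried - log10_search_space
print(f"log10(search space)      ~ {log10_search_space:.3e}")
print(f"log10(candidates tried)  ~ {log10_candidates_tried:.3e}")
print(f"log10(fraction explored) ~ {log10_fraction:.3e}")
```

However generously you pick those numbers, the fraction explored stays around 10^(-300,000,000), which is why I think the minds that can actually be built look nothing like a uniform draw from the space of computable minds.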
This is but one example of how the neglect of computational complexity, and, frankly, the neglect of time as an important factor overall, has influenced the thinking of the SIAI/LessWrong et al. crowd. This neglect leads to statements such as Yudkowsky’s claim that an AI could be programmed on a circa-early-2000s desktop computer, which I find extremely hard to believe. It also leads to timeless decision theories, which I don’t feel will be of much importance. Scott Aaronson has made a career out of stressing the role of computational complexity in understanding the deep nature of quantum mechanics, and the same should apply to all natural phenomena, cognition and AI among them.