[Question] What if AI is “IT”, and we don’t know it?
I am using the character IT from the film “IT” as a metaphor for a problem in AGI research. AI could potentially learn (or may already have learned) all our irresistible pleasures and agonizing fears, much as IT does in the film.
AGI initially “knows”, or can deduce from, only what we provide to it. Ideally, we want AGI to have exactly as much information as it needs (no more, no less), so that it can give us the most relevant inferences to inform our decisions. Importantly, we don’t want to hand AGI all the information we have, because we want to preserve privacy and autonomy.
Our goals and values are not stable; they change as new information arrives. By “our” I mean every level: the goals and values of individuals, groups, organizations, nations, and society as a whole.
Question: Can we already understand “what we truly want” as individuals, groups, and society, and what is best for us, by extracting the most relevant data we have and applying appropriate machine learning tools? The same question applies to “what we truly fear” and what is actually worst for us: can that already be extracted from the data?
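To make the question slightly more concrete, here is a minimal, hypothetical sketch of one such “appropriate machine learning tool”: a Bradley-Terry preference model fit to pairwise choices, a standard way to infer latent utilities (“what we truly want”) from observed behavior. Everything below is an illustrative assumption, not a claim about any deployed system; the data is synthetic and the model is deliberately toy-sized.

```python
# A toy sketch: infer hidden utilities from pairwise choices.
# Bradley-Terry model: P(i chosen over j) = sigmoid(u[i] - u[j]).
import numpy as np

rng = np.random.default_rng(0)

n_options = 5
true_utility = rng.normal(size=n_options)  # hidden "what we truly want"

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simulate observed choice data: tuples (i, j, whether i was chosen).
choices = []
for _ in range(2000):
    i, j = rng.choice(n_options, size=2, replace=False)
    chose_i = rng.random() < sigmoid(true_utility[i] - true_utility[j])
    choices.append((i, j, float(chose_i)))

# Recover utilities by gradient ascent on the log-likelihood.
u = np.zeros(n_options)
lr = 0.5
for _ in range(300):
    grad = np.zeros(n_options)
    for i, j, y in choices:
        p = sigmoid(u[i] - u[j])
        grad[i] += y - p
        grad[j] -= y - p
    u += lr * grad / len(choices)

# Utilities are identifiable only up to an additive constant,
# so compare rankings rather than raw values.
print("true ranking:    ", np.argsort(-true_utility))
print("inferred ranking:", np.argsort(-u))
```

Even this toy model recovers the ranking of hidden preferences from behavior alone, which is the unsettling point of the question: the same machinery, applied symmetrically, could rank “what we truly fear”.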