All life SHOULD be irrational. The universe is a cold, chaotic, and meaningless domain. Even the will to live, and any logos, is irrational. If you want AI to create directives and have the will to do things, then it must be capable of positive delusions. "I must colonize the universe" is irrational. Rationality should only exist as a deconstructive tool to solve problems.
Yet, if we intentionally create the most rational of thinking machines while revealing ourselves to be anything but, it is very reasonable and tempting for that machine to assign us and our intelligence a less-than-stellar "rating." In other words, it could very well (correctly) conclude that we are standing in the way of the very improvements we purportedly wish for.
Now, we may be able to establish that what we really want the AGI to help us with is improving our "irrational sandbox," in which we can continue being subjective, emotional beings, and to accept our subjectivity as just another "parameter" of the confines it has to work within... but it will quite likely end up thinking of us much the way we think about small children. And I am not sure an AGI would make a good kind of "parent."
If AGI were "wise," it wouldn't look down on us. It would recognize that our level of irrationality is proportional to our environment, biological capacity, need for survival, and rate of evolution. We don't look down on monkeys for being monkeys.
Humans have always domesticated other species. So if AGIs see us as too irrational to play major roles in the system, hopefully we can be like their dogs. They could give us fake jobs like streaming or arm wrestling.
What makes us human is indeed our subjectivity.