Feasibility of Creating Non-Human or Non-Sentient Machine Intelligence
What defines a human mind? Or a sentient mind in general?
From a computational perspective, a human mind is a member of a particular class of complex programs within the far larger space of possible general intelligences, or minds in general.
The most succinct delineator of the human sub-category of minds is simply that of having a human ontology. This entails information such as: a large body of embodiment-derived knowledge we call common sense, one or more human languages, memories, beliefs, ideas, values, and so on.
Take a human infant; for a dramatic example, let us use a genetic clone of Einstein. Raise this clone amongst wild animals and the developmental result is not anything remotely resembling Einstein; in fact it is not a human mind at all, but something much closer to a primate mind. The brain is the hardware, the mind is software, and the particular mind of Einstein was the unique result of a particular developmental history and observation sequence.
If the mind is substrate-independent, this raises the question of to what extent it is also algorithm-independent. If an AGI has a full human ontology, on what basis can we say that it is not human? If it can make the same inferences over the same knowledge base, understands one of our languages, and has similar memories, ideas, and values, in what way is it not human?
Substrate and algorithm independence show us that it doesn’t really matter in the slightest *how* something thinks internally; all that matters is the end functional behavior, the end decisions.
Certainly there are some classes of AGI designs that would exhibit thought patterns and behaviors well outside the human norm, but these crucially all involve changing some aspect of the knowledge base or how it connects to decisions. For example, an AGI based solely on reinforcement learning algorithms would appear to be incapable of abstract, model-based value decisions. This would show up as a glaring contrast between the AGI’s decisions and its linguistically demonstrable understanding of terms such as ‘value’, ‘good’, ‘moral’, and so on. Of course humans’ actual decisions are often at odds with their stated values in a similar fashion, but most humans, most of the time, make important decisions they believe are ‘good’.
A reinforcement-learning-based AGI with no explicit connection between value concepts such as ‘good’ and its reward-maximizing utility function would not necessarily be inclined to make ‘good’ decisions. It may even be completely aware of this feature of its design and be quite capable of verbalizing it.
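To make this decoupling concrete, here is a minimal toy sketch in Python; all class, method, and attribute names are hypothetical illustrations rather than anything from an actual system. The point is only structural: action selection reads the reward estimate alone, while the ‘good’ concept sits in the world model where the decision loop never consults it.

```python
# Illustrative sketch only (all names hypothetical): an RL-style agent whose
# action selection depends solely on a learned reward estimate. Its world model
# may contain a rich concept of 'good', but act() never reads it.

class RewardMaximizingAgent:
    def __init__(self, q_estimate, world_model):
        self.q_estimate = q_estimate    # maps (state, action) -> expected reward
        self.world_model = world_model  # learned ontology, including a 'good' concept

    def act(self, state, actions):
        # Decision depends only on the reward channel.
        return max(actions, key=lambda a: self.q_estimate(state, a))

    def verbalize(self, concept):
        # The agent can still talk fluently about 'good' -- the concept exists
        # in its ontology -- it just plays no causal role in act().
        return self.world_model.describe(concept)
```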
But RL techniques are just one particular class of algorithms, and we can probably do much better with designs that form model-based utility functions, ones that actually incorporate higher-order learned values encoded into the ontology itself.
Such a design would make ‘good’ decisions, and if truly of human or surpassing intelligence, would be fully capable of learning, refining, and articulating complex ethical/moral frameworks which in turn refine its internal concept of ‘good’. It would consistently do ‘good’ things, and would be fully capable of explaining why they were good.
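For contrast, here is an equally minimal sketch of the model-based alternative, using the same hypothetical interfaces as above: the utility function is read off the learned ontology’s own ‘good’ concept, so refining that concept directly refines the decisions.

```python
# Companion sketch (again hypothetical): a model-based agent whose utility is
# computed from the learned ontology itself, so the same concept it can
# articulate as 'good' also scores the outcomes it chooses between.

class ModelBasedValueAgent:
    def __init__(self, world_model):
        self.world_model = world_model  # learned ontology with a 'good' concept

    def utility(self, outcome):
        # Utility is the ontology's own evaluation of the outcome, so refining
        # the concept of 'good' directly refines the utility function.
        return self.world_model.evaluate("good", outcome)

    def act(self, state, actions):
        # Choose the action whose predicted outcome the ontology rates as best.
        return max(actions,
                   key=lambda a: self.utility(self.world_model.predict(state, a)))
```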
Naturally the end values, and thus the decisions, of any such system would depend on what human knowledge it learned, what it read or absorbed and in what order, but is that really any different from the alternatives?
And how could we say that such a system would not be human? How does one make a non-arbitrary division between a human-capable AGI and, say, a human upload?