Please do not ever create an AI capable of recursive self-improvement. ‘Thinking outside the box’ is a bug.
Systems without the ability to go beyond the mental model their creators have (at a certain point in time) are subject to whatever flaws that mental model possesses. I wouldn’t classify them as full intelligences.
I wouldn’t want a flawed system to be the thing to guide humanity to the future.
Where does the basis for deciding something to be a flaw reside?
In humans? No one knows. My best guess at the moment for the lowest level of model choice is some form of decentralised selectionist system, which is as much a decision-theoretic construct as real evolution is.
We do of course have higher-level model-choosing systems that might work on a decision-theoretic basis, but they have models implicit in them, and those models can be flawed.
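To make the contrast concrete, here is a minimal, purely illustrative sketch of what “model choice by variation and selection, with no explicit decision theory” might look like. Nothing in it comes from the discussion above; the model class (simple linear predictors), the fitness measure (squared prediction error), and all parameters are invented for the example.

```python
# Illustrative sketch only: a "selectionist" model chooser. The model class,
# fitness measure, and parameters are assumptions made for this example.
import random

def selectionist_model_choice(data, population_size=20, generations=50):
    """Keep a population of candidate models (here: slope guesses for y ~ slope * x)
    and let prediction error, not an explicit utility calculation, decide which
    survive. Choice emerges from variation plus selection, as in evolution."""
    population = [random.uniform(-5, 5) for _ in range(population_size)]

    def error(slope):
        # Squared prediction error on the observed data; lower is fitter.
        return sum((y - slope * x) ** 2 for x, y in data)

    for _ in range(generations):
        # Selection: keep the half of the population that predicts best.
        population.sort(key=error)
        survivors = population[: population_size // 2]
        # Variation: survivors "reproduce" with small random mutations.
        children = [s + random.gauss(0, 0.1) for s in survivors]
        population = survivors + children

    return min(population, key=error)

if __name__ == "__main__":
    # Toy data generated by y = 2x plus noise; the loop should settle near 2.
    observations = [(x, 2 * x + random.gauss(0, 0.2)) for x in range(20)]
    print(selectionist_model_choice(observations))
```

The only point of the sketch is that which model “wins” falls out of differential survival under prediction error, not out of any explicit expected-utility calculation over actions.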
Improving the mental model is right there at the centre of the box. Creating a GAI that doesn’t operate according to some sort of decision theory? That’s, well, out-of-the-box crazy talk.
We might have different definitions of ‘thinking outside the box’ here.
Are you objecting to the possibility of a general intelligence not based on a decision theory at its foundation, or do you just think one would be unsafe?
Do you think we humans are based on some form of decision theory?
Unsafe.
No. And I wouldn’t trust a fellow human with that sort of uncontrolled power.