Improving the mental model is right there at the centre of the box. Creating a GAI that doesn’t operate according to some sort of decision theory? That’s, well, out of the box crazy talk.
Are you objecting to the possibility of a general intelligence not based on a decision theory at its foundation, or do you just think one would be unsafe?
Unsafe.
Do you think us humans are based on some form of decision theory?
No. And I wouldn’t trust a fellow human with that sort of uncontrolled power.
We might be having different definitions of thinking outside of the box, here.