A group of specialized AIs doesn’t need to have shared goals or shared representations of the world. A group of interacting specialized AIs would certainly be a complex system that will likely exhibit unanticipated behavior, but this doesn’t mean that it will be an agent (an anthropic model created in economics to model the behavior of humans).
I don’t think this is a meaningful reply, or perhaps it’s just question-begging.
If having a coherent goal is the point of the human in the loop, then you are quietly ignoring the hypothetical given that ‘every human skill has been transferred’ and your points are irrelevant. If having a coherent goal is not what the human is supposed to be doing, well, every agent can be considered to ‘exhibit unanticipated behavior’ from the point of view of its constituent parts (what human behavior would you anticipate from a single hair cell?), and it doesn’t matter what the behavior of the complex system is—just that there is behavior. We can even layer on evolutionary concerns here: these complex systems will be selected upon and only the ones that act like agents will survive and spread!
Assuming that AI technology will necessarily lead to a super-intelligent but essentially human-like mind is anthropomorphization in the same sense that the gods of traditional religions are anthropomorphizations of complex and poorly understood (at the time) phenomena such as the weather or biological cycles or ecology.
Yeah, whatever.
Arguing against ‘necessarily leading to a super-intelligent but essentially human-like mind’ is a big part of Eliezer and LW’s AI paradigm in general going back to the earliest writings & motivation for SIAI & LW, one of our perennial criticisms of mainstream SF, AI writers, and ‘machine ethics’ writers in particular, and a key reason for the perpetual interest by LWers in unusual models of intelligence like AIXI or in exotic kinds of decision theories.
If you’ve failed to realize this so profoundly that you can seriously write the above (accusing LW of naive religious-style anthropomorphizing!), all I can conclude is that you are either very dense or have not read much material.