Re: All this talk of moral “danger” and things “better” than us is the execution of a computation embodied in humans and nowhere else; if you want an AI that follows that thought and cares about it, it will have to mirror humans in that sense.
“Better”—in the sense of competitive exclusion via natural selection. “Dangerous”—in that it might lead to the near-complete obliteration of our biosphere’s inheritance. No other moral overtones implied.
AI will likely want to avoid being obliterated—and will want to consume resources and use them to expand its domain. It will share these properties with all living systems—not just us. It doesn’t need to do much “mirroring” to acquire these attributes—they are naturally occurring phenomena for expected utility maximisers.
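A toy sketch of that last point (all names and numbers here are hypothetical, chosen only for illustration): whatever the terminal goal, an expected utility maximiser that compares plans will tend to prefer plans that first acquire resources, because resources raise the probability that the goal is achieved.

```python
# Toy illustration (hypothetical model): resource acquisition emerges as an
# instrumental step for an expected-utility maximiser with an arbitrary goal.

def success_probability(resources):
    # More resources -> higher chance the plan succeeds, with diminishing returns.
    return resources / (resources + 1)

def expected_utility(plan, goal_value=10.0):
    # A plan is a sequence of actions: "gather" adds a resource at a small
    # cost; "pursue_goal" attempts the terminal goal with current resources.
    resources = 1
    utility = 0.0
    for action in plan:
        if action == "gather":
            resources += 1
            utility -= 0.1  # small cost of acquiring a resource
        elif action == "pursue_goal":
            utility += goal_value * success_probability(resources)
    return utility

plans = {
    "go straight for the goal": ["pursue_goal"],
    "gather resources first": ["gather", "gather", "gather", "pursue_goal"],
}

for name, plan in plans.items():
    print(f"{name}: EU = {expected_utility(plan):.2f}")
```

Nothing about the agent's terminal goal mentions resources; gathering them simply scores higher under any goal whose success probability they improve, which is the sense in which the attribute is "naturally occurring" rather than mirrored from us.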