an AI would be human-foolish to take the stupid short-sighted shortcut of trashing us for no reason
You don’t seem to understand how basic reasoning works (by LW standards). AFAICT, you are both privileging your hypothesis, and not weighing any evidence.
(Heck, you’re not even stating any evidence, only relying on repeated assertion of your framing of the situation.)
You still haven’t responded, for example, to my previous point about human-bacterium empathy. We don’t have empathy for bacteria, in part because we see them as interchangeable and easily replaced. If for some reason we want some more E. coli, we can just culture some.
In the same way, a superhuman intelligence that anticipates a possible future use for human beings could always just keep our DNA on file… with a modification or two to make us more pliable.
Your entire argument is based on an enormous blind spot from your genetic heritage: you think an AI would inherently see you as, well, “human”, when out of the space of all possible minds, the odds of a given AI seeing you as worth bothering with are negligible at best. You simply don’t see this, because your built-in machinery for imagining minds automatically imagines human minds—even when you try to make it not do so.
Hell, the human-bacterium analogy is a perfect example: I’m using that example specifically because it’s a human way of thinking, even though it’s unlikely to match the utter lack of caring with which an arbitrary AGI is likely to view human beings. It’s wrong to even think of it as “viewing”, because that supposes a human model.
AIs are not humans, unless they’re built to be humans, and the odds of them being human by accident are negligible.
Remember: evolution is happy to have elephants slowly starve to death when they get old, and to have animals that die struggling and painfully in the act of mating. Arbitrary optimization processes do not have human values.
Stop thinking “intellect” (i.e. human) and start thinking “mechanical optimization process”.
[edit to add: “privileging”, which somehow got eaten while writing the original comment]
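To make “mechanical optimization process” concrete, here is a minimal illustrative sketch (not from the original discussion; the function names and the toy objective are my own invention): a stochastic hill climber that maximizes a single scalar score. Anything not captured by that score, values included, simply does not exist for the process.

```python
import random

def hill_climb(objective, state, neighbors, steps=1000):
    """Stochastic hill climber: repeatedly tries a random neighboring state
    and keeps it if it scores higher on `objective`. The loop has no notion
    of side effects; whatever the objective does not measure is invisible."""
    for _ in range(steps):
        candidate = random.choice(neighbors(state))
        if objective(candidate) > objective(state):
            state = candidate
    return state

# Toy objective: maximize the number of 1s in a bit string.
def count_ones(bits):
    return sum(bits)

# Neighbors: every string reachable by flipping exactly one bit.
def flip_one_bit(bits):
    return [bits[:i] + [1 - bits[i]] + bits[i + 1:] for i in range(len(bits))]

print(hill_climb(count_ones, [0] * 16, flip_one_bit))
```

The point of the toy is only that the loop is indifferent to everything outside `objective`; swap in a different scoring function and it will optimize that just as mechanically.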
you are both your hypothesis?
Here I was assuming that PJ had integrated Descartes and Zen and was trying to understand the deep wisdom behind the koan.
The scary thing is, if I engage “extracting wisdom from koan” mode I can actually feel “you are both your hypothesis, and not weighing any evidence” fitting in neatly with actual insights that fit within PJ’s area of expertise. +1 to pattern matching on noise!
Even scarier thought: suppose that what we think of as intelligence or creativity consists, in simple fact, of pattern matching on random noise? ;-)
Yes, don’t know how that got deleted, because I saw it in there shortly before posting. My copy of Firefox sometimes does odd things during text editing.