Many animals are not smart enough to be social. We are talking about a super-intelligent AGI here. I’ve given several presentations at conferences, including AGI-09 with J Storrs Hall, showing that animals are sociable to the extent that their cognitive apparatus can support it (not to mention the incredible sociality of bees, termites, etc.).
What do we gain from social interactions with dogs? Do we honestly suffer no losses when we mindlessly trash the rain forests? Examples to support my “bare assumptions” are EASY to come by (but thanks for asking—I just wish that other people here would give me examples when I ask).
Enslavement is an excellent short-term solution; however, in the long term it is virtually always a net negative to the system as a whole (i.e. it is selfish and stupid when viewed over the longest term—and more so the more organized and interrelated the system is). Once again, we are talking about a super-intelligence, not short-sighted, stupid humans (who are, nonetheless, inarguably getting better and better with time).
What do we gain from social interactions with dogs? … Examples to support my “bare assumptions” are EASY to come by.
Happy to hear that. Because it then becomes reasonable to assume that you would not find it burdensome to share those examples. We are talking about benefits that a powerful AI would derive from social interactions with humans.
I hope you have something more than the implied analogy of the human-canine relationship. Because there are many other species just as intelligent as dogs with which we humans do not share quite so reciprocal a relationship. And, perhaps it is just my pride, but I don’t really think that I would appreciate being treated like a dog by my AI master. ETA: And I don’t know of any human dog-lovers who keep 6 billion pets.
Because there are many other species just as intelligent as dogs with which we humans do not share quite so reciprocal a relationship.
Absolutely. Because dogs cooperate with us and we with them, and the other species don’t.
And, perhaps it is just my pride, but I don’t really think that I would appreciate being treated like a dog by my AI master.
And immediately the human prejudice comes out. We behave terribly when we’re on top of the pile and expect others to do the same. It’s almost exactly like people who complain bitterly when they’re oppressed and then, once they’re on top, oppress others even worse.
What is wrong with the human-canine analogy (which I thought I did more than imply) is the baggage that you are bringing to that relationship. Both parties benefit from the relationship. The dog benefits less from that relationship than you would benefit from an AGI relationship because the dog is less competent and intelligent than you are AND because the dog generally likes the treatment that it receives (whereas you would be unhappy with similar treatment).
Dogs are THE BEST analogy because they are the closest existing example to what most people are willing to concede is likely to be our relationship with a super-advanced AGI.
Oh, and dogs don’t really have a clue as to what they do for us, so why do you expect me to be able to come up with what we will do for an advanced AGI? If we’re willing to cooperate, there will be plenty of valuable things for us to do that will fulfill our goals as well. We just have to avoid being too paranoid and short-sighted to see it.
The scale is all out.
earthworm --three orders of magnitude--> small lizard --three orders of magnitude--> dog --three orders of magnitude--> human --thirty orders of magnitude--> weakly superhuman AGI --several thousand orders of magnitude--> strong AI
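For a rough sense of what those gaps imply, here is a minimal sketch of the arithmetic, taking the figures above at face value and assuming “several thousand” means 3,000 for concreteness; the numbers are the commenter’s own, not measured quantities.

```python
# Illustrative arithmetic only: the order-of-magnitude figures come from the
# comment above, with "several thousand" assumed to be 3,000 for concreteness.
gaps = [
    ("earthworm", "small lizard", 3),
    ("small lizard", "dog", 3),
    ("dog", "human", 3),
    ("human", "weakly superhuman AGI", 30),
    ("weakly superhuman AGI", "strong AI", 3000),
]

for lower, higher, oom in gaps:
    print(f"{lower} -> {higher}: a factor of 10^{oom}")

total = sum(oom for _, _, oom in gaps)
print(f"earthworm -> strong AI: a factor of 10^{total}")

# The point of the ladder: the dog-to-human gap (10^3) is negligible next to
# the human-to-AGI gap (10^30 and beyond), so "pet" may be far too generous.
```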
If a recursively self-improving process stopped just far enough above us to consider us pets and did so, I would seriously question whether it was genuinely recursive, or whether the gains just came from debugging and streamlining the human thought process. I.e., I could see a self-modifying transhuman acting in the manner you describe. But not an artificial intelligence, not unless it was very carefully designed.
Stop wasting our time.