KMT! Bad analogy. Pushing a bus full of kids towards a precipice is 100% guaranteed to send the bus over said precipice. Pushing AI in the direction it's currently going has never been shown to produce an intelligence that is combative and dangerous, let alone conniving.
You are mistaking the highly theoretical for the physically proven. AI will have no internal motivations, because it has no internal instincts to satisfy the way we do. We have the instincts:
To be comfortable
To eat
To reproduce
To compete for sexual options
To be curious and self-actualize in pursuit of the above
Our biology, not our brains, gives us those instincts. Perhaps if AI could dream, it would have some impetus to act outside of a prompt; but even then, when WE dream, it's our psychology responding to all those instincts. Again, our instincts, and the discomforts they create when they rub against our reality, are the primary motivator. What possible analog exists in AI?
There’s no reason to think AI will do anything other than “sit there” and follow instructions, because it has no internal impulse to do anything. If you think it does have such an impulse, it is up to you to demonstrate a causal mechanism for it, just as I’ve demonstrated the causal mechanisms for human impulses.
Worry about what we will ask AI to do and what those actions will cause. And worry about how best to craft those questions so we do not get an answer we did not want. Your engineers could strike over that, but I think that's best left to IT-community policymakers and government. Let the engineers continue to engineer.