If the AI has no clear understanding of what it is doing and why, and no wider worldview of why to kill, whom to kill, and whom not to, how would one ensure a military AI will not turn against its operators? You can operate a tank and kill the enemy with ASI, but you will not win a war without the traits of more general intelligence, and those traits will also justify (or not) the war and its reasoning. Giving a limited goal without context, especially a gray-area ethical goal that is expected to be obeyed without questioning, is something you can expect from ASI, not from true intelligence. You can operate an AI in a very limited scope this way.
The moral reasoning of reducing suffering has nothing to do with humans. Suffering is bad not because of some arbitrarily chosen axioms of “ought”; suffering is bad because anyone who suffers is objectively in a negative state of being. This is not a subjective abstraction… suffering can be attributed to many creatures, and while human suffering is more complex and deeper, it is not limited to humans.
suffering is bad because anyone who suffers is objectively in a negative state of being.
I believe this sentence reifies a thought that contains either a type error or a circular definition. I could tell you which if you tabooed the words “suffering” and “negative state of being”, but as it stands, your actual belief is so unclear as to be impossible to discuss. I suspect the main problem is that something being objectively true does not mean anyone has to care about it. More concretely, is the problem with psychopaths really that they’re just not smart enough to know that people don’t want to be in pain?