One of my fundamental contentions is that empathy is a requirement for intelligence beyond a certain point because the consequences of lacking it are too severe to overcome.
That is probably because you don’t share a definition of intelligence with most of those here.
Perhaps look through http://www.vetta.org/definitions-of-intelligence/ and see if you can find your position.
Nope. I agree with the vast majority of the vetta definitions.
But let’s go with Marcus Hutter—“There are strong arguments that AIXI is the most intelligent unbiased agent possible in the sense that AIXI behaves optimally in any computable environment.”
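For concreteness (and writing this from memory, so treat the exact notation as approximate rather than authoritative), AIXI’s claim to optimality is that it picks each action to maximize expected total reward under a Solomonoff-style mixture over all computable environments:

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where the $a_i$, $o_i$, $r_i$ are actions, observations, and rewards, $U$ is a universal Turing machine, $\ell(q)$ is the length of program $q$, and $m$ is the horizon. “Optimal” here just means maximizing that expected reward sum, whatever the reward signal happens to be.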
Now, which is the optimal choice: opting to play a positive-sum game of potentially infinite length and utility with cooperating humans, or passing up that game forever for a modest short-term gain?
Assume, for the purposes of argument, that the AGI does not have an immediate pressing need for the gain. (Otherwise we get into a recursion over how pressing the need is; and yes, if the need is pressing enough, then unless the agent’s goal is to preserve humanity, the intelligent thing to do is to take the short-term gain and wipe out humanity. But how would a super-intelligent AGI have gotten itself into that situation?) This should answer all of the questions of the form “Well, what if the AGI had a short-term preference and humans weren’t it?”
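To make the comparison concrete, here is a minimal sketch with made-up numbers (the payoffs and discount factor are purely illustrative assumptions, not anything from Hutter): the discounted value of an open-ended cooperative stream versus a one-shot gain.

```python
# Minimal sketch with hypothetical numbers: compare the discounted value of an
# open-ended positive-sum game with humans against a one-time short-term gain.

def discounted_stream_value(per_round_payoff: float, discount: float) -> float:
    """Value of receiving per_round_payoff every round forever, discounted
    geometrically: the sum over t >= 0 of per_round_payoff * discount**t."""
    assert 0.0 <= discount < 1.0
    return per_round_payoff / (1.0 - discount)

cooperation_per_round = 1.0   # modest per-round gain from cooperating with humans
one_shot_gain = 50.0          # the "modest short-term gain" from defecting once
discount = 0.99               # how heavily the agent values the future

cooperate_forever = discounted_stream_value(cooperation_per_round, discount)
defect_once = one_shot_gain   # the game ends; no further stream of payoffs

print(f"cooperate forever: {cooperate_forever:.1f}")  # 100.0
print(f"defect once:       {defect_once:.1f}")        # 50.0
```

As the discount factor approaches 1 (or the horizon is unbounded), any fixed one-shot payoff is eventually dominated by the cooperative stream; with heavy discounting or a genuinely pressing need, the ordering can flip, which is exactly the caveat above.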
I am jumping in here from Recent Comments, so perhaps I am missing context—but how is AIXI interacting with humanity an infinite positive-sum gain for it?
It doesn’t seem like AIXI could even expect zero-sum gains from humanity: we are using up a lot of what could be computronium.
That definition doesn’t explicitly mention goals. Many of the definitions do explicitly mention goals. What the definitions usually don’t mention is what those goals are, and that permits super-villains, along the lines of General Zod.
If (as it appears) you want to argue that evolution is likely to produce super-saints rather than super-villains, then that’s a bit of a different topic. If you wanted to argue that, then “requirement” was probably the wrong way of putting it.