Good point. Part of my response to this is that I plan to provide sub-arguments leading up to each of the premises in this broad-brush argument, and a lot more detail would be included in those. However, I think you're right that I also need to be clearer about definitions.
In terms of the specific definition of superintelligence, I had in mind Chalmers's definition: “Let us say that AI++ (or superintelligence) is AI of far greater than human level (say, at least as far beyond the most intelligent human as the most intelligent human is beyond a mouse)”, combined with the view (expressed here, for example) that we are interested in optimisation power rather than some more general sense of intelligence.
Right, but general optimization power is also really vague. The amount by which an entity narrows the distribution of potential futures and the degree to which it can control that narrowing are different things, and ‘optimization power’ doesn’t quite seem to respect that distinction unless you dig deeper.
(We could do this for a long time.)
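To make the ambiguity concrete, here is a minimal sketch of the usual way of putting ‘optimization power’ in bits, roughly in the spirit of Yudkowsky’s “Measuring Optimization Power” post. The outcome space, utilities, and the two entities are toy assumptions for illustration only, not anything taken from SI’s writings.

```python
import math

def optimization_power_bits(achieved, outcomes, utility):
    """Bits of optimization exerted: -log2 of the fraction of possible
    outcomes that are at least as preferred as the achieved outcome."""
    at_least_as_good = sum(1 for o in outcomes if utility(o) >= utility(achieved))
    return -math.log2(at_least_as_good / len(outcomes))

# Toy outcome space: the integers 0..1023, with utility equal to the number itself.
outcomes = list(range(1024))

def utility(o):
    return o

# Entity A reliably steers the world to the best outcome (it controls the narrowing).
# Entity B is a frozen script that happened to land on the same outcome once and
# could not repeat the feat if anything changed (it does not control the narrowing).
entity_a_outcome = 1023
entity_b_outcome = 1023

print(optimization_power_bits(entity_a_outcome, outcomes, utility))  # 10.0
print(optimization_power_bits(entity_b_outcome, outcomes, utility))  # 10.0
# Both score 10 bits: the measure registers how far the future was narrowed,
# not how much of that narrowing the entity could control.
```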
This sounds unattractive at the outset, but could one express optimization power in economic terms? A machine super-optimizer would most likely have to become an economic power on its way to subverting the world’s industrial infrastructure. If one can relate optimization power to economic power, then one could make a strong argument for the inability of human civilization to control a machine super-optimizer.
That’s probably true, so let’s not; I take your basic point.
Given that the (initial) aim of the project is to summarise SI’s arguments, is this a point that you think SI has been clear on (and if so, is there any post or paper you can direct me to), or do you think this is a problem with SI’s argument?
I have not engaged with SI’s arguments deeply enough to give them a fair assessment. I know that others think this is a point SI has been unclear on, but it also sounds like SI is moving towards clarity (and a project like this, which finds the holes where claims that should be argued for as conclusions are instead assumed as premises, will help them do that).
Is this in fact a definition? Is there a meaningful way we can comparatively measure the intelligence of both a human and a mouse? I don’t know of any way of comparing that doesn’t already assume humans are smarter (cortical folding, brain-to-body-mass ratio, and so on).
In fact, is there any precise definition of the kind of superintelligence discussed at length here? We can do this through information theory, defining intelligence in terms of optimization power, but how does one relate such definitions to the supposed capabilities of the superintelligences discussed here?
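For reference, the best-developed definition of that information-theoretic kind is Legg and Hutter’s universal intelligence measure, which scores an agent by its expected reward across all computable environments, weighted towards simpler environments; whether, and how, a high score on this measure translates into the world-steering capabilities discussed here is exactly the question left open above.

```latex
% Legg and Hutter's universal intelligence measure:
% \pi is the agent, E the set of computable environments, K(\mu) the
% Kolmogorov complexity of environment \mu, and V_\mu^\pi the expected
% total reward the agent earns in that environment.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```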