Right, but general optimization power is also really vague. The amount that an entity narrows potential future distributions and the amount that an entity can control the narrowing of potential future distributions are different things, which is a distinction ‘optimization power’ doesn’t quite seem to respect, unless you get deeper.
(We could do this for a long time.)
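One way to make that distinction concrete is a toy sketch that treats ‘narrowing’ as entropy reduction over an outcome distribution and ‘control’ as the range of narrowed futures the entity could have steered toward instead. This is only one possible formalization, not necessarily how SI would cash out ‘optimization power’, and the outcome sets below are purely illustrative.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution given as a dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def narrowing(baseline, steered):
    """Bits of 'narrowing': entropy reduction relative to the baseline future."""
    return entropy(baseline) - entropy(steered)

# Baseline: four equally likely futures (2 bits of uncertainty).
baseline = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}

# A thermostat-like process reliably collapses the future onto outcome A...
thermostat_result = {"A": 1.0}
# ...and so does a planning agent. Both achieve the same 2 bits of narrowing.
planner_result = {"A": 1.0}

print(narrowing(baseline, thermostat_result))  # 2.0
print(narrowing(baseline, planner_result))     # 2.0

# The *control* question is different: which narrowed futures could the
# entity have produced instead?  (Hypothetical reachable sets, for illustration.)
thermostat_reachable = [{"A": 1.0}]                    # can only ever hit A
planner_reachable = [{o: 1.0} for o in "ABCD"]         # could steer to any outcome

print(len(thermostat_reachable), len(planner_reachable))  # 1 vs. 4
```

On this toy measure the two processes narrow the future equally, but only the planner’s narrowing reflects something it could have directed elsewhere, which is the control question.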
This sounds unattractive at the outset, but could one express optimization power in economic terms? A machine super-optimizer would most likely have to become an economic power on its way to subverting the world’s industrial infrastructure. If one can relate optimization power to economic power, then one could make a strong argument for the inability of human civilization to control a machine super-optimizer.
That’s probably true, so let’s not; I take your basic point.
Given that the (initial) aim of the project is to summarise SI’s arguments, is this a point that you think SI has been clear on (and if so, is there any post or paper you can direct me to), or do you think this is a problem with SI’s argument?
I have not engaged with SI’s arguments deeply enough to give them a fair assessment. I know that others think this is a point SI has been unclear on, but it also sounds like SI is moving towards clarity (and a project like this, which finds the places where claims that should be argued for as conclusions are instead taken as premises, will help them do that).