A program that does A+B has to be more complicated than a program that does A alone, where A and B are two different, significant sets of problems to solve.
Incorrect. I can write a horrendously complicated program to solve 1+1, and a far simpler program to add any two integers.
Admittedly, neither of those is a particularly significant problem; nonetheless, unnecessary complexity can be added to any program intended to do A alone.
It would be true to say that the shortest possible program capable of solving A+B must be more complex than the shortest possible program to solve A alone, though, so this minor quibble does not affect your conclusion.
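To make the quibble concrete, here is a toy sketch (the Peano-style encoding is my own illustration, not anything from the argument above): the first program computes 1+1 through a deliberately baroque construction, while the second adds any two integers in one line.

```python
# A needlessly complicated program for 1 + 1: encode the naturals as
# nested tuples (a toy Peano construction) and define addition by
# structural recursion.
def successor(n):
    return (n,)

def peano_add(a, b):
    # Recurse on b: a + 0 = a, and a + succ(b) = succ(a + b).
    return a if b == () else successor(peano_add(a, b[0]))

def to_int(n):
    return 0 if n == () else 1 + to_int(n[0])

ONE = successor(())                 # Peano encoding of 1
print(to_int(peano_add(ONE, ONE)))  # -> 2

# ...versus a far simpler program that adds *any* two integers:
def add(a, b):
    return a + b

print(add(1, 1))                    # -> 2
```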
Given 4-6, it is much less complicated to emulate hairyfigment’s liberty-distinguishing faculty than to solve the strong AI problem.
Granted.
Given 7, it is unreasonable to postulate a world where we have solved the strong AI problem in spades, so much so that we have a vastly superhuman AI, but we still haven’t solved the problem of emulating hairyfigment’s liberty-distinguishing faculty.
Why? Just because the problem is less complicated does not mean it will be solved first. A more complicated problem can be solved before a less complicated one, especially if more is known about it.
To clarify, it seems to me that modelling hairyfigment’s ability to decide whether people have liberty is not only simpler than modelling hairyfigment’s whole brain, but is also a subset of that problem. It also seems to me that you have to solve every subset of Problem B before you can be said to have solved Problem B; hence you must have solved the liberty-assessing problem if you have solved the strong AI problem, and hence it makes no sense to postulate a world where you have a strong AI but can’t explain liberty to it.
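Semi-formally, the subset point is just this (a sketch; writing $K(X)$ for the length of the shortest program solving task set $X$ is my own shorthand, not notation from the discussion above):

$$A \subseteq B \;\Rightarrow\; (\text{any program that solves } B \text{ also solves } A) \;\Rightarrow\; K(A) \le K(B),$$

so any solution to the strong AI problem (B) carries a solution to the liberty-assessing problem (A) inside it.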
Hmmm. That’s presumably true of hairyfigment’s brain; however, simulating a copy of any human brain would also be a solution to the strong AI problem. Some human brains are flawed in important ways (consider, for example, psychopaths); given this, it is within the realm of possibility that there exists some human who has no conception of what ‘liberty’ means. Simulating his brain is also a solution to the strong AI problem, but does not require solving the liberty-assessing problem.