I’ll try to lay out my reasoning in clear steps, and perhaps you will be able to tell me where we differ exactly.
1. Hairyfigment is capable of reading Orwell’s 1984 and Banks’ Culture novels, and of identifying that the people in the hypothetical 1984 world have less liberty than the people in the hypothetical Culture world.
2. This task does not require the full capabilities of hairyfigment’s brain; in fact, it requires substantially less.
3. A program that does A+B has to be more complicated than a program that does A alone, where A and B are two different, significant sets of problems to solve. (EDIT: if these programs are efficiently written.)
4. Given 1-3, a program that can emulate hairyfigment’s liberty-distinguishing faculty can be much, much less complicated than a program that can do that plus everything else hairyfigment’s brain can do.
5. If we can simulate a complete human brain, that is the same as having solved the strong AI problem.
6. A program that can do everything hairyfigment’s brain can do is a program that simulates a complete human brain.
7. Given 4-6, it is much less complicated to emulate hairyfigment’s liberty-distinguishing faculty than to solve the strong AI problem.
8. Given 7, it is unreasonable to postulate a world where we have solved the strong AI problem, in spades, so much so that we have a vastly superhuman AI, but we still haven’t solved the problem of emulating hairyfigment’s liberty-distinguishing faculty.
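For what it’s worth, steps 3-7 can be compressed into one line if we write K(T) for the length of the shortest efficiently written program that performs task T; the notation is mine and is only a sketch of the argument above, not an extra premise:

```latex
% Sketch only: K(T) denotes the length of the shortest (efficiently written)
% program that performs task T.
\[
  K(\text{liberty-distinguishing faculty})
    \;\ll\;
  K(\text{everything hairyfigment's brain does})
    \;=\;
  K(\text{whole-brain emulation, i.e.\ one way of solving strong AI})
\]
```

Step 8 then says it would be strange to be handed the right-hand side while still lacking the left.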
...It’s the hidden step where you move from examining two fictions, worlds created to be transparent to human examination, to assuming I have some general “liberty-distinguishing faculty”.
We have identified the point on which we differ, which is excellent progress. I used fictional worlds as examples, but would it solve the problem if I used North Korea and New Zealand as examples instead, or the world in 1814 and the world in 2014? Those worlds or nations were not created to be transparent to human examination, but I believe you do have the faculty to distinguish between them.
I don’t see how this is harder than getting an AI to handle any other context-dependent, natural-language descriptor, like “cold” or “heavy”. “Cold” does not have a single, unitary definition in physics, but it is not that hard a problem to figure out when you should say “that drink is cold” or “that pool is cold” or “that liquid hydrogen is cold”. Children manage it, and they are not vastly superhuman artificial intelligences. (A toy sketch of this kind of context-dependence follows below.)
Hairyfigment, do you mean that detecting liberty in reality is different to, or harder than, detecting liberty in fiction?
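To make the “cold” point concrete, here is a toy sketch; the categories and thresholds are invented purely for illustration and are not meant as a serious model of the word:

```python
# Toy illustration of a context-dependent descriptor: whether something counts
# as "cold" depends on what kind of thing it is, not on one universal cutoff.
# The categories and thresholds below are invented purely for illustration.

COLD_THRESHOLDS_C = {
    "drink": 10.0,              # a drink below ~10 deg C reads as "cold"
    "swimming pool": 20.0,      # a pool below ~20 deg C reads as "cold"
    "liquid hydrogen": -250.0,  # only absurdly low temperatures count here
}

def is_cold(kind: str, temperature_c: float) -> bool:
    """Return True if temperature_c counts as cold for this kind of thing."""
    return temperature_c < COLD_THRESHOLDS_C[kind]

print(is_cold("drink", 5.0))               # True
print(is_cold("swimming pool", 25.0))      # False
print(is_cold("liquid hydrogen", -252.0))  # True
```

The point is only that context-sensitivity by itself is not the hard part; once the contexts are identified, even a small lookup table copes, and children certainly do.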
A program that does A+B has to be more complicated than a program that does A alone, where A and B are two different, significant sets of problems to solve.
Incorrect. I can write a horrendously complicated program to solve 1+1, and a far simpler program to add any two integers (see the sketch after this reply).
Admittedly, neither of those is a particularly significant problem; nonetheless, unnecessary complexity can be added to any program intended to do A alone.
It would be true to say that the shortest possible program capable of solving A+B must be more complex than the shortest possible program to solve A alone, though, so this minor quibble does not affect your conclusion.
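To illustrate the 1+1 point above (toy code; the needless complexity is the whole point):

```python
# A needlessly complicated way to compute 1 + 1: build the answer by counting
# through nested loops, with a redundant check that changes nothing.
def convoluted_one_plus_one() -> int:
    total = 0
    for increment in [1, 1]:
        for _ in range(increment):
            if increment > 0:  # always true here; pure ceremony
                total += 1
    return total

# A far simpler program that solves the strictly larger problem of adding
# any two integers.
def add(a: int, b: int) -> int:
    return a + b

print(convoluted_one_plus_one())  # 2
print(add(1, 1))                  # 2
print(add(-7, 40))                # 33
```

The simpler program solves the more general problem, which is exactly why the claim needs the “shortest possible” (or at least “efficiently written”) qualifier.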
Given 4-6, it is much less complicated to emulate hairyfigment’s liberty-distinguishing faculty than to solve the strong AI problem.
Granted.
Given 7, it is unreasonable to postulate a world where we have solved the strong AI problem, in spades, so much so that we have a vastly superhuman AI, but we still haven’t solved the problem of emulating hairyfigment’s liberty-distinguishing faculty.
Why? Just because the problem is less complicated does not mean it will be solved first. A more complicated problem can be solved before a less complicated one, especially if more is known about it.
To clarify: it seems to me that modelling hairyfigment’s ability to decide whether people have liberty is not only simpler than modelling hairyfigment’s whole brain, but is also a subset of that problem. It also seems to me that you have to solve all subsets of Problem B before you can be said to have solved Problem B; hence you have to have solved the liberty-assessing problem if you have solved the strong AI problem, and hence it makes no sense to postulate a world where you have a strong AI but can’t explain liberty to it.
Hmmm. That’s presumably true of hairyfigment’s brain; however, simulating a copy of any human brain would also be a solution to the strong AI problem. Some human brains are flawed in important ways (consider, for example, psychopaths). Given this, it is within the realm of possibility that there exists some human who has no conception of what ‘liberty’ means. Simulating his brain would also be a solution to the strong AI problem, but would not require solving the liberty-assessing problem.