The modern human nervous system is the result of upwards of a hundred thousand years of brutal field-testing, and its basic components, even whole submodules, can be traced back further still. A certain amount of resilience is to be expected. If you want to start from scratch and aspire to the same or higher standards of performance, it might be sensible to prepare to invest the same amount of time and capital that the BIG (evolution) did.
That you have not yet been crippled by a moral paradox or other standard rhetorical trick is comparable to a server remaining secure after a child has spent an afternoon poking at it with lists of default passwords: a good sign, certainly, and a test many would fail, but not in itself proof of perfection.
Indeed, ROBUST ranks very high on the list of things we can expect evolved brains to be. (“Rational” is actually rather hard to come by. To some degree rationality improves fitness, but its cost often outweighs its benefit; hence the sea slug.)
Additionally, people throw away problems they can’t solve, or whose specifics lie beyond their limits. A badly designed AI system wouldn’t have that option and so would be paralyzed by calculation.
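For concreteness, here is a minimal sketch in Python of what having that option looks like; solve_step is a hypothetical callable that does one unit of work on the problem, and the budget numbers are arbitrary. The only point is that giving up is a first-class outcome rather than an impossibility:

    import time

    class GiveUp(Exception):
        """Raised when a problem costs more than we are willing to spend on it."""

    def bounded_solve(problem, solve_step, budget_seconds=1.0, max_steps=10_000):
        # Work on the problem one step at a time, but keep "throw it away"
        # available as an outcome, the way a human does.
        deadline = time.monotonic() + budget_seconds
        state = problem
        for _ in range(max_steps):
            if time.monotonic() > deadline:
                raise GiveUp("time budget exhausted; discarding problem")
            done, state = solve_step(state)  # hypothetical: one unit of work
            if done:
                return state
        raise GiveUp("step limit reached; discarding problem")

A system built this way simply moves on when a problem is too expensive; one without the GiveUp path grinds forever.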
I agree with the commenter above who said that the best safeguard against anything like this is an AI system with checks and balances that automatically throws out certain problems. In the abstract, that might conceivably be bad; in the real world, it probably won’t be. “Probably” isn’t very inspiring or logically compelling, but I think it’s the best we can do.
Unless, that is, we design the first AI system with a complex goal system oriented around fixing itself, one that basically boils down to: “do your best to find and solve any problems or contradictions within your system, ask for our help whenever you are unsure of an answer, then design a computer which can do the same task better than you, and so on; then have the final computer begin the actual work of an AI.” The thought comes from Douglas Adams’ Hitchhiker books; I forget the names of the computers, but it doesn’t matter.
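That goal system reads almost like a loop already. A minimal sketch, again in Python, assuming hypothetical primitives find_contradiction, attempt_fix, ask_human, design_successor, and is_good_enough (each of which is, of course, where all the actual difficulty lives):

    def bootstrap(system, find_contradiction, attempt_fix, ask_human,
                  design_successor, is_good_enough, generations=10):
        # Each generation: debug yourself, ask for help when stuck, then
        # hand off to a successor that can do the same task better.
        for _ in range(generations):
            # "Find and solve any problems or contradictions within your system."
            while (flaw := find_contradiction(system)) is not None:
                fix = attempt_fix(system, flaw)
                if fix is None:
                    # "Ask for our help whenever you are unsure of an answer."
                    fix = ask_human(system, flaw)
                system = fix
            if is_good_enough(system):
                break
            # "Design a computer which can do the same task better than you."
            system = design_successor(system)
        return system  # the final computer begins the actual work of an AI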
To anyone who says it’s impossible or infeasible to implement something like this: note that having one biased computer attempt to correct its own biases and create a less biased computer is, in all relevant ways, equivalent to having one biased human attempt to correct their own biases and create a less biased computer.