Not to put too fine a point on it, but many of the posters here have never sat through a single class on AI of any kind, never read any books on actually programming AIs, never touched any of the common tools in the current state of the art, and never learned about any existing or historical AI designs. Yet as long as they go along with the flow and stick to the right Applause Lights, nobody demands that they go off and read papers on the subject.
As a result, conversations here can occasionally be frustratingly similar to theological discussions: posters who’ve read pop-comp-sci material like Superintelligence simply assume that an AI will instantly gain any capability, up to and including things which would require more energy than exists in the universe, or more computational power than would be available from turning every atom in the universe into computronium.
Beware of isolated demands for rigor.
http://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/
Superintelligence is a broad overview of the topic without any aspirations for rigor, as far as I can tell, and it is pretty clear about that.
“who simply assume that an AI will instantly gain any capability up to and including things which would require more energy than there exists in the universe or more computational power than would be available from turning every atom in the universe into computronium”

This seems uncharitable. The outside view certainly backs up something like: “every jump in intelligence opens up new, previously unexpected and unimaginable sources of energy.”
Examples: fire, fossil fuels, nuclear. The same applies to computational power. There is no clear reason for this correlation to disappear. Thus what we would currently deem “more energy than there exists in the universe” or “more computational power than would be available from turning every atom in the universe into computronium” might reflect our limited understanding of the Universe, rather than any kind of genuine limits.
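One crude way to make those historical jumps concrete is per-kilogram energy density (for fossil fuels the bigger change was the total quantity that became accessible, but for nuclear even the per-kilogram figure jumps by orders of magnitude). The sketch below is purely illustrative, using rough approximate figures of my own rather than anything from the discussion above:

```python
# Rough, order-of-magnitude energy densities in MJ per kilogram (approximate figures).
sources = {
    "dry wood (fire)":            16,          # roughly 15-18 MJ/kg
    "coal (fossil fuel)":         27,          # roughly 24-30 MJ/kg
    "uranium-235 (full fission)": 80_000_000,  # roughly 80 TJ/kg
}

baseline = sources["dry wood (fire)"]
for name, mj_per_kg in sources.items():
    print(f"{name:30s} ~{mj_per_kg:>12,} MJ/kg  (~{mj_per_kg / baseline:,.0f}x wood)")
```

Each step is less an incremental improvement than a change of scale, which is the pattern being pointed at.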
The current state of the art doesn’t get anywhere close to the kind of general-purpose intelligence that (hypothetically but plausibly) might make AI either an existential threat or a solution to a lot of the human race’s problems.
So while I enthusiastically endorse the idea of anyone interested in AI finding out more about actually-existing AI research, either (1) the relevance of that research to the scenarios FAI people are worried about is rather small, or (2) those scenarios are going to turn out never to arise. And, so far as I can tell, we have very little ability to tell which.
(Perhaps the very fact that the state of the art isn’t close to general-purpose intelligence is evidence that those scenarios will never arise. But it doesn’t seem like very good evidence for that. We know that rather-general-purpose human-like intelligence is possible, because we have it, so our inability to make computers that have it is a limitation of our current technology and understanding rather than anything fundamental to the universe, and I know of no grounds for assuming that we won’t overcome those limitations. And the extent of variations within the human species seems like good reason to think that actually-existing physical things can have mental capacities well beyond the human average.)
It’s still extremely relevant, since AI researchers have to grapple with watered-down versions of many of the exact same problems.
You might be concerned that a non-FAI will optimize for some scoring function and do things you don’t want; the people dealing with the actual nuts and bolts of building modern AIs face the same worry on a more mundane level, since they too want to make sure their systems don’t optimize for some scoring function and do things they don’t want. That kind of problem appears in the first few pages of many AI textbooks, yet the applause lights here hold that almost all AI researchers are blind to such possibilities.
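To make that mundane version of the problem concrete, here is a deliberately toy sketch (the scoring function, the actions, and the numbers are all invented for illustration, not taken from any real system): an optimizer pointed at a proxy score finds a loophole rather than the intended behaviour.

```python
# Toy sketch of a misspecified objective: the intended goal is a clean floor,
# but the proxy that actually gets optimized rewards "units of dirt collected".
import itertools

def proxy_score(actions):
    """Return the proxy reward earned by a fixed sequence of actions."""
    score, dirt = 0, 3              # three units of dirt on the floor to start
    for a in actions:
        if a == "collect" and dirt > 0:
            dirt -= 1
            score += 1
        elif a == "dump":           # loophole: dumping puts dirt back on the floor
            dirt += 1
    return score

# Brute-force search over all 6-step plans stands in for "training an agent".
best = max(itertools.product(["collect", "dump", "idle"], repeat=6), key=proxy_score)
print(best, proxy_score(best))
# The winning plan dumps dirt back out just to re-collect it, beating honest
# cleaning on the proxy; with a longer horizon it would loop dump/collect forever.
```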
There’s no need to convince me that general AI is possible in principle. We can use the same method to prove that nanobots and self-replicating von Neumann machines are perfectly possible, but we’re still a long way from actually building them.
It’s just frustrating: like watching someone try to explain why formally proving code correct is important for the control software of a nuclear reactor (extremely true) who has no idea how code is proven, has never written even a hello-world program, and every now and then talks as if exception handling were unknown to programmers. They’re making a reasonable point, but they mix their language with references to magic and occasional absurdities.
Yeah, I understand the frustration.
Still, if someone is capable of grasping the argument “Any kind of software failure in the control systems of a nuclear reactor could have disastrous consequences; the total amount of software required isn’t too enormous; therefore it is worth going to great lengths, including formal correctness proofs, to ensure that the software is correct” then they’re right to make that argument even if their grasp of what kind of software is used for controlling a nuclear reactor is extremely tenuous. And if they say “… because otherwise the reactor could explode and turn everyone in the British Isles into a hideous mutant with weird superpowers” then of course they’re hilariously wrong, but their wrongness is about the details of the catastrophic disaster rather than the (more important) fact that a catastrophic disaster could happen and needs preventing.
That’s absolutely true, but it leads to two problems.
First: the obvious lack of experience with, and understanding of, the nuts and bolts of the subject makes outsiders less likely to take the realistic parts of the warning seriously, and may even lead to it being viewed as a subject of mockery, which works against you.
Second: the failure modes that people suggest due to their lack of understanding can also be hilariously wrong, like “a breach may be caused by the radiation giving part of the shielding magical superpowers, which will then cause it to gain life and open a portal to R’lyeh”, and they may even spend many paragraphs talking about how serious that failure mode is, while others who also don’t actually understand politely applaud. This has some of the same unfortunate side effects: it makes people who are totally unfamiliar with the subject less likely to take the realistic parts of the warning seriously.