What did he optimize, beyond being able to make some income in a dubious way? Ultimately, such definitions are pretty useless for computationally bounded processes. Some tasks nowadays involve choosing between very few alternatives, thanks to the “ready-to-eat” options premade by other people, but by and large the interesting tasks are the ones where you have to generate an action from an enormous number of alternatives.
edit: actually I commented on a related topic. That is, by the way, why I don’t think EY is particularly intelligent. Maybe he’s optimizing what he posts for appearance instead of predictive power, though, in which case, okay, he’s quite smart. Ultimately, in my eyes, he’s either a not-very-bright philosopher or a quite bright sociopath; I’m not sure which.
Just to be sure I understand you:
You agree that Eliezer often does well at optimizing for problems with a small answer space (say 10 options), but what you are measuring is instead the ability to perform in situations with a very large answer space (say, 10^100 options), and you don’t see any evidence of that latter ability?
Could you point to some examples that DO demonstrate that latter ability? I’m genuinely curious what sort of resources are available for handling that sort of “large answer space”, and what it looks like when someone demonstrates that sort of intelligence, because it’s exactly the sort of intelligence I tend to be interested in.
I’d definitely agree that a big obstacle a lot of smart people run into is being able to quickly and accurately evaluate a large answer space. I’m not convinced either way on where Eliezer falls on that, though, since I can’t really think of any examples of what it looks like to succeed there.
I can only recall examples where I thought someone clearly had problems, or examples where someone solved it by consolidating the problem to a much smaller answer space (e.g., solving “how to meet women” by memorizing a dozen pickup routines).
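To make the “small answer space versus 10^100 options” contrast concrete, here is a rough back-of-envelope sketch (the scoring function and the evaluation rate are made-up numbers, purely for illustration): exhaustively scoring ten candidates is trivial, while even an absurdly fast brute-force search over 10^100 candidates is hopeless, which is why handling a large answer space has to mean pruning or restructuring the space rather than searching it.

```python
def score(option):
    """Toy stand-in for 'evaluate one candidate answer' (made up)."""
    return -abs(option - 7)  # pretend the best answer happens to be 7

# Small answer space: exhaustive evaluation is trivial.
best = max(range(10), key=score)
print("best of 10 options:", best)

# Large answer space: even wildly optimistic brute force is hopeless.
candidates = 10 ** 100   # hypothetical number of alternatives
rate = 10 ** 18          # candidates evaluated per second (very generous)
seconds = candidates / rate
years = seconds / 3.15e7  # roughly 3.15e7 seconds in a year
print(f"brute force would take ~{seconds:.1e} s, i.e. ~{years:.1e} years")
```

Of course, the interesting question is what replaces brute force (heuristics, abstraction, reframing), which is exactly the “consolidating the problem to a much smaller answer space” move described above.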
Presenting a complex argument requires a whole host of sub-skills.
I take from this and the rest of your comment that you have motivated yourself (for some reason) into marginalizing EY and his work. I’ve no particular stake in defending EY: whether or not he is intelligent (and it’s highly probable he’s at least baseline human, all things reasonably considered), his work has been useful to me and to others, and that’s all that really matters.
On the other hand, you’re uncharitable and unnecessarily derogatory.
Nowadays, with the internet, you can reach a billion people; there’s a lot of self-selection in the audience.
He’s spreading utter nonsense similar in nature to anti-vaccination campaigning. Computational technology is important to medicine, and the belief cluster of “AI etc. is going to kill us all” has already resulted in bombs being sent to people. No, I am not going to be charitable to a person with a real talent for presenting (not as fiction, but as ‘science’) completely misinformed BS that, if he ever gains traction, will inspire more of this. I’m not charitable to any imams, any popes, any priests, or any cranks. Suppose he were an “autodidact” biochemist (with no accomplishments in biochemistry) telling people about chemical dangers picked from science fiction (and living off donations to support his ‘research’). CS is not any simpler than biochemistry. I’m afraid we cannot afford a politeness bias on such issues.
There is actual world-dangerous work going on in biochemistry. Every single day, people work with Ebola, Marburg, bird and swine flu, and a host of other deadly diseases that have the potential to wipe out huge portions of humanity. All of this is treated EXTREMELY seriously, with quarantines, regulations, laws, and massively redundant safety procedures. This is to protect us from things like Ebola outbreaks in New York that have never happened outside of science fiction. If CS is not any simpler than biochemistry, and yet NO ONE is taking its dangers as seriously as those of biochemistry, then maybe there SHOULD be someone talking about “science fiction” risks.
Perhaps you should instead update on the fact that the experts in the field are clearly not reckless morons who could be corrected by ignorant outsiders, in the case of biochemistry, and probably in the case of CS as well.
I think we are justified, as a society, in taking biological risks much more seriously than computational risks.
My sense is that, in practice, programming is much simpler than biochemistry. With software, we are typically working within a completely designed environment, one designed to be easy for humans to reason about. We can do correctness proofs for software; we can’t do anything like that for biology.
Programs basically stay put the way they are created; organisms don’t. For practical purposes, software never evolves; we don’t have a measurable rate of bit-flip errors or the like resulting in working-but-strange programs. (And we have good theoretical reasons to believe this will remain true.)
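As a toy illustration of the point that random corruption of software doesn’t produce “working-but-strange” programs (a minimal sketch, flipping characters in source text rather than bits in a binary, and not a claim about any real deployed system): almost every random mutant of even a tiny program simply fails to run.

```python
import random

SOURCE = "def f(x):\n    return 3 * x + 1\n"  # a tiny, arbitrary program

def mutate(src):
    """Replace one randomly chosen character with a random printable one."""
    i = random.randrange(len(src))
    return src[:i] + chr(random.randrange(32, 127)) + src[i + 1:]

working, trials = 0, 1000
for _ in range(trials):
    mutant = mutate(SOURCE)
    env = {}
    try:
        exec(mutant, env)  # does the mutant still parse and define f?
        env["f"](10)       # does calling it still succeed?
        working += 1
    except Exception:
        pass               # most mutants die: syntax errors, name errors, ...

print(f"{working}/{trials} random mutants still ran")
```

The few survivors are almost all trivial (the replacement happened to be another legal character, or even the same one), not interestingly different programs, whereas biological mutation plus selection routinely produces viable variants.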
If a virulent disease does break loose, we have a hard time countering it, because we can’t re-engineer our bodies. But we routinely patch deployed computer systems to make them resistant to particular instances of malware. The cost of a piece of experimental malware getting loose is very much smaller than with a disease.
The entire point of researching self-improving AI is to move programs from the world of software that stays put the way it was created, never evolving, into a world we don’t directly control.
Yes. I think the skeptics don’t take self-improving AI very seriously. Self-modifying programs in general are too hard to engineer, except in very narrow, specialized ways. A self-modifying program that rapidly achieves across-the-board superhuman ability seems like a fairy tale, not a serious engineering concern.
If there were an example of a program that self-improves in any nontrivial way at all, people might take this concern more seriously.
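For a sense of what the “very narrow, specialized” kind of self-modification we do know how to build looks like, here is a minimal, entirely hypothetical sketch (names and numbers invented for illustration): a program that “improves itself” only by tuning one numeric knob inside itself against a fixed objective.

```python
import random

def performance(step_size):
    """Fixed, externally given objective (toy stand-in): peaks at 0.3."""
    return -(step_size - 0.3) ** 2

class NarrowSelfImprover:
    """'Self-improving' in the narrowest sense: it perturbs a single internal
    parameter and keeps the change only if the fixed objective improves."""

    def __init__(self):
        self.step_size = 1.0  # the one piece of itself it is allowed to change

    def improve_once(self):
        candidate = self.step_size + random.uniform(-0.1, 0.1)
        if performance(candidate) > performance(self.step_size):
            self.step_size = candidate

agent = NarrowSelfImprover()
for _ in range(2000):
    agent.improve_once()
print(f"tuned step_size: {agent.step_size:.3f}")  # ends up near 0.3
```

Nothing in that loop can alter its own code, its search procedure, or the objective it is scored against; the gap between this kind of parameter tuning and a program that rapidly rewrites itself into across-the-board superhuman ability is the gap the skeptics are pointing at.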
While Ebola outbreaks in New York haven’t happened, Ebola is a real disease, and we know exactly what it would do if there were an outbreak there. In all these cases we have a pretty good handle on what the diseases would do, and we’ve seen extreme examples in history, such as the Black Death wiping out much of Europe. That seems like a genuinely distinct situation: no one has seen any form of serious danger from AI in the historical or present-day world.
http://en.wikipedia.org/wiki/Stuxnet
If anything, that underscores the point even more: in the small sample we do have, these things haven’t done much damage beyond the narrow damage they were programmed to do. So the essential point that we haven’t seen any serious danger from AI seems valid. (Although there’s been some work on making automated exploit searchers, which, conceivably attached to something like Stuxnet with a more malevolent goal set, could be quite nasty.)