I’m curious if you think Ben’s beliefs about AI “benevolence” are likely to be more accurate than SIAI’s, and if so why. Can you make a similar graph for Ben Goertzel (or just give a verbal explanation if that’s more convenient)?
Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that intuitions act as the puppetmaster.
The graph for Ben would probably include more propagation from nodes for the actual design he has in mind, a learning AI, and from computational complexity theory (for example, I’m pretty sure Ben understands all those points about prediction vs. the butterfly effect, about exponential-time tasks improving by at most 2x even when the available power is to mankind as mankind is to one amoeba, etc.; it really is very elementary stuff). So would a graph for anyone competent in that field. Ben is building a human-like-enough AI. SIAI is reinventing religion as far as I can see; there is no attempt to work out what limitations an AI could have. Any technical counterargument is rationalized away, while any pro argument, no matter how weak, how privileged as a hypothesis, or how vague, is taken as something which has to be conclusively disproved. The vague stuff has to be defined by whoever wants to disprove it. Same as for any religion, really.
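To make the complexity point concrete, here is a minimal sketch of the arithmetic, with illustrative numbers of my own choosing (the baseline budget and the “mankind to amoeba” multiplier below are assumptions, not measured figures): for a task whose cost grows like 2^n, even an astronomical increase in computing power buys less than a doubling of the instance size you can handle.

```python
import math

# Assume a task costs 2**n basic operations for an instance of size n.
# With a budget of B operations, the largest feasible size is floor(log2(B)).
# Multiplying the budget by a factor K therefore adds only log2(K) to that size.

def max_feasible_size(budget_ops: float) -> int:
    """Largest n such that 2**n <= budget_ops."""
    return int(math.floor(math.log2(budget_ops)))

baseline_ops = 1e15      # assumed baseline budget (illustrative)
amplification = 1e12     # assumed "mankind is to one amoeba" style multiplier

n_before = max_feasible_size(baseline_ops)                  # -> 49
n_after = max_feasible_size(baseline_ops * amplification)   # -> 89

print(n_before, n_after, n_after / n_before)
# A 10**12-fold increase in power moves the feasible instance size from 49 to 89,
# i.e. less than a 2x improvement for an exponential-time task.
```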
Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that intuitions act as the puppetmaster.
Yes, this did cause me to take him more seriously than before.
The graph for Ben would probably include more propagation from nodes for the actual design he has in mind, a learning AI
That doesn’t seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it’s unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)
and from computational complexity theory
I’m almost certain that Eliezer and other researchers at SIAI know computational complexity theory, but disagree with your application of it. The rest of your comment seems to be a rant against SIAI rather than a comparison of the sources of SIAI’s beliefs with Ben’s, so I’m not sure how it helps to answer the question I asked.
Based on what you’ve written, I don’t see a reason to think Ben’s intuitions are much better than SI’s. Assuming, for the sake of argument, that Ben’s intuitions are somewhat, but not much, better, what do you think Ben, SI, and bystanders should each do at this point? For example should Ben keep trying to build OpenCog?
Yes, this did cause me to take him more seriously than before.
Note also that the meta is the only thing the people behind SIAI have any somewhat notable experience with (rationality studies). It is a very bad sign that they get beaten on meta by someone whom I had previously evaluated as a dramatically overoptimistic (in terms of AI’s abilities) AI developer.
That doesn’t seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it’s unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)
That’s evidence that Ben’s understanding is still not good enough, and all the more evidence that SIAI’s is dramatically not good enough.
I’m almost certain that Eliezer and other researchers at SIAI know computational complexity theory
“Almost certain” is an interesting thing here. With any AI researcher who has made something usable (like Ben’s bio-assay analysis), you can be far more certain. There are a lot of people in the world to pick from, and there will be a few for whom your ‘almost certain’ fails. If you are discussing one person, and the choice of that person is not independent of the failure of ‘almost certain’ (it is not independent if you pick by the person’s opinions), then you may easily overestimate.
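Here is a minimal sketch of that selection effect using Bayes’ rule, with made-up numbers purely for illustration (the base rate and the two conditional probabilities below are assumptions): being 95% sure about a randomly chosen researcher is very different from being 95% sure about a researcher you picked because of an unusual opinion, if that opinion is more common among people who don’t know the material.

```python
# Bayes' rule with assumed, illustrative numbers.
p_knows = 0.95             # base rate: a researcher knows the material
p_opinion_if_knows = 0.02  # P(holds the unusual opinion | knows it)
p_opinion_if_not = 0.30    # P(holds the unusual opinion | does not know it)

p_opinion = p_knows * p_opinion_if_knows + (1 - p_knows) * p_opinion_if_not
p_knows_given_opinion = p_knows * p_opinion_if_knows / p_opinion

print(round(p_knows_given_opinion, 2))
# -> 0.56: once the person was selected by the opinion itself,
# the original 95% "almost certain" drops to roughly a coin flip.
```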
Based on what you’ve written, I don’t see a reason to think Ben’s intuitions are much better than SI’s.
I think they are much further along toward being better, in the sense that no one at SI could probably get there without spending a decade or two studying, while still ultimately falling way short of being any good. In any case, keep in mind that Ben’s intuitions are about Ben’s project and come from working on it; there’s good reason to think that if his intuitions are substantially bad, he won’t make any AI. SI’s intuitions are about what? Handwaving about unbounded idealized models (‘utility maximizer’ taken far too literally, I guess once again because if you don’t understand algorithmic complexity you don’t understand how little relation there can be between an idealized model and practice). Misunderstanding of how Solomonoff induction works (or what it even is). And so on.
I’m sure they know it. It’s just that since they don’t do much actual coding, it’s not all that available to them.
I would guess they heard about it, maybe read a wiki article sometime, and that’s all. Can you link a statement by either of them where they write about AI having a limitation? What else do they have? I only see handwaving, mostly indicative of them not really even knowing the words they use. If they had degrees I would have to assume they probably, sometime in the past, passed an exam (which is not good evidence of competence either, but at least it is something).
edit: To clarify, I am not referring to the “research associates”; see the grandchild post.
Speaking of which, a couple of days ago I noticed that a technobabble justification of atheism on the grounds that theism fails “Solomonoff induction”, which I had seen before and judged to be complete idiocy (I am an atheist too), is by the same Luke as at SIAI. Not only does he not know what Solomonoff induction is, he also lacks the wits to know that he doesn’t know.
He has heard about it, though, and uses it as technobabble slightly better than script writers would. Ultimately, SIAI people are very talented technobabble generators, and that seems to be the extent of it. I don’t give the slightest benefit of the doubt once I see that people produce technobabble. (I used to in the past, which resulted in me reading sense into nonsense; because of the ambiguity of human language, you can form sentences such that a statement is actually generated when someone reads your sentence, without ever having generated that statement yourself.)
If you want to change my view, you had better actually link some posts that are evidence of them knowing something, instead of calling what I say a ‘rant’.
If they had degrees I would have to assume they probably, sometime in the past, passed an exam (which is not good evidence of competence either, but at least it is something).
I count only 1 out of 11 SIAI researchers as not having a degree. (Paul Christiano’s bio hasn’t been updated yet, but he told me he just graduated from MIT.) Click these links if you want to check for yourself:
http://singinst.org/research/residentfaculty
http://singinst.org/aboutus/researchassociates
If you want to change my view, you had better actually link some posts that are evidence of them knowing something, instead of calling what I say a ‘rant’.
I no longer have much hope of changing your views, but rather want to encourage you to make some positive contributions (like your belief propagation graph idea) despite having views that I consider to be wrong. (I can’t resist pointing out some of the more blatant errors though, like the above.)
In the resident faculty I see two people: Eliezer, and someone with an unspecified degree in mathematics plus two years of work somewhere else.
In the associates I see people whose extent of association with, or agreement with, the position I do not know. edit: I do know, though, that Ben has been there, and Kurzweil too, people with views very different from those of SIAI (or now SI).
As the most publicly visible I see Luke and Eliezer. edit: they are whom I refer to as “them”, as the rest looks like replaceable chaff chosen for not disagreeing with the core.