Here’s one suggestion: focus on the causes of the intuition. If the intuition is based on something we would accept as rational evidence once it was suitably cleaned up and put into rigorous form, then we should regard that as an additional argument for whatever the intuition favors. If the intuition is based on subject matter we would disregard in other circumstances, or on flawed reasoning, then we can regard that as evidence against it.
This is a little abstract, so I’ll give a two-part example:
recently there’s been a lot of research into the origins of religious belief, focusing on intuitive versus analytical styles of thinking. To the extent that explicit analytical thought is superior at truth-gathering, we should take this as evidence for atheism and against theism.
This area of research has also looked at when religious belief develops, and there’s evidence that the core of religious belief is formed in childhood: children ascribe agency to all sorts of observations, while withholding agency attributions is a more difficult, learned, adult way of thinking (and, as things like the gambler’s fallacy show, one that is often not learned even then). To the extent that we trust adult thinking over childhood thinking, we will again regard this as evidence against theism and for atheism.
So, what is the origin of intuitions about things like AI and the future performance of machines...? (I’ll just note that I’ve seen a little evidence that young children are also vitalists.)
Hanson saying the same:
For example, if there were such a thing as a gene for optimism versus pessimism, you might believe that you had an equal chance of inheriting your mother’s optimism gene or your father’s pessimism gene. You might further believe that your sister had the same chances as you, but via an independent draw, and following Mendel’s rules of inheritance. You might even believe that humankind would have evolved to be more pessimistic, had they evolved in harsher environments. Beliefs of this sort seem central to scientific discussions about the origin of human beliefs, such as occur in evolutionary psychology. [...]
Consider, for example, two astronomers who disagree about whether the universe is open (and infinite) or closed (and finite). Assume that they are both aware of the same relevant cosmological data, and that they try to be Bayesians, and therefore want to attribute their difference of opinion to differing priors about the size of the universe. This paper shows that neither astronomer can believe that, regardless of the size of the universe, nature was equally likely to have switched their priors. Each astronomer must instead believe that his prior would only have favored a smaller universe in situations where a smaller universe was actually more likely. Furthermore, he must believe that the other astronomer’s prior would not track the actual size of the universe in this way; other priors can only track universe size indirectly, by tracking his prior. Thus each person must believe that prior origination processes make his prior more correlated with reality than others’ priors.
As a result, these astronomers cannot believe that their differing priors arose due to the expression of differing genes inherited from their parents in the usual way. After all, the usual rules of genetic inheritance treat the two astronomers symmetrically, and do not produce individual genetic variations that are correlated with the size of the universe.
This paper thereby shows that agents who agree enough about the origins of their priors must have the same prior.
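To make the quoted argument concrete, here is a toy numeric sketch (my own illustration, not Hanson’s formal machinery; the 0.8/0.2 priors and the “straight”/“switched” assignment labels are assumptions made up for the example). If astronomer 1 believes nature assigned the two priors independently of the universe’s size, then learning which prior he received cannot move his probability of openness at all, so that belief cannot deliver his actual open-favoring prior in one case and the closed-favoring prior in the other; any joint belief that does deliver both has to correlate the assignment with the actual size, i.e. he has to believe the origin of his prior tracks reality.

    # Toy sketch with made-up numbers; S = universe size, A = how nature
    # assigned the priors ("straight": astronomer 1 gets the open-favoring
    # prior, "switched": he gets the closed-favoring one).

    def cond_open(joint, a):
        """P(S = open | A = a) under a joint distribution over (S, A)."""
        num = joint[("open", a)]
        den = joint[("open", a)] + joint[("closed", a)]
        return num / den

    # Case 1: assignment believed symmetric and independent of the size.
    independent = {
        ("open", "straight"): 0.25, ("open", "switched"): 0.25,
        ("closed", "straight"): 0.25, ("closed", "switched"): 0.25,
    }
    print(cond_open(independent, "straight"))  # 0.5
    print(cond_open(independent, "switched"))  # 0.5
    # Conditioning on which prior he got changes nothing, so it cannot yield
    # 0.8 in one case and 0.2 in the other.

    # Case 2: a joint that does yield 0.8 given "straight" and 0.2 given
    # "switched" must correlate the assignment with the universe's size.
    tracking = {
        ("open", "straight"): 0.40, ("open", "switched"): 0.10,
        ("closed", "straight"): 0.10, ("closed", "switched"): 0.40,
    }
    print(cond_open(tracking, "straight"))  # 0.8
    print(cond_open(tracking, "switched"))  # 0.2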
Here’s one suggestion: focus on the causes of the intuition.
So, what is the origin of intuitions about things like AI and the future performance of machines...? (I’ll just note that I’ve seen a little evidence that young children are also vitalists.)
I’ve posted about that (as Dmytry): the belief propagation graph, which shows which paths can’t be the cause of the intuitions because the propagation delay would be too long. That was one of the things that convinced me that trying to explain anything to LW is a waste of time, and that critique without explanation is more effective: explanatory critique gets rationalized away, while critique of the form “you suck” makes people think (a little) about what caused that impression and examine themselves somewhat, in a way they don’t when given an actual, detailed explanation.
I’m curious whether you think Ben’s beliefs about AI “benevolence” are likely to be more accurate than SIAI’s, and if so, why. Can you make a similar graph for Ben Goertzel (or just give a verbal explanation if that’s more convenient)?
Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that intuitions act as the puppetmaster.
The graph for Ben would probably include more paths leading from nodes for the actual design he has in mind (a learning AI) and from computational complexity theory (for example, I’m pretty sure Ben understands the points about prediction versus the butterfly effect, and about exponential tasks improving by at most roughly 2x even when computing power grows by as much as the ratio of mankind to a single amoeba; it really is very elementary stuff). So would a graph for other people competent in that field. Ben is building a human-like-enough AI. SIAI, as far as I can see, is reinventing religion: there is no attempt to work out what limitations an AI could have. Any technical counterargument is rationalized away, while any pro argument, no matter how weak, how privileged as a hypothesis, or how vague, is treated as something that has to be conclusively disproved, and the vague stuff has to be pinned down by whoever wants to disprove it. Same as for any religion, really.
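As a rough illustration of the scaling point above (a sketch with made-up numbers: the baseline instance size and the compute ratio below are assumptions, chosen only to show the shape of the arithmetic): for a task whose cost grows as 2^n, multiplying the available compute by a factor K only adds log2(K) to the largest feasible n, so even an enormous compute ratio buys a roughly constant additive gain.

    import math

    baseline_n = 40         # assumed largest instance size solvable today
    compute_ratio = 10**12  # assumed "one amoeba -> all of mankind" compute ratio

    extra_n = math.log2(compute_ratio)  # additional instance size afforded
    new_n = baseline_n + extra_n
    print(f"feasible n: {baseline_n} -> about {new_n:.0f} "
          f"({new_n / baseline_n:.1f}x)")  # ~80, i.e. roughly a 2x improvement
    # Contrast a polynomial task costing n**2: the same compute ratio would
    # multiply the feasible n by sqrt(10**12) = 10**6.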
Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that intuitions act as the puppetmaster.
Yes, this did cause me to take him more seriously than before.
The graph for Ben would probably include more paths leading from nodes for the actual design he has in mind (a learning AI)
That doesn’t seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it’s unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)
and from computational complexity theory
I’m almost certain that Eliezer and other researchers at SIAI know computational complexity theory, but disagree with your application of it. The rest of your comment seems to be a rant against SIAI rather than a comparison of the sources of SIAI’s beliefs with Ben’s, so I’m not sure how it helps to answer the question I asked.
Based on what you’ve written, I don’t see a reason to think Ben’s intuitions are much better than SI’s. Assuming, for the sake of argument, that Ben’s intuitions are somewhat, but not much, better, what do you think Ben, SI, and bystanders should each do at this point? For example should Ben keep trying to build OpenCog?
Yes, this did cause me to take him more seriously than before.
Note also that the meta is the only thing the people behind SIAI have any notable experience with (rationality studies). It is a very bad sign that they get beaten on meta by someone whom I had previously evaluated as a dramatically overoptimistic (in terms of AI’s abilities) AI developer.
That doesn’t seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it’s unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)
That’s evidence that Ben’s understanding is still not enough, and all the more evidence that SIAI’s is dramatically not enough.
I’m almost certain that Eliezer and other researchers at SIAI know computational complexity theory
“Almost certain” is an interesting thing here. With any other AI researcher who has made something usable (e.g. Ben’s bio-assay analysis), you can be much more certain. There are a lot of people in the world to pick from, and there will be a few for whom your “almost certain” fails. If you are discussing one particular person, and the choice of that person is not independent of your “almost certain” failing (and it is not independent if you pick the person by their opinions), then you can easily overestimate.
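A toy Bayes calculation of the selection effect being described (every number below is made up purely to illustrate the shape of the argument, not a claim about any actual person): even with a low base rate of an expert lacking some background, conditioning on having picked the person because of an unusual view can raise that rate a great deal.

    p_lacks = 0.01          # assumed base rate: an expert lacks the background
    p_view_if_lacks = 0.50  # assumed: chance of the unusual view given that
    p_view_if_knows = 0.02  # assumed: chance of the view anyway

    p_view = p_lacks * p_view_if_lacks + (1 - p_lacks) * p_view_if_knows
    p_lacks_given_view = p_lacks * p_view_if_lacks / p_view
    print(f"P(lacks background)                  = {p_lacks:.0%}")
    print(f"P(lacks background | picked by view) = {p_lacks_given_view:.0%}")
    # ~20%: selecting a person because of their view can erode "almost certain".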
Based on what you’ve written, I don’t see a reason to think Ben’s intuitions are much better than SI’s.
I think they are much further along toward being better, in the sense that probably no one at SI could get there without spending a decade or two studying, but they are still ultimately way short of being any good. In any case, keep in mind that Ben’s intuitions are about Ben’s project and come from working on it; there is good reason to think that if his intuitions are substantially bad, he won’t make any AI. What are SI’s intuitions about? Handwaving about unbounded idealized models (“utility maximizer” taken far too literally, I guess, again because if you don’t understand algorithmic complexity you don’t understand how little relation there can be between an idealized model and practice). Misunderstanding of how Solomonoff induction works (or of what it even is). And so on.
I’m sure they know it. It’s just that since they don’t do much actual coding, it’s not all that available to them.
I would guess they have heard of it, maybe read a wiki article at some point, and that’s all. Can you link a statement by either of them where they write about an AI having a limitation? What else do they have? I only see handwaving, mostly indicative of them not really even knowing the words they use. If they had degrees, I would have to assume that they probably, at some point in the past, passed an exam (which is not good evidence of competence either, but at least it is something).
Edit: to clarify, I am not referring to the “research associates”; see the grandchild post.
Speaking of which, a couple of days ago I noticed that a technobabble justification of atheism, on the grounds that theism fails “Solomonoff induction”, which I had seen before and judged to be complete idiocy (I am an atheist too), is by the same Luke as at SIAI. Not only does he not know what Solomonoff induction is, he also lacks the wits to know that he doesn’t know.
He has heard of it, though, and uses it as technobabble slightly better than scriptwriters would. Ultimately, the SIAI people are very talented technobabble generators, and that seems to be the extent of it. I don’t give the slightest benefit of the doubt once I see that people produce technobabble. (I used to in the past, which resulted in me reading sense into nonsense; because of the ambiguity of human language, you can form sentences such that the statement is actually generated only when someone reads your sentence, and you can do that without ever generating that statement yourself.)
If you want to change my view, you had better actually link some posts that are evidence of them knowing something, instead of calling what I say a “rant”.
If they had degrees, I would have to assume that they probably, at some point in the past, passed an exam (which is not good evidence of competence either, but at least it is something).
I count only 1 out of 11 SIAI researchers as not having a degree. (Paul Christiano’s bio hasn’t been updated yet, but he told me he just graduated from MIT.) Click these links if you want to check for yourself:
http://singinst.org/research/residentfaculty
http://singinst.org/aboutus/researchassociates
If you want to change my view, you had better actually link some posts that are evidence of them knowing something, instead of calling what I say a “rant”.
I no longer have much hope of changing your views, but rather want to encourage you to make some positive contributions (like your belief propagation graph idea) despite having views that I consider to be wrong. (I can’t resist pointing out some of the more blatant errors though, like the above.)
Among the resident faculty I see two people: Eliezer, and someone with an unspecified degree in mathematics and two years of work somewhere else.
Among the associates I see people whose extent of association with, or agreement with, the position I do not know. (Edit: I do know that Ben has been there, and Kurzweil too, people with very different views from those of SIAI, or now SI.)
The most publicly visible people I see are Luke and Eliezer. (Edit: they are the ones I refer to as “them”, since the rest look like replaceable chaff chosen for not disagreeing with the core.)