I think most people on this site (including me and you, private_messaging/Dmytry) don’t have any particular insight that gives them more information than those who have seriously thought about this for a long time (like Eliezer, Ben Goertzel, Robin Hanson, Holden Karnofsky, Lukeprog, possibly Wei Dai, cousin_it, etc.), so our opinion on “who is right” is not worth much.
I’d much rather see an attempt to cleanly map out where knowledgeable people disagree than polls of what ignorant people like me think.
Similarly, if two senior economists have a public disagreement about international trade and fiscal policy, a poll of a bunch of graduate students on those issues is not going to provide much new information to either economist.
(I don’t really know how to phrase this argument cleanly; help and suggestions welcome. I’m just trying to convey my general feeling of “I don’t even know enough to answer, and I suspect neither do most people here.”)
I would phrase it as holding off on judgement until we hear further information, i.e. SI’s response to this. And in addition to the reasons you give, not deciding who’s right ahead of time helps us avoid becoming attached to one side.
I think what’s needed isn’t so much further information as better intuitions, and getting those isn’t just a matter of reading SIAI’s response.
It’s a bit like a big public disagreement between two primatologists who have spent years working with chimps in Africa, about the best way to take a toy from a chimp without getting your arm ripped off. At least one of the primatologists is wrong, but even after hearing all of their arguments, a member of the uninformed public can’t really decide between them, because their positions rest on a bunch of intuitions that are very hard to communicate. Deciding “who is wrong” based on the public debate would mean working from much less information than either of the parties has (provided neither appears obviously stupid, irrational, or dishonest even to a member of the public).
People seem more ready to pontificate on AI, the future, and morality than on chimpanzees, but I don’t think we should be. The best position for a layman on a topic where experts disagree is one of uncertainty.
The primatologists’ intuitions would probably stem from their direct observations of chimps. I would trust their intuitions much less if they were based on long, serious thinking about primates without any observation, which is likely the more precise analogy to the positions held in the AI risk debate.
AGI research is not an altogether well-defined area. There are no well-established theorems, measurements, design insights, or the like. And there is plenty of overlap with other fields, such as theoretical computer science.
My impression is that many of the people commenting have enough of a computer science, engineering, or math background to be worth listening to.
The LW community takes Yudkowsky seriously when he talks about quantum mechanics—and indeed, he has cogent things to say. I think we ought to see who has something worth saying about AGI and risk.
He has found cogent things to repeat. Big difference. I knew of MWI long before I ever heard of Eliezer; nothing he presents is new, and he doesn’t present any actual counterarguments or ways it might be false, so he deserves a −1 for that, and further discounting on anything he talks about, due to one-sided presentation of personal beliefs. (The biggest issue I can see is that we need QM to yield GR at large scales, and we can’t figure out how to do that. And insofar as QM does not yield GR at large scales, what we know doesn’t work for massive objects (as a matter of physical fact), which means we don’t know whether there is superposition of macroscopic states or not.)
Furthermore, if I needed someone to actually do any QM (e.g., for semiconductors, or for building a quantum computer), he would not get hired, because he doesn’t really know anything from QM that is useful (and he got the phases wrong in his interferometer example, though that’s a minor point).
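(For reference, and just as a sketch of the textbook convention rather than a reconstruction of his actual post: a symmetric lossless 50/50 beam splitter acts on its two input modes as

$$|a\rangle \mapsto \tfrac{1}{\sqrt{2}}\bigl(|a\rangle + i\,|b\rangle\bigr), \qquad |b\rangle \mapsto \tfrac{1}{\sqrt{2}}\bigl(i\,|a\rangle + |b\rangle\bigr),$$

i.e. the transfer matrix $\tfrac{1}{\sqrt{2}}\begin{pmatrix}1 & i\\ i & 1\end{pmatrix}$. The relative factor of $i$ picked up on reflection is one standard phase convention consistent with unitarity; keeping such phases consistent across a whole Mach–Zehnder setup is exactly the kind of detail you only get right by actually doing the calculation.)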
He has found cogent things to repeat. Big difference.
Let’s stipulate that for a minute. I wasn’t making any claim about novelty: I just wanted to show that non-experts are sometimes able to make points worth listening to.
I think readers here on LW might have cogent things to repeat about AGI, and in those cases I urge them to do so, even if they aren’t working on the topic professionally.
Repeating cogent points is not automatically useful; an anti-vaccination campaigner can also repeat some cogent things (it is true, for example, that some vaccine preservatives really are toxic). The issue is which things he chooses to repeat, and the unknown extent of the cherry-picking easily makes one not worth listening to (given that there is a huge number of sources to listen to).
The presentation of the MWI issue is very biased and one-sided. By the way, I have nothing against MWI; if I had to pick an interpretation, I would pick MWI (unless I actually need to calculate something, in which case I collapse as early as I can get away with).
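To illustrate what I mean by collapsing early, here is a toy numpy sketch (the two-qubit state and the measurement are my own made-up example, not anything from the debate above): once part of a system has been measured, you project onto the observed outcome and renormalize, and from then on you track only the smaller surviving state instead of the full superposition.

```python
import numpy as np

# Toy example: the entangled state |psi> = (|00> + |11>)/sqrt(2),
# stored as a length-4 vector in the basis order |00>, |01>, |10>, |11>.
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

# "Collapse early": suppose the first qubit is measured and comes out 0.
# Project onto that outcome and renormalize, instead of dragging the
# full entangled state through the rest of the calculation.
outcome = 0
branch = psi.reshape(2, 2)[outcome]  # amplitudes of the second qubit on this branch
p = np.vdot(branch, branch).real     # probability of the outcome (here 0.5)
reduced = branch / np.sqrt(p)        # renormalized state of the second qubit alone

print(p)        # 0.5
print(reduced)  # [1.+0.j 0.+0.j]: the second qubit is now just |0>
```

The payoff is purely computational: the projected state is much smaller than the superposition you would otherwise have to carry, which is why “collapse as early as you can get away with” is the practical rule even if one philosophically prefers MWI.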
Well, I have spent many years of my life studying technical topics, and I have certain technical accomplishments, so it is generally a bad strategy for me to assume superior knowledge in anyone who has ‘thought longer’ about a subject, especially when it is not very hard to see that nothing conclusive could be concluded about the topic at this time, or that the tools (mathematics) are not yet where they need to be.
Furthermore, if I look at your list, those whom I disagree with most (Eliezer, Lukeprog) appear to have the least training in the subject matter (and jumped onto a very difficult subject without doing any notable training with good feedback). Lukeprog in particular was doing theology until the age of 22 and inevitably picked up plenty of bad habits of thought; if I were him, I would steer clear of things that can trigger old conditioned theist instincts, leaving those perhaps to people who never had such instincts conditioned into them in the first place. (I originally thought Luke was making the BS up, but it seems to me now that he is merely still acting as a vehicle for the religious BS, and could improve over time.)
non-experts are sometimes able to make points worth listening to
“Make” again implies creation.