(I don’t really know how to phrase this argument cleanly; help and suggestions welcome. I’m just trying to convey my general feeling of “I don’t even know enough to answer, and I suspect neither do most people here.”)
I would phrase it as holding off judgement until we hear further information, i.e. SI’s response to this. And in addition to the reasons you give, not deciding who’s right ahead of time helps us avoid becoming attached to one side.
I think what’s needed isn’t further information as much as better intuitions, and getting those isn’t just a matter of reading SIAI’s response.
A bit like if there’s a big public disagreement between two primatologists who have spent years working with chimps in Africa, about the best way to take a toy from a chimp without getting your arm ripped off. At least one of the primatologists is wrong, but even after hearing all of their arguments, a member of the uninformed public can’t really decide between them, because their positions are based on a bunch of intuitions that are very hard to communicate. Deciding “who is wrong” based on the public debate would mean working from much less information than either of the parties has (provided neither appears obviously stupid, irrational, or dishonest even to a member of the public).
People seem more ready to pontificate about AI, the future, and morality than about chimpanzees, but I don’t think we should be. The best position for laymen on a topic on which experts disagree is one of uncertainty.
The primatologists’ intuitions would probably stem from their direct observations of chimps. I would trust their intuitions much less if they were based on long serious thinking about primates without any observation, which is likely the more precise analogy of the positions held in the AI risk debate.
AGI research is not an altogether well-defined area. There are no well-established theorems, measurements, design insights, or the like. And there is plenty of overlap with other fields, such as theoretical computer science.
My impression is that many of the people commenting have enough of a computer science, engineering, or math background to be worth listening to.
The LW community takes Yudkowsky seriously when he talks about quantum mechanics—and indeed, he has cogent things to say. I think we ought to see who has something worth saying about AGI and risk.
He has found cogent things to repeat. Big difference. I knew of MWI long before I even heard of Eliezer; nothing he presents is new, and he doesn’t present any actual counterarguments or ways it may be false, so he deserves −1 for that, and further discounting on anything he talks about, due to one-sided presentation of personal beliefs. (The biggest issue I can see is that we need QM to reproduce GR at large scales, and we can’t figure out how to do that. And insofar as QM does not reproduce GR at large scales, what we know doesn’t work for massive objects, as a matter of physical fact, which means we don’t know whether there is superposition of macroscopic states or not.)
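To spell out what a “superposition of macroscopic states” would even look like, here is a minimal sketch in standard bra-ket notation (the labels are illustrative, not from any specific experiment):

$$ |\psi\rangle = \tfrac{1}{\sqrt{2}}\,\bigl( |\text{pointer left}\rangle + |\text{pointer right}\rangle \bigr) $$

Unitary QM permits such a state at any mass scale; the open question is whether the formalism still holds once the gravitational fields of the two superposed mass configurations differ appreciably, and as far as I know no experiment has probed that regime.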
Furthermore, if I needed someone to actually do any QM (e.g. for semiconductors, or for building a quantum computer, or the like), he would not get hired, because he doesn’t really know anything from QM that is useful (and he got the phases wrong in his interferometer example, but that’s a minor point).
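For anyone who wants to check the phase bookkeeping themselves, here is the standard textbook calculation for a Mach-Zehnder interferometer with equal arms, under one common beam-splitter convention (transmission adds no phase, each reflection contributes a factor of $i$); this is just the usual result, not a claim about which convention the sequence’s example used:

$$ |a\rangle \;\to\; \tfrac{1}{\sqrt{2}}\bigl(|1\rangle + i|2\rangle\bigr) $$

after the first 50/50 splitter, where $|1\rangle$ and $|2\rangle$ label the two arms. The second splitter maps $|1\rangle \to \tfrac{1}{\sqrt{2}}(|d_1\rangle + i|d_2\rangle)$ and $|2\rangle \to \tfrac{1}{\sqrt{2}}(|d_2\rangle + i|d_1\rangle)$, giving

$$ \tfrac{1}{2}\bigl(|d_1\rangle + i|d_2\rangle + i|d_2\rangle + i^2|d_1\rangle\bigr) \;=\; i\,|d_2\rangle. $$

The two paths to $d_1$ differ by two reflections, a relative phase of $\pi$, and cancel, so every photon exits at $d_2$. Getting a sign or a factor of $i$ wrong anywhere flips which detector fires.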
Let’s stipulate that for a minute. I wasn’t making any claim about novelty: I just wanted to show that non-experts are sometimes able to make points worth listening to.
I think readers here on LW might have cogent things to repeat about AGI, and I urge them to do so in those cases, even if they aren’t working on the topic professionally.
“Make” again implies creation; my point is that he repeats rather than makes.
Repeating cogent points is not automatically useful; an anti-vaccination campaigner, too, can repeat some cogent things (for example, it is the case that some vaccine preservatives really are toxic). The issue is which things he chooses to repeat, and the unknown extent of cherry-picking easily makes someone not worth listening to (given that there is a huge number of sources one could listen to).
The presentation of the MWI issue is very biased and one-sided. By the way, I have nothing against MWI; if I had to pick an interpretation, I would pick MWI (unless I actually need to calculate something, in which case I collapse as early as I can get away with).