AGI research is not an altogether well-defined area. There are no well-established theorems, measurements, design insights, or the like. And there is plenty of overlap with other fields, such as theoretical computer science.
My impression is that many of the people commenting have enough of a computer science, engineering, or math background to be worth listening to.
The LW community takes Yudkowsky seriously when he talks about quantum mechanics—and indeed, he has cogent things to say. I think we ought to see who has something worth saying about AGI and risk.
He has found cogent things to repeat. Big difference. I knew of MWI long before I even heard of Eliezer; nothing he presents is new, and he doesn’t present any actual counterarguments or ways it might be false, so he deserves −1 for that, and further discounting on anything he talks about, due to one-sided presentation of personal beliefs. (The biggest issue I can see is that we need QM to reproduce GR at large scales, and we can’t figure out how to do that. And insofar as QM does not reproduce GR at large scales, what we know doesn’t work for massive objects as a matter of physical fact, which means we don’t know whether there is superposition of macroscopic states or not.)
Furthermore, if I needed someone to actually do any QM, e.g. for semiconductors or for building a quantum computer, he would not get hired, because he doesn’t really know anything from QM that is useful (and he got the phases wrong in his interferometer example, but that’s a minor point).
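For reference, here is a minimal sketch of the textbook phase convention for a balanced Mach-Zehnder interferometer, assuming lossless 50/50 beam splitters with transmission amplitude 1/\sqrt{2} and reflection amplitude i/\sqrt{2}; the port labels a, c, d, e, f are illustrative only, and this is the standard treatment rather than a reconstruction of the example being criticized:

\[
|a\rangle \;\xrightarrow{\ \mathrm{BS}_1\ }\; \tfrac{1}{\sqrt{2}}\bigl(|c\rangle + i\,|d\rangle\bigr)
\;\xrightarrow{\ \mathrm{BS}_2\ }\; \tfrac{1}{2}\Bigl[\bigl(|e\rangle + i\,|f\rangle\bigr) + i\bigl(|f\rangle + i\,|e\rangle\bigr)\Bigr] = i\,|f\rangle ,
\]

so the amplitudes for port e cancel, since \(1 + i^2 = 0\) (the common mirror phase in both arms drops out as a global factor), and every photon exits port f. Different texts use different beam-splitter phase conventions; unitarity only fixes the relative phases, so which port ends up dark depends on the convention, but for a balanced, lossless interferometer one port always does. Getting a relative phase wrong in this setup changes which output port shows the destructive interference.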
Let’s stipulate for a minute that he has only found cogent things to repeat. I wasn’t making any claim about novelty: I just wanted to show that non-experts are sometimes able to make points worth listening to.
I think readers here on LW might have cogent things to repeat about AGI, and I urge them to do so in those cases, even if they aren’t working on the topic professionally.
“Make”, again, implies creation. And repeating cogent points is not automatically useful; an anti-vaccination campaigner can also repeat some cogent things (for example, some vaccine preservatives really are toxic). The issue is which things he chooses to repeat, and the unknown extent of cherry-picking easily makes someone not worth listening to, given the huge number of sources one could listen to instead.
The presentation of the MWI issue is very biased and one-sided. By the way, I have nothing against MWI; if I had to pick an interpretation I would pick MWI (unless I actually need to calculate something, in which case I use collapse as early as I can get away with).