but because of what their inability to stop or take any serious precautions, despite their belief that they are about to create AGI, tells us about human nature.
Are these in any way a representative sample of normal humans? In order to be in this category one generally needs to be pretty high on the crank scale along with some healthy Dunning-Kruger issues.
That’s always been the argument: future AGI scientists won’t be as crazy as the lunatics presently doing it, because the current crowd of researchers is self-selected for incaution. I wouldn’t put too much weight on that, though; it seems like a very human behavior. Some of the smarter ones with millions of dollars don’t seem to be below average in competence in any other way, and the VCs funding them are similarly incapable of backing off even when they say they expect human-level AGI to be created.
Sorry, I’m confused. By “people like this” did you mean people like FinalState or did you mean professional AI researchers? I interpreted it as the first.
AGI researchers sound a lot like FinalState when they think they’ll have AGI cracked in two years.
Eliezer &lt; anyone with actual notable accomplishments. Edit: damn it, you edited your message.
Over 140 posts and 0 total karma; that’s persistence.
private_messaging says he’s Dmytry, who has positive karma. It’s possible that the more anonymous-sounding name encourages worse behaviour though.
Before people downvote PM’s comment above, note that Eliezer’s comment prior to editing was a hierarchy of different AI researchers, with the lowest being people like FinalState, the second highest being professional AI researchers, and the highest being “top AI researchers”.
With that out of the way, what do you think you are accomplishing with this remark? You have a variety of valid points to make, but I fail to see how this remark does anything at all.
Me or Eliezer? I’m making a point by direct demonstration. It’s a popular ranking system, ya know? He used it on FinalState; a lot of people use it on him.
There’s got to be a level beyond “arguments as soldiers” to describe your current approach to ineffective contrarianism.
I volunteer “arguments as cannon fodder.”