I cannot imagine the circumstances under which a stray hobbyist would be able to beat a massive government or corporate effort to the punch in AI. The imbalance of resources is simply too great. Concern yourself with what the goals and methodologies of those large corporate or government efforts should look like.
The possible harm of individual steps is in what kind of differential intellectual progress they represent.
I get the concept, but I am totally unconvinced that anything MIRI is putting out could increase x-risk; in fact, I think it’s wildly improbable that any research by any organization today could lower AI-related x-risk with decent odds. We’re so far from real AI that it’s as if Ernest Rutherford were trying to direct the eventual weaponization of his discoveries.
Also, if MIRI actually were sitting on something they’d researched because of supposed potential x-risk increase, I’d take it substantially less seriously as a research organization.
Really? It seems to me that they bring up the possibility more often than they would if it was a problem they’d never actually encountered before. Then again, it’s possible that they’re playing one level higher than that, or just being typically precautionary (in which case I say kudos to them for taking the precaution).
It seems to me that it works more as a cop-out when people accuse(d?) them of not publishing much: “We are doing research, we promise, but it is just too dangerous for the uninitiated; humanity is not ready and stuff.”
Then there is the stuff they try to censor because they believe that merely thinking about these ideas would drive you crazy (even if they are the only known people exposed to such ideas who apparently had a mental breakdown).
In fairness, they seem to have got better in the last year or so.
It seems to me that it works more as a cop-out when people accuse(d?) them of not publishing much: “We are doing research, we promise, but it is just too dangerous for the uninitiated; humanity is not ready and stuff.”
I don’t think I’ve ever seen them use it as an excuse like that. And I wouldn’t expect them to, since as far as I know(!) they focus mainly on pure FAI stuff like formal verification or whatever. And I can’t imagine that they would lie to claim that they have secrets that would help build an AGI but which they can’t release for humanity’s sake—it would make them sound silly, and possibly even make them a target (although I admit this is far-fetched).
As for basilisks, I abide by the principle that if a highly intelligent person, or a group of people with professional expertise in field X, says “this thing related to X is dangerous to do”, I will at least try to suppress my “push the button” impulse, even if I’m pretty sure, by my own judgement, that they’re wrong.
I think the whole basilisk thing is not (entirely) because it is “dangerous” but because they don’t want to be accused of extorting more fragile or gullible visitors. Think about it this way: if EY went full L. Ron Hubbard, wouldn’t the basilisk be one of his primary tools of cult control?
That said, I consider the whole thing pretty silly.
Well, but EY is still using arguments of the form “donate to us or the future of the Galactic Civilization is at risk”; I don’t think the Basilisk would make much difference. If anything, EY could just declare the Basilisk invalid. His behavior is not consistent with his believing the argument.
Well, still better than “donate to us or you’ll go to hell”.
How about “don’t donate to them or you’ll go to hell”? That’s what they fear. Think about it: who is more likely to exist, according to their beliefs, Satan or God? And would Satan have a problem with using such tactics to make people dismiss God?
Getting better on the basilisk stuff probably has more to do with censorship being wildly counterproductive than with any change in the estimated level of danger.
ph’nglui mglw’nafh UFAI R’lyeh wgah’nagl fhtagn