AFAIK Eliezer and other folks at MIRI would be reluctant to publish some results. Did you consider the danger of some hobbyist implementing an AI that goes out of control?
None of the FAI research I’ve seen discussed here really relates to implementing an FAI; it’s more about constraining the behavior of a utility-function optimizer and keeping decision-making consistent under self-modification.
The danger of a hobbyist building an out of control AI is incredibly tiny, and would not be increased by this sort of work.
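Since the reply above leans on some jargon, here is a minimal sketch, in Python, of what “constraining the behavior of a utility-function optimizer” can mean at its very simplest; every name and number in it (the action list, the `utility` values, the `is_permitted` check) is made up for illustration and is not taken from MIRI’s actual work.

```python
# Toy illustration only: an agent that maximizes a utility function, but only
# over the actions that a separate constraint permits. Nothing here is from MIRI.

def choose_action(actions, utility, is_permitted):
    """Return the highest-utility permitted action, or None if nothing is permitted."""
    permitted = [a for a in actions if is_permitted(a)]
    if not permitted:
        return None  # refuse to act rather than violate the constraint
    return max(permitted, key=utility)

# Hypothetical example data.
actions = ["do_nothing", "optimize_recklessly", "optimize_carefully"]
utility = {"do_nothing": 0, "optimize_recklessly": 10, "optimize_carefully": 7}.get
is_permitted = lambda a: a != "optimize_recklessly"

print(choose_action(actions, utility, is_permitted))  # -> optimize_carefully
```

The sketch only illustrates the division of labor the comment describes: the research being discussed is about the shape of the constraint and the utility function, not about how to build a powerful optimizer.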
The danger is realized by what happens eventually. After decades of additional progress in theory and tools, it might become plausible for someone to build an AGI as a personal project (even if that is not a particularly probable outcome compared to the alternatives). This progress is built step by step, and even though each step may be small, the eventual danger is made out of such steps; the steps are the only thing that causes it.
The possible harm of individual steps is in what kind of differential intellectual progress they represent. You can’t stop progress; others will take the steps you avoid. But you can make some of the steps come earlier by taking them yourself before anyone else does, and you can choose what kind of steps these are: those that move you closer to the cliff, or those that teach you to fly.
I cannot imagine the circumstances under which a stray hobbyist would be able to beat a massive government or corporate effort to the punch in AI. The imbalance of resources is simply too great. Concern yourself with what the goals and methodologies of those large corporate or government efforts should look like.
I get the concept, but I am totally unconvinced that anything MIRI is putting out could increase x-risk; in fact I think it’s wildly improbable that any research by any organization today could lower AI-related x-risk with decent odds. We’re so far from real AI that it’s as if Ernest Rutherford were trying to direct the eventual weaponization of his discoveries.
Also, if MIRI actually were sitting on something they’d researched because of supposed potential x-risk increase, I’d take it substantially less seriously as a research organization.
Really? It seems to me that they bring up the possibility more often than they would if it were a problem they’d never actually encountered before. Then again, it’s possible that they’re playing one level higher than that, or just being typically precautionary (in which case I say kudos to them for taking the precaution).
It seems to me that it works more as a cop-out when people accuse(d?) them of not publishing much: “We are doing research, we promise, but it is just too dangerous for the uninitiated; humanity is not ready and stuff.”
Then there is the stuff they try to censor because they believe that merely thinking about these ideas would drive you crazy (even though, of all the people known to have been exposed to such ideas, they are apparently the only ones who had a mental breakdown).
In fairness, they seem to have gotten better in the last year or so.
I don’t think I’ve ever seen them use it as an excuse like that. And I wouldn’t expect them to, since as far as I know(!) they focus mainly on pure FAI stuff like formal verification or whatever. And I can’t imagine that they would lie to claim that they have secrets that would help build an AGI but which they can’t release for humanity’s sake—it would make them sound silly, and possibly even make them a target (although I admit this is far-fetched).
As for basilisks, I abide by the principle that if a highly intelligent person or group of people with professional expertise in field X says “this thing related to X is dangerous to do”, I will at least try to suppress my “push the button” impulse, even if I’m pretty sure, by my own judgement, that they’re wrong.
I think the whole basilisk thing is not (entirely) because it is “dangerous” but because they don’t want to be accused of extorting more fragile or gullible visitors. Think about it this way: if EY went full L. Ron Hubbard, wouldn’t the basilisk be one of his primary tools of cult control?
That said, I consider the whole thing pretty silly.
Well, but EY is still using arguments of the form “donate to us or the future of the Galactic Civilization is at risk”; I don’t think the Basilisk would make much difference. If anything, EY could just declare the Basilisk invalid. His behavior is not consistent with him believing the argument.
Well, still better than “donate to us or you’ll go to hell”.
How about “don’t donate to them or you’ll go to hell”? That’s what they fear. Think about it: who is more likely to exist, according to their beliefs, Satan or God? And would Satan have a problem with using such tactics in order to make people dismiss God?
ph’nglui mglw’nafh UFAI R’lyeh wgah’nagl fhtagn
Getting better on the basilisk stuff probably has more to do with censorship being wildly counterproductive than with any change in the estimated level of danger.
Unless you can show that any of the steps you are proposing to constrain has a non-negligible chance of causing some real damage, this is a Pascal’s mugging type of argument.
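To make the shape of that objection concrete, here is a toy calculation in Python with entirely made-up numbers: the hallmark of a Pascal’s-mugging style argument is that a large enough claimed harm can make the expected harm look enormous no matter how tiny the probability is, which is why the demand here is for evidence that the probability of real damage is non-negligible in the first place.

```python
# Toy numbers only, purely to illustrate the structure of the argument.

def expected_harm(probability, harm):
    return probability * harm

tiny_p = 1e-12  # a probability most people would call negligible
for claimed_harm in (1e6, 1e12, 1e18, 1e24):
    print(f"claimed harm = {claimed_harm:.0e}: "
          f"expected harm = {expected_harm(tiny_p, claimed_harm):.0e}")

# For a big enough claimed harm, p * harm exceeds any ordinary cost of caution
# even though p itself never changes; the objection is that such reasoning
# should be resisted unless p can be shown to be non-negligible.
```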
Hi roland, thx for commenting!
Yes, I considered it. Ultimately we will need to establish ground rules on what is safe for publication and what isn’t. Presumably, information which isn’t safe for publication will only be accessible to a select group of individuals. Maybe MIRI already has such a repository; I can’t tell. Nevertheless, MIRI publishes much (most? all?) of its research.
I believe that the benefits of having a discussion group about FAI outweigh the risks by a wide margin. For one thing, much of the stuff we will discuss will only or mostly be useful to FAI as opposed to UFAI. For another, we need more people working on the problem. For yet another, everything published on the group will mostly be consumed by people interested in FAI, so it will usually increase the probability of FAI construction more than the probability of UFAI construction.
LOL. Would you care to assign a probability to that? X-D