What kind of things are you thinking about? Any examples?

You can, hypothetically, build some pretty different interacting systems of ML programs inside the VM I’ve been building; the idea hasn’t gotten a lot of interest. I’ve been thinking about it a fair bit recently.
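To give a flavour of what that could look like (a minimal sketch only; the VM, its API, and every name below are hypothetical, not the actual system), imagine a handful of sandboxed agents that interact only through a shared message bus:

```python
# Hypothetical sketch, not the real VM: a few "ML programs" run as
# isolated agents and interact only through a shared message bus.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Message = Tuple[str, str]  # (sender, payload)

@dataclass
class Bus:
    """Shared mailbox the sandboxed agents communicate through."""
    messages: List[Message] = field(default_factory=list)

    def post(self, sender: str, payload: str) -> None:
        self.messages.append((sender, payload))

@dataclass
class Agent:
    name: str
    step: Callable[[List[Message]], str]  # reads history, emits one message

def run(agents: List[Agent], bus: Bus, ticks: int) -> None:
    # Round-robin scheduler: each tick, every agent sees the full
    # history so far and posts one message back to the bus.
    for _ in range(ticks):
        for agent in agents:
            bus.post(agent.name, agent.step(bus.messages))

if __name__ == "__main__":
    bus = Bus()
    # Stand-ins for ML programs: one proposes, one critiques.
    proposer = Agent("proposer", lambda h: f"proposal #{len(h)}")
    critic = Agent("critic", lambda h: f"critique of: {h[-1][1]}")
    run([proposer, critic], bus, ticks=2)
    for sender, payload in bus.messages:
        print(f"{sender}: {payload}")
```

Nothing deep here; the point is just that “interacting systems” can be as small as a scheduler plus a mailbox.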
But I think the general case still stands. How would someone who has made an AGI breakthrough convince the AGI risk community without building it?
In the usual way someone who has made a breakthrough convinces others. Reputation helps. Whitepapers help. Toy examples help. Etc., etc.
I don’t understand the context, however. How does that someone know it’s a breakthrough without testing it out? And why would he be so concerned with the opinion of the AI risk community (which isn’t exactly held in high regard by most working AI researchers)?
Okay. A good metaphor might be the development of the atomic bomb. Lots of nuclear physicists thought that nuclear reactions couldn’t be harnessed for useful energy (e.g. Rutherford). Leo Szilard had the insight that you could do a chain reaction and that this might be dangerous. He did not build the bomb (he could not; fission had not yet been discovered, so nobody knew which element could sustain a chain reaction) and assigned the patent to the British Admiralty to keep it secret.
But he managed to convince other high-profile physicists that it might be dangerous without publicizing it too much (no whitepapers, etc.). He had the reputation, and the physics of these things was on far firmer ground than our wispy grasp of intelligence.
So that worked.
But how will it work for our hypothetical AI researcher who has the breakthrough, if they are not part of the in-group of AI risk people? They might be Chinese and not have a good grasp of English. They are motivated to get word of their breakthrough to, say, Elon Musk (or another influential, concerned person or group that might be able to develop it safely), but they want to keep the idea as secret as possible and have no pathway for reaching them.
One issue is that you’re judging the idea of a chain reaction as a breakthrough post factum. At the time, it was just a hypothesis, interesting but unproven. I don’t know the history of nuclear physics well enough, but I suspect there were other hypotheses, also quite interesting, which didn’t pan out and have since been forgotten.
A breakthrough idea is by definition weird and doesn’t fit into the current paradigm. At the time it’s proposed, it is difficult to separate real breakthroughs from unworkable craziness unless you can demonstrate that your breakthrough idea actually works in reality. And if you can’t—well, absent a robust theoretical proof, you will just have to be very convincing: we’re back to the usual methods mentioned above (reputation, etc.).
Claimed breakthroughs sometimes are real and sometimes are not (e.g. cold fusion). I suspect the base rates create a prior unfavourable to accepting any claimed breakthrough as real.
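To put a toy number on that prior (the figures below are invented purely for illustration): by Bayes’ rule, even a fairly discerning reviewer ends up assigning low credence to a convincing-looking claim when real breakthroughs are rare.

```python
# Toy Bayes calculation with invented numbers: how believable is a
# claimed breakthrough that manages to look convincing?
def posterior(base_rate: float, hit_rate: float, false_alarm: float) -> float:
    """P(real | convincing) via Bayes' rule."""
    p_convincing = hit_rate * base_rate + false_alarm * (1 - base_rate)
    return hit_rate * base_rate / p_convincing

# Assume 1 in 1000 claimed breakthroughs is real, a real one convinces
# a reviewer 90% of the time, and a bogus one still convinces 5%.
print(posterior(base_rate=0.001, hit_rate=0.9, false_alarm=0.05))
# ~0.018: a convincing claim is still about 98% likely to be bogus.
```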
It was interesting enough that Einstein sent a letter about it to the president, which was taken seriously, before the bomb was built. I recommend reading up on it; it is a very interesting period in history.
It would be interesting to know how many other potential breakthroughs got that treatment. And how can we make sure that the right ones, the ones that will actually pan out, get that treatment?