And the only justification you seem to give for “they’re gonna kill us” is “powers not involved in developing it will be unhappy”.
By “they’re gonna kill us” I assume you mean our potential adversaries. Well, by “powers” I essentially meant other nations, the general public, religious institutions and perhaps even corporations.
You are of course right that I can’t prove the public reaction to AGI development will be highly negative, but I think I did give a sensible justification: self-improving AGI poses a greater threat than nuclear warheads, and when people realize this (which I expect they will in ~30 years), I confidently predict their reaction will be highly negative.
I’ll also add that I didn’t pose any specific scenarios like public lynchings. There are numerous other ways to repress and shut down AGI research, and nowhere did I speculate that an angry mob would kill the researchers.
Why not make self-improving AGI research open-source, you ask? Essentially for the same reasons why biological weapons don’t get developed in open-source projects: someone could simply steal the code and release an unsafe AI that may kill us all. (By the way, at the current stage of AGI development an open-source project may be a terrific way to move things along, but once things get more sophisticated you can’t put self-improving AGI code “out there” for the whole world to see and modify; that’s just madness.) As for how likely I think a worldwide democratic consensus on developing self-improving AGI is, I believe I made my point and don’t need to elaborate further.
People were quite enthusiastic about nukes when they were first introduced. It’s all a matter of perception and timing.
nowhere did I speculate that an angry mob would kill the researchers
I know you didn’t, I was speaking figuratively. My bad.
for the same reasons why biological weapons don’t get developed in open-source projects
AFAIK, biological weapons don’t get developed at all, mostly because of how incredibly dangerous and unreliable they are. There’s a lot of international scrutiny over this, with nations monitoring each other and themselves. Perhaps the same policy can and should be imposed on AGI?
that’s just madness
Blasphemy! Why would that be so?
I think I made my point
You explained your opinion, but haven’t justified it to my satisfaction. A lot of your argument is implicit, and I suspect that if we made it explicit we’d find out it’s based on unwarranted heuristics, i.e. prejudice. Please don’t take this personally: you’re suggesting an important update of my beliefs, and I want to be thorough before adopting it.