Thank you, Gerald Monroe, for answering my question.
I agree that staying on top of the weapon-development game has had its perks, but it's not completely one-sided. Wars have, to my understanding, been mostly about control and less about extermination, so the killing is in many ways optional if the counterpart waves a white flag. When two entities with roughly equal military power engage in a war, that is when the real suffering happens, I believe. That is when millions die trying to win against an equal opponent. One might argue that modern wars like Iraq or Afghanistan did have one side with a massive military advantage over its counterpart, but the US did not use its full power (nukes) and instead opted for conventional warfare. In many senses, having no military power might be the best option from a survival point of view, but you will certainly be in danger of losing your freedom.
So, I understand that you believe in your priors, and they might very well be correct in predicting the future. But I still have a hard time using any kind of priors to predict what's going to happen next, since the situation we face, with a technology as powerful as AI might turn out to be, combined with its inherent “blackboxiness,” has no precedent in history. That is why I am so surprised that so many people are willing to charge ahead with the standard “move fast, break things” Silicon Valley attitude.
Well for one thing, because you don’t have a choice in a competitive environment. Any software/hardware company in the Bay Area that doesn’t adopt AI (at least to reduce developer cost) will go broke. At a national scale, any power bloc that doesn’t develop weapons using AI will be invaded and their governments deposed. And it has historically not been effective to try to negotiate agreements not to build advanced weapons. It frankly doesn’t appear to have ever successfully happened.
See here: https://en.wikipedia.org/wiki/Washington_Naval_Treaty. The page has since been edited, but a summary of the outcome is:
The United States developed better technology to get better performance from their ships while still working within the weight limits, the United Kingdom exploited a loophole in the terms, the Italians misrepresented the weight of their vessels, and, when up against the limits, Japan left the treaty. The nations which violated the terms of the treaty did not suffer great consequences for their actions. Within little more than a decade, the treaty was abandoned.
Later arms-control agreements such as SALT left nuclear arsenals large enough to effectively preserve MAD (~4,000 nuclear warheads on each side). And agreements on chemical and biological weapons were violated, openly and privately, until the superpowers determined, after decades of R&D, that those weapons weren't worth the cost.
Thank you, Gerald Monroe, for explaining your thoughts further.
And this is what bothers me: the willingness of apparently intelligent people to risk everything. I am fine with people risking their own lives and health for whatever reason they see fit, but to relentlessly pursue AGI without anyone really knowing how to control it is NOT ok. People can't dabble with anthrax or Ebola at home for obvious reasons; they can't control it! But with AI, anything goes, and it is, if anything, encouraged by governments, universities, VCs, etc.