If you have a friendly (to you) AGI and they don’t, then you win. That’s how it works.
The implicit premise here is that AGI will be as powerful as is often suggested here. Not everyone thinks that is likely. And there are some forms of AGI that are much less likely to be that useful without some time. The most obvious example would be if the first AGIs are uploads. Modifying uploads could be very difficult given the non-modular, very tangled nature of the human brain. In time, uploads would still likely be very helpful (especially if Moore’s Law continues to hold, so that uploads become cheap and fast).
Another possible problem is that the technology may just not be easily improvable. We see diminishing marginal gains in many technologies today, even when people are working very hard on them. It isn’t clear that nanobots or narrowly tailored viruses are even possible (although I agree that they are certainly plausible). As to laser defenses: we’ve spent thirty years researching those, and it seems mainly to have given us a good understanding of why they’re really, really tough. The upshot is that AGI under your control is not an automatic win button.
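As an aside, the Moore’s Law point above can be made concrete with a toy calculation. The starting cost and doubling period below are illustrative assumptions only, not claims about actual hardware trends:

```python
# Toy illustration of a Moore's-Law-style cost curve: the price of a fixed
# amount of compute (e.g. enough to run one upload) halves every
# `doubling_period` years. All numbers are made up for illustration.
def cost_after(years, initial_cost=1_000_000.0, doubling_period=2.0):
    """Cost of a fixed amount of compute after `years`, assuming
    price/performance doubles every `doubling_period` years."""
    return initial_cost / (2 ** (years / doubling_period))

for y in (0, 10, 20):
    print(f"after {y:2d} years: ${cost_after(y):,.2f}")
# After 20 years of 2-year doublings, the same compute costs ~1/1024 as much.
```

The point is just that even an initially expensive, hard-to-modify upload becomes cheap to run (and to run fast, or in many copies) if the exponential trend holds, which is why the comment hedges on “in time.”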
there are some forms of AGI that are much less likely to be that useful without some time. The most obvious example would be if the first AGIs are uploads. Modifying uploads could be very difficult given the non-modular, very tangled nature of the human brain.
Uploads are terribly unlikely to beat engineered machine intelligence. Just think of the technology involved. I know some people argue this point, but essentially, their arguments look pretty baseless to me. Uploads coming first is a silly idea, IMO.
The US Navy is starting to use lasers as defensive weapons on some of their ships.
Lasers are also capable of shooting down missiles. A major obstacle to doing that was agreements not to weaponize space. However, now that the Cold War threat of nuclear war is gone, there isn’t the political will to implement those strategies.
Yes.
No.