If AI is un-alignable (or at least significantly easier to create than to keep aligned), the point of no return was 1837, when Babbage described the Analytical Engine. If Babbage had kept his mouth shut, maybe we could have avoided this path.
But really, it’s a mistake to think of it as a single point in time. There’s a slew of contributing factors, happening over a long time period. It’s somewhat similar to recent discussions about human revolutions (https://www.lesswrong.com/posts/osYFcQtxnRKB4F4HA/a-tale-from-communist-china and others). It happens slowly, then quickly, and the possible interventions are very unclear at any point.
“Career plans which take a long time to pay off are a bad idea, because by the time you reap the benefits of the plans it may already be too late.”
This is true, even if AI takeover never happens. The environment changes significantly over a human lifetime, and the only reasonable strategy is to thread a path that has BOTH long-term impact (to the extent that you can predict anything) AND short-term satisfaction. “Find a job you enjoy doing, and you will never have to work a day in your life” remains solid advice, whatever the reasons for uncertainty.
What does “inherently a weapons technology” mean? Given some technology, how does one determine whether or not it is “inherently a weapons technology”?
I ask because it seems to me that AI is clearly not “inherently a weapons technology” as I would use those words, and I suspect you mean something different by them.
Regardless, any generalization of AI that includes (e.g.) pointed sticks and flint arrowheads is surely too broad for present purposes; even if “how do we stop humans screwing everything up with whatever tools they have available?” is a more important question than “how do we stop AIs screwing up in ways that their makers and owners would be horrified by?”, it’s a different question, with (probably) different answers, and the latter is the subject here.
I don’t agree with your answer to your rhetorical question. A kitchen knife can cause injury and death pretty easily, but while it can be a weapon I wouldn’t say that kitchen knives are “inherently a weapons technology”. A brick can cause injury and death pretty easily too, and bricks are certainly not “inherently a weapons technology”.
I would only say that something is “inherently a weapons technology” if (1) a major motivation for its development is (broadly speaking) military and/or (2) what it’s best at is causing injury, destruction and death.
Military organizations have put quite a lot of effort into AI, but so have plenty of non-military organizations and it looks to me as if the latter have had much more (visible) success than the former. And so far, the things AI has proven most useful for are things like distinguishing cats from dogs, translating text, and beating humans at board games. Those (or things like them) may well have military applications, but they aren’t weapons. (Not even when applied militarily. A better way of spotting enemy tanks makes your weapons more effective, but it isn’t itself a weapon.)
Both you and Dagon can point your fingers wherever you like. The more interesting question is where it’s useful to point your fingers.