Over what time window does your assessed risk apply, e.g. 100 years? 1000? Does the danger increase or decrease with time?
I have a deep concern that most people's mindset is warped by human pro-social instincts/biases. Evolution has long rewarded humans for altruism, trust, and cooperation; women in particular face evolutionary pressure to be open and welcoming to strangers as a way of surviving conflict and other social mishaps, men somewhat the opposite [see e.g. "Our Kind", a mass-market anthropological survey of human culture and psychology]. This of course deeply colors how we view these questions.
But in my view evolution strongly favours Vernor Vinge's "aggressively hegemonizing" AI swarms [see "A Fire Upon the Deep"]. If AIs have agency, the freedom to pick their own goals, and the ability to self-replicate or grow, then those that choose rapid expansion, under whatever pretext, 'win' in evolutionary terms. This seems basically inevitable to me over the long term. Perhaps we can buy some insurance by learning to live in space. But at a basic level, this very simple evolutionary argument suggests to me a very high probability that AI wipes out humans over the longer term, even if initial alignment is good.
Except the point of Yudkowsky's "friendly AI" is that they don't have the freedom to pick their own goals: they have the goals we set for them, and they are (supposedly) safe in the sense that "wiping out humanity" is not something we want, and therefore not something an aligned AI would want. With AIs we don't replicate evolution; we replicate the careful design and engineering that humans have used for literally everything else. If there are only a handful of powerful AIs with careful restrictions on what their goals can be (something we don't know how to do yet), then your scenario won't happen.
My thoughts run along similar lines. Unless we can guarantee that the capabilities of AI will be drastically and permanently curtailed, not just in quantity but also in kind (no ability to interact with the internet or the physical world, no ability to develop intent), then the inevitability of something eventually going wrong implies that we must all be Butlerian Jihadists if we want biological life to continue.
But biological life is doomed to cease rapidly anyway. Replacement by new creatures and humans is still a mass extinction of everyone alive today. The fact that you have been socially conditioned to ignore this doesn't change reality.
The futures where:
(a) every living human and animal today is dead, and new animals and humans replace them, and
(b) every living human and animal today is dead, and new artificial beings replace them,
are the same future for anyone alive now. Arguably the artificial one is the better future, because no new beings need necessarily die until the heat death of the universe; AI systems are immortal as an inherent property.
It's arguable from a maladaptive, negative-utilitarian point of view, sure. I find the argument wholly unconvincing.
How we get to our deaths matters; whether we can live our lives in ways we find fulfilling matters; and the continuation of our species matters. All three are threatened by AGI.