That’s the critical mistake. AIs don’t turn evil. If they could, we would have FAI half-solved.
AIs deviate from their intended programming, in ways that are dangerous for humans. And it’s not thousands of years away; it’s as close as a self-driving car crashing into a group of people to avoid a dog crossing the street.
Even your clarification seems too anthropomorphic to me.
AIs don’t turn evil, but I don’t think they deviate from their programming either. Their programming deviates from their programmers’ values. (Or, another possibility, their programmers’ values deviate from humanity’s values.)
Programming != intended programming.
AIs don’t turn evil, but I don’t think they deviate from their programming either.
They do, if they are self-improving, although I imagine you could collapse “programming” and “meta-programming”, in which case an AI would only partially deviate. The point is that you couldn’t expect things to turn out so simply when talking about a runaway AI.
AIs deviate from their intended programming, in ways that are dangerous for humans. And it’s not thousands of years away; it’s as close as a self-driving car crashing into a group of people to avoid a dog crossing the street.
But that’s a very different kind of issue than AI taking over the world and killing or enslaving all humans.
EDIT:
To expand: all technologies introduce safety issues. Once we got fire, some people got burnt. This doesn’t imply that UFFire (Unfriendly Fire) is the most pressing existential risk for humanity, or that we must devote huge amounts of resources to preventing it and never use fire until we have proved that it will not turn “unfriendly”.
Well, there’s a phenomenon called “flashover”, which occurs in a confined environment when the temperature of a fire becomes so high that all the substances within start to burn and feed the reaction.
Now, imagine that the whole world could become a closed environment for a flashover...
So we should stop using fire until we prove that the world will not burst into flames?
However, UFFire does not uncontrollably and exponentially reproduce or improve its functioning. Certainly, a conflagration on a planet covered entirely by dry forest would become an unmitigable problem rather quickly.
In fact, in such a scenario, we should dedicate a huge amount of resources to preventing it and never use fire until we have proved it will not turn “unfriendly”.
However, UFFire does not uncontrollably and exponentially reproduce or improve its functioning. Certainly, a conflagration on a planet covered entirely by dry forest would become an unmitigable problem rather quickly.
Do you realize this is a totally hypothetical scenario?