I am not sure that a theory of mind is needed here. If one were to treat humans as a natural phenomenon, living, like the tuberculosis bacillus, or non-living, like ice spreading over a lake in freezing temperatures, then the overt behavioral aspects are all that is needed to detect a threat to be eliminated. And then it’s trivial to find a means to dispose of the threat: humans are fragile and stupid and have created a lot of ready means of mass destruction.
Human behavior is much more complex than ice spreading over a lake. So it’s actually simplifying the situation to think in terms of “agents that have goals – what do I predict they want?”, in a way that it wouldn’t be for ice.
Every behavior is complex when you look into the details. But the general patterns are often quite simple. And humans are no exception. They expand and take over; that part is easy to predict. Sometimes the expansion stalls for a time, but then resumes. What do you think is so different in the overall human patterns from other natural phenomena?
If that’s true, why do you think humans have a theory of mind?
Not sure what you are asking, or how it is relevant to the general patterns that could trigger an adverse AI response. Also, how much of your stance is driven by the “humans are special” belief?
Ah, yeah, I just re-read the opening thread and then re-read your comment, and I think I just agree.
And then it’s trivial to find a means to dispose of the threat: humans are fragile and stupid and have created a lot of ready means of mass destruction.
If by “a lot of ready means of mass destruction” you’re thinking of nukes, it doesn’t seem trivial to design a way to use nukes to destroy / neutralize all humans without jeopardizing the AGI’s own survival.
We don’t have a way of reliably modeling the results of very many simultaneous nuclear blasts, and it seems like the AGI probably wouldn’t have a way to reliably model this either unless it ran more empirical tests (which would be easy to notice).
It seems like an AGI wouldn’t execute a “kill all humans” plan unless it was confident that executing the plan would in expectation result in a higher chance of its own survival than not executing the plan. I don’t see how an AGI could become confident about high-variance “kill all humans” plans like using nukes without having much better predictive models than we do. (And it seems like more empirical data about what multiple simultaneous nuclear explosions do would be required to have better models for this case.)
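To make that comparison concrete, here is a minimal sketch with entirely made-up numbers (the function and probabilities below are illustrative assumptions, not anything claimed in the thread): a plan is only worth executing if the expected survival probability with the plan exceeds the expected survival probability without it, and a high-variance plan with poorly modeled downside can easily fail that test.

```python
# Toy sketch (made-up numbers) of the decision rule described above:
# execute a plan only if it raises the expected probability of the
# agent's own survival relative to not executing it.

def expected_survival(outcomes):
    """outcomes: list of (probability of outcome, survival probability given that outcome)."""
    return sum(p * s for p, s in outcomes)

# Hypothetical baseline: survival chance if the AGI does nothing drastic.
p_survive_do_nothing = 0.90

# A high-variance "use nukes" plan: it might work cleanly, or it might
# backfire in ways the AGI cannot model well (retaliation, lost
# infrastructure, unanticipated side effects).
p_survive_plan = expected_survival([
    (0.5, 0.99),  # plan succeeds as intended
    (0.5, 0.40),  # plan backfires in some unmodeled way
])

print(p_survive_plan)                         # 0.695
print(p_survive_plan > p_survive_do_nothing)  # False -> plan not executed
```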
Humans are trivial to kill: physically, chemically, biologically, or psychologically. And a combination of those would be even more effective in collapsing the human population. I will not go into the details here, to avoid arguments and negative attention. And if your argument is that humans are tough to kill, then look at the historical data on population collapses, and those happened without any adversarial pressure. Or with it, if you consider the indigenous population of the American continent.