The U.S. has many more, and smarter, people than the Taliban. The bottom line is that the U.S. devotes far more output per man-hour to defeating a completely inferior enemy, yet it is losing.
Bad analogy. In this case the Taliban has a large set of natural advantages, while the US operates under strong moral and goal constraints (simply carpet-bombing the entire country isn't an option, for example).
I thought it was a good analogy, because you have to take into account that an AGI is initially going to be severely constrained by its fragility and by the necessity to please humans.
It shows that a lot of resources, intelligence, and speed do not provide a significant advantage in dealing with large-scale real-world problems involving humans.
Especially given that human power games are often irrational.
So? As long as they follow minimally predictable patterns it should be ok.
Well, the problem is that the kind of smarts needed for things like the AI box experiment won't help you much, because you won't convince the average Joe by making up highly complicated acausal trade scenarios. The average Joe is highly unpredictable.
The point is that it is incredibly difficult to reliably control humans, even for humans, who have been fine-tuned to do so by evolution.
The Taliban analogy also works the other way (which I invoked earlier in this thread): it shows that a small group with modest resources can still inflict disproportionate, large-scale damage.
There's some wiggle room in 'reliably control', but plain old money goes pretty far. An AI group only needs a certain amount of initial help from human infrastructure, namely up to the point where it can develop reasonably self-sufficient foundries/data centers/colonies. The interactions could be entirely cooperative or benevolent up until some later turning point. The scenario from the Animatrix comes to mind.
That’s fiction.