I think the relation of information to power, historically, will be too complicated and too intertwined with other variables for you to discover. Even if you count those other variables as part of the starting conditions, technology has a tendency to make different aspects of the starting conditions salient, such that you can’t evaluate the effect starting conditions have on power until you know the path of technology. Moreover, power is a function of memetic technologies as much as physical technologies, and the former will be even more difficult to quantify. You’d be better off making the function of information to power a variable in your models.
But if you persist you should keep in mind the distinction between offense-dominant and defense-dominant systems. Offense-dominant systems occur when technology is good at invading/destroying enemies but bad at protecting you from them; defense-dominant systems are the reverse. Knowing whether a technology is offensive or defensive is crucial for understanding its impact. Some of history’s biggest military blunders occurred when one side misapprehended its system. For example, in WWI the Germans thought they were still in an offense-dominant system, so they executed the Schlieffen Plan and tried to beat France quickly, then shift their forces east to avoid a two-front war. But machine guns and trench-warfare technology meant that it was much, much harder to take territory than anyone had expected, and so the Germans couldn’t take France fast enough (there were other reasons too). Germany wasn’t alone in making this mistake; lots of countries thought the war would finish quickly. Then in WWII much of Europe was still thinking in terms of the defense-dominant system of WWI and so was shocked at the speed of German progress… but such progress was inevitable given advances in tanks and aircraft that rendered trench-warfare tactics useless.
The more agents perceive that their system is offense-dominant, the more unstable the system is, since agents estimate the benefits of doing well in a conflict, and the costs of doing poorly, to be high. Mutual second-strike nuclear capability is actually an extremely stable system for this reason, and mutual first-strike capability is about as bad as it gets. Anyway, it seems this distinction would be important for any modeling of power.
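To make the instability point concrete, here is a toy sketch; every number and the payoff structure are invented for illustration, not estimates. It compares the expected payoff of striking first versus waiting as the perceived first-mover advantage grows.

```python
# Toy sketch: expected payoff of striking first vs. waiting in a two-agent
# standoff, as perceived offense-dominance (first-mover win probability) varies.
# All values are invented for illustration.

def expected_payoff(strike_first: bool, advantage: float,
                    win_value: float = 1.0, loss_cost: float = -1.0) -> float:
    """advantage: perceived probability that whoever moves first wins outright.
    If you wait, the model assumes the other side strikes, so the advantage
    works against you."""
    p_win = advantage if strike_first else 1.0 - advantage
    return p_win * win_value + (1.0 - p_win) * loss_cost

for adv in (0.5, 0.7, 0.9):  # roughly: balanced -> strongly offense-dominant
    first = expected_payoff(True, adv)
    wait = expected_payoff(False, adv)
    print(f"advantage={adv:.1f}  strike_first={first:+.2f}  wait={wait:+.2f}")
# As the perceived advantage rises, striking first dominates waiting for both
# agents, which is the instability of a mutual first-strike system.
```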
Interesting point about offensive/defensive power.
You’d be better off making the function of information to power a variable in your models.
Given an amount of information, I need to compute a corresponding amount of power. “Make it a variable” doesn’t help. It’s already a variable. I have too many variables. That’s why I want to make it a function.
Right, I understand why you want to make it a function. But as I see it your practical options are: 1) make a gross generalization from insufficient data that might have no relation to the function of information to power in the future, and hope that you get close enough to reality that your model has at least some accuracy in its predictions, OR 2) come up with 4-5 plausible but as-different-as-possible functions relating information to power and model with each of them. The result is 4-5 times more predictions to sort through, and instead of conclusions like “Starting conditions x,y,z, lead to a singleton” you’ll get conclusions like “Starting conditions x,y,z, given assumptions a,b,c about the relation of information to power, lead to a singleton.” The second option is harder and less conclusive. But it is also less wrong.
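A minimal sketch of what option 2 could look like in practice, assuming a toy model where a singleton forms whenever one agent’s share of total power passes a threshold. The candidate functions, the threshold, and the starting endowments are all illustrative assumptions, not claims about the real relation of information to power.

```python
# Sketch of option 2: run the same toy model under several assumed
# info -> power functions and report the conditional conclusions.
import math

CANDIDATE_FUNCTIONS = {
    "linear":      lambda info: info,
    "logarithmic": lambda info: math.log1p(info),
    "quadratic":   lambda info: info ** 2,
    "threshold":   lambda info: 0.1 if info < 5 else info,  # power "unlocks" late
}

def singleton_emerges(info_levels, power_fn, dominance=0.6):
    """True if one agent holds more than `dominance` of total power
    under the assumed info -> power function."""
    powers = [power_fn(i) for i in info_levels]
    return max(powers) / sum(powers) > dominance

starting_conditions = [1.0, 2.0, 8.0]  # hypothetical info endowments of 3 agents
for name, fn in CANDIDATE_FUNCTIONS.items():
    print(f"{name:>12}: singleton={singleton_emerges(starting_conditions, fn)}")
# Same starting conditions, different assumed functions, different conclusions:
# that is exactly the "given assumptions a,b,c" form of result.
```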
One more thing about the offense/defense distinction. One implication of the theory is that technological advancement can actually undermine an agent’s position in a multi-polar system. If Agent A develops an offensive weapon that guarantees victory if Agent A strikes first, then other agents are basically forced to attack preemptively and likely gang up on Agent A. Given this particular input of more information, the function of information to power seems to output less power.
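A rough sketch of that gang-up dynamic, with the probabilities and the coalition’s tolerance threshold invented for illustration: once Agent A’s first-strike capability passes the point where the others prefer a preemptive coalition attack, A’s expected outcome gets worse, not better.

```python
# Toy sketch of the gang-up effect; all numbers are made up for illustration.

def a_expected_outcome(first_strike_win_prob: float,
                       coalition_win_prob: float = 0.8) -> float:
    """Agent A's expected payoff: +1 dominate, -1 destroyed, 0 status quo."""
    others_attack = first_strike_win_prob > 0.6  # assumed tolerance of the others
    if others_attack:
        # Preemptive coalition strike: A loses with probability coalition_win_prob.
        return (1 - coalition_win_prob) * 1 + coalition_win_prob * -1
    # Below the threshold an uneasy peace holds and A keeps the status quo.
    return 0.0

for p in (0.3, 0.5, 0.7, 0.9):
    print(f"A's first-strike win prob={p:.1f} -> expected outcome {a_expected_outcome(p):+.2f}")
# Past the assumed threshold, "more capability" maps to a worse expected outcome:
# more information in, less power out.
```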
instead of conclusions like “Starting conditions x,y,z, lead to a singleton” you’ll get conclusions like “Starting conditions x,y,z, given assumptions a,b,c about the relation of information to power, lead to a singleton.” The second option is harder and less conclusive. But it is also less wrong.
Okay; good point. I would still want to gather the data, though, to compare to the model results.
One implication of the theory is that technological advancement can actually undermine an agent’s position in a multi-polar system. If Agent A develops an offensive weapon that guarantees victory if Agent A strikes first, then other agents are basically forced to attack preemptively and likely gang up on Agent A.
Tell that to the Iranians.