Is social theory our doom?
Complexity science, and especially the dream of deeply understanding social systems and building concrete predictive models (such as Asimov’s “psychohistory”), raises some slippery ethical questions. If we do develop this area of science and technology, allowing us to predict, and consequently also to purposefully leverage and manipulate, our society, who gets to be at the controls? Democracy can no longer work, even conceptually, because mass opinions become merely a reflection of these control algorithms. Individual and oligarchic leadership becomes similarly subject to such control, though perhaps with larger uncertainty: a single leader, unlike a large population, is not smoothed out by the law of large numbers and so is less predictable. Whoever is at the controls is also being controlled by the very same system, creating a feedback loop that is not so different from how our society already functions.
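As a side note on why crowds are more predictable than individuals (a standard statistical fact, stated here under the simplifying and admittedly unrealistic assumption that individual choices are independent with equal variance $\sigma^2$): the average of $N$ such choices satisfies

$$\operatorname{Var}\!\left(\frac{1}{N}\sum_{i=1}^{N} X_i\right) = \frac{\sigma^2}{N},$$

so aggregate behavior concentrates sharply around its expected value as $N$ grows, while any single ruler remains a sample of one.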
Let’s unpack this a bit. First, let’s admit that we already are, and always have been, strongly influenced by external factors. Our opinions are some fusion of cultural trends, mass-media influences, and family upbringing, all filtered through some inherent (genetic?) predisposition to care more about some issues than others. If we really try to dig deep, it becomes quite hard to find anything that can genuinely be identified as “my true individual opinion.” This is similarly true of a president’s opinions, as well as those of any dictator. In this sense, there can never be some “external independent free will that controls or influences the society”: such a will is always a product of that same society, and thus somehow reflects its needs and values.
One could argue that historically, this process of forming individuals’ opinions, and the subsequent exercise of the formed will, happens somehow “naturally” and is not consciously manipulated by anyone. Developing a predictive science that allows us to engineer social behaviors would force us to make conscious choices about how we use it. This basically amounts to someone having to take responsibility for choices that are presently left up to chance. And of course, with responsibility come all sorts of moral and ethical questions of how such choices “should be” made (cf. “with great power comes great responsibility”).
One specific point that bothers me is that all such reasoning comes from within our current paradigm of viewing the world: that of competition for scarce resources. From such a perspective, this technology is indeed seen as a powerful tool for manipulating societies for personal gain, and as a weapon for competing with other rulers. This view is not surprising, as manipulating each other is deeply embedded in our daily practice. Indeed, even when we smile at strangers on the street, most of the time we do it with the intention of manipulating them into liking us, in the hope of safeguarding ourselves from possible aggression.
However, the development of a scientific understanding of social behaviors and their consequences may start to shift this deeply ingrained paradigm itself. We may realize, with precise mathematical certainty, that the world is not a zero-sum game: rather than competing for a scarce resource, we can cooperate to create more of it. In this way, using such social-engineering technology to maximize personal gain may naturally lead to strategies that contribute to the common good as a side effect. We could thus see competition of any kind become fundamentally and objectively counterproductive, both for individuals and for nations.
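To make the “zero-sum” point concrete, here is a toy payoff table (the numbers are hypothetical, chosen only for illustration). In a zero-sum game the two players’ payoffs always sum to the same constant, so one side’s gain is necessarily the other’s loss. In the game below they do not:

$$\begin{array}{c|cc}
 & \text{cooperate} & \text{compete} \\ \hline
\text{cooperate} & (3,\,3) & (0,\,2) \\
\text{compete} & (2,\,0) & (1,\,1)
\end{array}$$

Mutual cooperation yields a total of 6, mutual competition only 2: the size of the “pie” itself depends on the strategies chosen, which is precisely what makes cooperation objectively preferable rather than merely virtuous.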
Just as our modern social norms teach us to maximize personal gains, the ability to manipulate these norms may force us to ask what it is that we really want to “maximize.” Lacking an externally imposed objective, we would have to face the hard problem of identifying our deeper personal motives for being alive. Thus, curiously, this power of control may ultimately guide us away from trying to control each other, showing us that the true challenge lies in identifying meaningful control objectives. What would you want, if you could have anything? Facing up to that challenge, we may discover that honesty and transparency, rather than manipulation, are the only tools effective at penetrating our own deepest desires.