LARPing the Veil of Ignorance: Someone told me yesterday that there is a group of people role-playing a medieval village each summer. They meet for a week; some of them play aristocrats, some are artisans, some are peasants. It must suck to be a peasant, I said. The answer was that the roles are chosen by lot. If you are unlucky and become a peasant, you just work in the fields for the week, but you don’t know that in advance. Which, of course, is the classic Rawlsian “veil of ignorance” thought experiment. And a repeated one at that!
If those people were dedicated to improving the societal system within the game, the thought experiment would become a real experiment. What would that be good for? At the very least it would highlight the shortcomings of the veil-of-ignorance setup—would people game it? And if so, how? But it may also work as a laboratory of governance systems. Whatever emerges in the laboratory can then be tried in a company, an NGO, or, say, a ministry department.
Did they mention any welfare/transfer mechanisms to ensure balance? Like, if you drew ‘peasant’ four summers in a row (which is not that improbable for at least someone in the group), no one would blame you for leaving then and there and maybe quitting the whole thing, but that seems like a bad outcome.
Unfortunately no, they didn’t. But observing exactly this kind of effect is what would make studying it from the point of view of political science interesting. (See Hirschmanian “exit”.)
Honestly, if the proportions of those roles were true to real life, I would simply never enter the lottery; being a peasant would be a near certainty. I guess they must have made things a bit friendlier.
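As a rough sanity check on the two probability claims above, here is a minimal sketch. The 60% peasant share, the 90% “historically realistic” share, and the group size are all assumptions invented for illustration; the group’s actual numbers weren’t reported.

```python
# Toy lottery arithmetic. The role split and group size are assumptions
# made up for illustration, not the group's real numbers.
p_peasant = 0.60      # assumed share of peasant roles in the draw
n_players = 30        # assumed number of participants
n_summers = 4

# Chance that one particular player draws "peasant" four summers in a row.
p_streak = p_peasant ** n_summers
print(f"one player, peasant 4 summers running: {p_streak:.1%}")   # ~13%

# Chance that at least one participant gets such a streak, treating draws
# as independent (this ignores the fixed quota of roles each summer).
p_somebody = 1 - (1 - p_streak) ** n_players
print(f"at least one of {n_players} players: {p_somebody:.1%}")   # ~98%

# With a historically realistic share of peasants (~90%, assumed), even a
# single draw is close to a sure thing, let alone a streak.
print(f"single draw at 90% peasants: {0.90:.0%}, "
      f"4-summer streak: {0.90 ** n_summers:.0%}")                # 90%, 66%
```

Under these made-up numbers a multi-summer peasant streak is almost certain to hit somebody in the group, which is why the question about transfer mechanisms above isn’t just a nitpick.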
One man’s singularity is another man’s Tuesday:
The Singularity [is] the future point at which artificial intelligence exceeds human intelligence, whereupon immediately thereafter (as the story goes) the machines make themselves rapidly smarter and smarter and smarter, reaching a superhuman level of intelligence that, stuck as we are in the mud of our limited mentation, we can’t fathom.
Stanford Encyclopedia of Philosophy
What’s easy to overlook in the above definition is that in the real world there’s no generic “human intelligence”, just the intelligence of individual human beings.
Not all of us are thus going to experience the singularity at the same time. Some of us will have to deal with it sooner, some later.
Technological singularity, in other words, isn’t an objective phenomenon. It’s a subjective thing. In reality, unlike in the simplistic model, it does not resemble the absolute, indisputable physical singularity at the center of a black hole. It is more like a black hole’s event horizon, an imaginary border, a point of no return, through which we pass, one at a time and often without even noticing.
Thinking about it in this way gives the discussion an empirical basis. We could ask: If the singularity is a subjective phenomenon, are there already people who have experienced it? Are there people for whom the world is already too fast-moving and too complex to follow? Are there people who, stuck in the mud of their limited mentation, as the Stanford Encyclopedia mercilessly puts it, can’t fathom what’s going on?
If so, we don’t have to guess what the post-singularity world will look like. We can just ask.
And yes, there are flat-earthers out there and there are conspiracy theorists of all flavours, so we definitely have something to work with...
And there seems to be a dilemma here:
Either you believe that a world that is too fast and too complex to follow is still somehow tractable—and in that case you should prove it by taking a flat-earther and helping them adopt a better model of the world...
Or you believe that changing their mind is impossible, and then you have to worry that once you cross the technological event horizon yourself, you will get lost too, that you will become just a high-IQ version of a conspiracy theorist.
Is your point that the world is already too complicated for all of us to understand, and the singularity will only make it more so? Like, quantitatively it can all get a lot weirder, but the qualitative point of “no one really understands what’s going on anymore” has passed long ago? (Or perhaps there never was such a moment when people understood how their world works.)
This is related to an idea I keep stressing here, which is that people rarely have consistent meta-level principles. Instead, they’ll endorse the meta-level principle that supports their object-level beliefs at any given moment. The example I keep giving is how when the federal government was anti-gay, conservatives talked about the pressing need for federal intervention and liberals insisted on states’ rights; when the federal government became pro-gay, liberals talked about the pressing need for federal intervention and conservatives insisted on states’ rights.
https://slatestarcodex.com/2016/02/20/writing-advice/
One encounters that kind of thing all the time, e.g. people trying to change the constitution to cause particular object-level changes.
But on the other hand, it feels like a useful political tool: Whoever is willing to sacrifice their object-level goals can achieve their meta-level goals instead. And given that meta-level changes are likely to have more profound long-term impact, it may be worth it.
Elaborating on the above example: if you are anti-gay but pro-states’-rights, all you have to do is wait until pro-gay people support strengthening the states at the expense of the federal government. At that point you can join forces with them and give more power to the states. It’ll hurt your object-level anti-gay agenda, but you achieve your meta-level agenda, which will keep paying off in 20 or 50 years, when the gay issue is probably no longer salient enough to care about.
If you reframe this as instrumental vs terminal goals, it’s obviously true. If you don’t care about the constitution per se, but only as a means to power and to enabling your policies, and your timeframe is much longer than your opposition, then it’s trivially useful to seek power now and use it over the long term.
But it’s not at all clear that these conditions hold for any humans in the real world. We don’t really have values or goals that are all that well-defined, and we like to think we’re more long-term-oriented than our opposition, but we’re mostly fooling ourselves.
I don’t think this works very well. If you wait until a major party sides with your meta, you could be waiting a long time. (E.g., when will 3-2-1 voting become a talking point on either side of a presidential election?) And, if you get what you were waiting for, you’re definitely not pulling sideways. That is: you’ll have a tough battle to fight, because there will be big opposition.
Thanks for the link. I’ve noticed the trend of avoiding the salient issues among those who actually get things done, but I haven’t had a name for it. Pulling the rope sideways—nice.
I don’t think this works very well. If you wait until a major party sides with your meta, you could be waiting a long time.
Correct. This could be countered by having multiple plans and waiting for several possible situations/alliances in parallel.
if you get what you were waiting for, you’re definitely not pulling sideways
Why? It’s known that people care a lot about object-level issues and little about meta-level ones (procedural stuff, e.g. the constitution). If you get what you want at the meta level, the voters won’t care, and politicians thus have little incentive to make it a partisan/salient issue.
Your example was waiting until states’ rights aligned with one side or the other of gay rights. So I understood that you were recommending a strategy where you wait until your pet meta issue is polarized. If you don’t do that, what is left of the strategy you were suggesting?
Either partisan politics sees a connection between states’ rights and gay rights, in which case you get your allies but you also inherit a bunch of enemies; or partisan politics fails to see the connection, in which case you don’t get a bunch of allies but you also don’t get enemies.
(Not directly relevant, but: my model is that partisan politics filters for issues which are polarizing, but of the available parties voters tend to choose the most centrist. To appeal to a party, an issue should strike at the other side, but to appeal to voters, you’re aiming for broad appeal.)
Let me try a different example:
Let’s say you are an opposition politician and your pet constitutional issue is to replace majority voting with proportional voting. You believe that FPTP has some genuinely detrimental consequences for society, and you are such a selfless person that you are willing to push for the change even against your own object-level interests.
The party currently in power loves majority voting. They love it, however, on the object level: it gives them far larger representation in the parliament than would otherwise be reasonable. 55% of voters vote for them, yet they get 80% of the MPs. They don’t care about the meta level and are not willing to sacrifice object-level interests for it.
The situation is stable for the time being. There’s no “political will” to enact proportional voting. So you wait.
At some point the voting patterns change and the ruling party suddenly faces defeat in the upcoming elections. Now they would do better with the proportional voting system.
They care only about the object level, that is, winning the election, and proportional voting is now as good a means to win as majority voting is.
You, on the other hand, care only about the meta level. You may lose the upcoming election if the proportional system is adopted, but you think it’s still worth it.
Suddenly the two parties are aligned: each side prefers the proportional system, albeit for different reasons. Proportional voting gets adopted.
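To make the flip in incentives concrete, here is a minimal sketch of the seat arithmetic. The district count, the uniform spread of district-level support, and the two vote shares are invented assumptions; the only point is that the ruling party’s preferred rule changes once its expected vote share drops below 50%.

```python
# Toy two-party seat model: first-past-the-post with single-member districts
# versus pure proportional representation. All numbers are illustrative
# assumptions, not data about any real electoral system.

def fptp_seats(vote_share: float, n_districts: int = 100, spread: float = 0.10) -> int:
    """Seats under FPTP, assuming the party's district-level support is
    spread evenly from (vote_share - spread) to (vote_share + spread)."""
    won = 0
    for i in range(n_districts):
        district_support = vote_share - spread + 2 * spread * i / (n_districts - 1)
        if district_support > 0.5:
            won += 1
    return won

def pr_seats(vote_share: float, n_seats: int = 100) -> int:
    """Seats under pure proportional representation."""
    return round(vote_share * n_seats)

for share in (0.55, 0.45):
    print(f"vote share {share:.0%}: FPTP {fptp_seats(share)} seats, "
          f"PR {pr_seats(share)} seats")
# vote share 55%: FPTP 75 seats, PR 55 seats
# vote share 45%: FPTP 25 seats, PR 45 seats
```

With these toy numbers the incumbents get 75 of 100 seats under FPTP while polling at 55%, so they defend it; once they expect 45%, FPTP would leave them with 25 seats while proportional representation gives them 45, and the meta-level reformer’s window opens.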
Are there any trade-offs that make you feel moral satisfaction?
Thinking about taboo trade-offs, e.g. the study where people felt outrage at a hospital administrator who decided not to save the life of a kid who needed expensive surgery, but rather to spend the money on running the hospital.
Isn’t it that any trade-off causes at least some dissatisfaction, which then naturally masquerades as moral outrage?
Isn’t it the case that anyone willing to publicly make a trade-off is going to be hit by a wave of moral outrage? On the other hand, someone who’s willing to promise the impossible, that is, who avoids the trade-off, will just make a few people slightly annoyed.