Thoughts about additional ramifications of this (not optimized much for readability).
Background on Epistemic Effort:
I’m of the belief that if I’m proposing a major idea that I’m hoping people will take action on, I should think seriously about the idea for… N minutes. N varies. But the key is to look into the dark, accounting for positive bias: What are the ways an idea might not succeed? What consequences might it have that didn’t fit as prettily into the narrative I was crafting?
In my personal experience, this takes something like 30 minutes at least. In my original Epistemic Effort post I suggested 5 minutes, but I’ve found 15 minutes is barely enough to finish searching through the existing thoughts already in the back of my mind. 30 minutes is how long it takes to get started trying to think multiple steps into the future and generate novel concerns.
This process is somehow very different from the process that generates my original blogpost.
I’ve noticed that now that I know it takes at least 30 minutes, I’m a lot more hesitant to even try to take 5. (I almost went ahead and posted this without doing so, planning to just flag it “didn’t think for 5 minutes about how it might fail.” But that felt embarrassing, and the idea seemed important enough that I went ahead and did the thinking. Still, this hesitance might bode poorly for the idea.)
“What About the Babyeaters?”
A failure mode of canonical Archipelago-ism is altruism + “think of the children.” If The Other Group is focused on something actively harmful, and you don’t trust people to be able to leave, or you think that the harms take root in childhood before people even have a chance to choose a civilization for themselves (say, secondhand smoke in the privacy of people’s own homes, or enforcing strict gender norms from early childhood)...
...then the “to each their own” concept advocated here doesn’t sound as persuasive.
Social Subculture Archipelagism faces a variant of this issue. Even if there are no literal children, you may worry about pernicious, harmful ideas taking root that you expect to be memetically successful even though they are dangerous.
It’s very conceivable to me that the outcome of a “successful” Archipelagism taking root would be Bad Ideas Winning and the whole thing ending up net negative.
My current take is that some kind of “Bad Ideas Being Successful” thing is likely to happen, but that the overall Net Harm/Good will be positive. I don’t really have a justification for that. Just a feeling.
Observations of what’s happened so far...
Competing Norms
During the Hufflepuff Unconference, I ran into issues of how norms collide. I wanted people to either firmly commit to coming, or to not come. This failed in a few ways:
I held the event in a public space known to the surrounding community, which meant people with no idea of the norms ended up coming regardless.
Some people who were scrupulous and recognized that they couldn’t follow the rules chose not to come. Some people who didn’t care about respecting the rules just came anyway, creating a mild asshole filter. (I only have evidence of this affecting maybe 2-4 people total in either direction, but it was noticeable.)
People who earnestly wanted to come and follow the rules ran into an issue where other people who weren’t firmly committing to things prevented them from making a firm commitment. (E.g., someone had a long-lost friend visiting for the week, and the friend wasn’t sure of their schedule. Person-A definitely wanted to make time for their friend if they could, but definitely wanted to come to the Unconference if their friend would be busy.)
This last issue eventually resulted in my changing the rules to “please respond with an explicit estimate of how likely you are to come, and some of the decision-relevant things that might affect whether you come or not.” I think this worked better.
I don’t have evidence that this is that big a problem (I tried one experiment, it didn’t work as well as I wanted, I came up with a solution. System working as intended). But it implies future issues that one might not foresee.
It’s Hard To Make Spaces
I have attempted to create a few spaces (at different times, in different cities). In general, it’s harder to create a new space dedicated to a particular thing than I’d have thought (in particular, finding enough people who care about a thing to seriously try out novel norms). In New York, it was hard because there weren’t that many people. In the Bay Area, it’s been harder than I expected because although there ARE enough people to flesh out subcultures, those people have more things competing for their attention.
I (currently) expect to be able to make things happen, but it won’t be as easy as hanging out a shingle. 45 people came to the Hufflepuff Unconference, but I spent 2 months and several blogposts hyping that up. (More recently, I tried to get an Epistemic Unconference happening that’d have a different set of norms, and I couldn’t get a critical mass of people interested. I didn’t try very hard—it’s Solstice Season and I need to conserve my “hey everyone let’s all do an effortful thing!” energy for that. But this clarified the degree of difficulty I’d have attracting interest in things.)
I expect to have an easier-than-average time getting people interested in things, and for it to still require a couple of enthusiasm-driving blogposts per individual thing.
So with that in mind...
Predictions and Gears-Thinking
(IMO this is the hard part of a “think seriously for 30 minutes” thing. This will be the most stream-of-consciousness-y of the sections.)
First, I guess my most obvious prediction is “doing this at all is harder than I was hoping, and barely anything happens.”
Further predictions are sort of weird, since the act of saying some of them out loud might make them come true. (It occurs to me I could secretly make predictions and see if anything happens in a year. I may do that but am not doing it yet.)
I notice that the default way my brain is attempting to generate thoughts here feels like the Social Modesty Module running.
The second thing my brain’s doing is listing the things I hope will happen, and then seeing how my internal-surprise-o-meter feels about them.
The third thing, which I will actually record here, is listing things that seem like they might happen—things I want to happen or am afraid might happen—without listing my particular predictions yet, but at least getting the predictable-in-theory ideas out there:
How many people will actually attempt to start a subgroup or change norms at one they already control as a result of this blogpost?
How many people will end up involved with those subgroups?
How many groups will happen secretly or privately? How many public?
How many will try experiments past the Valley of Discomfort?
In a year, and in 5 years, how many people will feel that those subgroups were useful?
How many novel social norms will be developed?
How many times do I expect that I’ll be surprised by something that happens as a result of this blogpost?
How many times do I expect that I’ll be confused by something that happens as a result of this blogpost?
How many attempted social norms will clash in actively bad ways?
Will I end up regretting this blogpost? (Separate questions for “will I think it turned out not to work but was still the right thing to push for at the time” and “will I think, in principle, that I should have spent my time and social capital doing something else?”)
Will people end up more socially isolated, less, or neutral, as a result of this class of experiment?
(Huh, result of this: “generate hypotheses you can test without stressing about actually deciding on your predictions” was a surprisingly useful technique—I notice with several of the above that I have at least some intuitive sense of how it will play out, and with others, I notice I expect things to fail by default, but that I immediately see ways to make them less failure-prone, if I choose to spend the time doing so.)