Nice post. Some minor thoughts:
Are there historical precedents for this sort of thing? Arguably so: wildfires of strategic cognition sweeping through a nonprofit or corporation or university as office politics ramps up and factions start forming with strategic goals, competing with each other. Wildfires of strategic cognition sweeping through the brain of a college student who was nonagentic/aimless before but now has bought into some ambitious ideology like EA or communism. Wildfires of strategic cognition sweeping through a network of PCs as a virus hacks through, escalates permissions, etc.
I feel like none of these historical precedents is a perfect match. It might be valuable to think about the ways in which they are similar and different.
I used fire as an analogy for agents in my understanding agency sequence. I’m pleased to see you also found it helpful.
Early corporations, like the East India Company, might be a decent reference class?
This is maybe the most plausible one I’ve heard. There’s also empires in general, but they’re less plausible as examples. For one thing, I imagine they’re pretty biased towards being a certain way (something like, being set up to channel and aggregate violence) at the expense of achieving any particular goals.
I feel like none of these historical precedents is a perfect match. It might be valuable to think about the ways in which they are similar and different.

To me a central difference, suggested by the word “strategic”, is that the goal pursuit should be
1. unboundedly general, and
2. unboundedly ambitious.
By unboundedly ambitious I mean “has an unbounded ambit” (ambit = “the area went about in; the realm of wandering”, https://en.wiktionary.org/wiki/ambit#Etymology), i.e. its goals induce it to pursue unboundedly much control over the world.
By unboundedly general I mean that it’s universal for optimization channels. For any given channel through which one could optimize, it can learn or recruit understanding to optimize through that channel.
Humans are in a weird liminal state where we have high-ambition-appropriate things (namely, curiosity), but local changes in pre-theoretic “ambition” (e.g. EA, communism) are usually high-ambition-inappropriate (e.g. divesting from basic science in order to invest in military power or whatever).
Isn’t the college student example an example of 1 and 2? I’m thinking of e.g. students who become convinced of classical utilitarianism and then join some Effective Altruist club etc.
I don’t think so, not usually. What happens after they join the EA club? My observations are more consistent with people optimizing (or sometimes performing to appear as though they’re optimizing) through a fairly narrow set of channels. I mean, humans are in a weird liminal state, where we’re just smart enough to have some vague idea that we ought to be able to learn to think better, but not smart and focused enough to get very far with learning to think better. More obviously, there’s anti-interest in biological intelligence enhancement, rather than interest.
After people join EA they generally start applying the optimizer’s mindset to more things than they previously did, in my experience, and also tend to apply optimization towards altruistic impact in a bunch of places where previously they were optimizing for e.g. status or money or whatever.
What are you referring to with biological intelligence enhancement? Do you mean nootropics, or iterated embryo selection, or what?
That seems like a real thing, though I don’t know exactly what it is. I don’t think it’s either unboundedly general or unboundedly ambitious, though. (To be clear, this isn’t very strongly a critique of anyone; general optimization is really hard, because it’s asking you to explore a very rich space of channels, and acting with unbounded ambition is very fraught because of unilateralism and seeing like a state and creating conflict and so on.)

Another example: how many people have made a deep and empathetic exploration of why [people doing work that hastens AGI] are doing what they are doing? More than zero, I think, but very very few, and it’s a fairly obvious thing to do. It’s just weird and hard, and requires not thinking in only a culturally-rationalist-y way, and requires recursing a lot on difficulties (or so I suspect; I haven’t done it either).

I guess the overall point I’m trying to make here is that the phrase “wildfire of strategicness”, taken at face value, does fit some of your examples; but I’m also wanting to point at another thing, something like “the ultimate wildfire of strategicness”, which doesn’t “saw off the tree-limb that it climbed out on” the way empires do by harming their subjects, or social movements do by making their members unable to think for themselves.
What are you referring to with biological intelligence enhancement?

Well, anything that would have large effects. So, not any current nootropics AFAIK, but possibly hormones or other “turning a small key to activate a large/deep mechanism” things.
I’m skeptical that there would be any such small key to activate a large/deep mechanism. Can you give a plausibility argument for why there would be? Why wouldn’t we have evolved to have the key trigger naturally sometimes?
Re the main thread: I guess I agree that EAs aren’t completely, totally unboundedly ambitious, but they are certainly closer to that ideal than most people are, and than they themselves were before becoming EAs. Which is good enough to make them a useful case study, IMO.
I’m skeptical that there would be any such small key to activate a large/deep mechanism. Can you give a plausibility argument for why there would be?

Not really, because I don’t think it’s that likely to exist. There are other routes much more likely to work, though. It has a bit of plausibility to me, mainly because of the existence of hormones and, more generally, of genomic regulatory networks.
Why wouldn’t we have evolved to have the key trigger naturally sometimes?

We do; they’re active in childhood. I think.