I write and talk about game theory, moral philosophy, ethical economics and artificial intelligence—focused on non-zero-sum games and their importance in solving the world’s problems.
I have admitted I am wrong at least 10 times on the internet.
I agree, it’s not coming across well at present and needs a rewrite; give me a couple of weeks :)
I take your point, I think it needs a rewrite. I have not been nearly clear enough, and your notes are helpful in pointing me to areas I need to clarify. I have replies to your points here, but I should get my ducks in a row before making them, so I don’t end up contradicting myself. Thanks for your comment.
Thanks Adele,
I appreciate your comment, and will take some time to process it and read the links. This is definitely not an area I have any expertise in and I’m not meaning to propose that this is how gravity actually works in reality—it’s more an illustration that something gravity-like, and elements that are like atoms or systems etc can arise out of very simple and random rules without the need for fine-tuning, and that constants (or regularities) can be arrived at by means of natural equilibria rather than being lucked upon, or designed.
But I probably haven’t made this clear. It was something I actually wrote a while ago and have only recently published here, so it may require a rewrite, clarifying my intention and incorporating the points you’ve raised. You’re the first to provide a rigorous rebuttal for it so far, so I appreciate you lending your expertise in this respect.
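To make the “natural equilibria” point a little more concrete, here is a minimal toy sketch (my own illustration, not the model from my original post): particles take purely random steps plus a weak, gravity-like pull toward their centre of mass, and the cluster settles at roughly the same characteristic radius regardless of the random starting positions. The `noise` and `pull` parameters and their values are arbitrary assumptions for illustration, not constants that have been tuned to a target.

```python
# Toy sketch only: a regularity (the cluster's settled radius) arising from an
# equilibrium between two simple rules -- random diffusion and a weak pull
# toward the centre of mass -- rather than being designed in.
import random
import math

def simulate(n_particles=100, steps=2000, noise=1.0, pull=0.01, seed=None):
    rng = random.Random(seed)
    xs = [rng.uniform(-100, 100) for _ in range(n_particles)]
    ys = [rng.uniform(-100, 100) for _ in range(n_particles)]
    for _ in range(steps):
        cx = sum(xs) / n_particles
        cy = sum(ys) / n_particles
        for i in range(n_particles):
            # random step plus a weak attraction toward the group's centre
            xs[i] += rng.gauss(0, noise) + pull * (cx - xs[i])
            ys[i] += rng.gauss(0, noise) + pull * (cy - ys[i])
    # average distance from the centre of mass after settling
    cx = sum(xs) / n_particles
    cy = sum(ys) / n_particles
    return sum(math.hypot(x - cx, y - cy) for x, y in zip(xs, ys)) / n_particles

# Different random starting configurations converge on roughly the same radius.
print([round(simulate(seed=s), 1) for s in range(3)])
```

The settled radius isn’t specified anywhere in the rules; it falls out of the balance between the two of them, which is the sense in which I mean a constant can be “arrived at” rather than lucked upon or designed.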
Ah, yes now you’ve jogged my memory about all the attempted expansionism in between. You make a solid case that they didn’t step outside of the expand-or-die dichotomy willingly.
The point I’m trying to make is that a third option (mutually beneficial trade and cooperation) was there (perhaps it wasn’t feasible before WWII, I’m not sure), and it ended up sustaining Japan from WWII to the present without the need for expansion.
The point of the post is that there is often a third option outside of expand-or-die, and it’s worth questioning what that could be in any given problem. But thanks for all the very good points—I absolutely agree with you that there have been civilisations that have had to, or have seemed to have to, expand in order to survive. Thanks for your well-considered points, and the spot-on history (apologies for the patchiness of mine).
I take your point, it requires everyone to behave themselves (I’m actually familiar with this history, I went down a Japanese history rabbit hole about this time last year, fascinating). But if we continue with Japan, we find that thanks to a third option of trading with other nations (beginning with the Meiji Restoration, I think...) Japan has continued to operate as a sovereign nation without the need to expand (with the exception of its ill-fated and frankly bonkers attempt to expand in WWII...).
So, again, it’s good to look for answers outside the paradigm of expand-or-die. Cooperation and trade are non-zero-sum options that are available in the messy, and therefore less theoretically bound, real world, as opposed to a formal game theory scenario.
But I think the expansionist trap you describe is a real thing and an important cautionary tale, which could perhaps be applied to our modern perpetual-growth model of economics and its attendant consumerism (here I am sounding like a first-year sociology major).
True, though there are many examples of conquerors who expanded for the sake of an expansionist philosophy or glory: Alexander the Great, the Mongols, the Assyrians, the Crusades… off the top of my head. The Germans in WWII definitely justified expansion for the sake of living space (Lebensraum), so there are examples of expansion at least being justified in the way you mention. And of course colonialism was justified in the same way.
I think what you’re saying is logical, but the example, being metaphorical, is more to illustrate that we should question critically what it is we actually want before conceding a price to pay for it. As you say, it might be necessary, but it also might not.
Humans profess to care about everyone a lot more than they really do, because doing that (and even thinking that) is strategically useful.
A bit bleak… but yes, your logic checks out, which is why coordination problems are so sticky (I did sort of claim to solve the problem, didn’t I? Oops, back to the drawing board).
I love this silly side of Yudkowsky.
Thanks for your comment, I appreciate your points, and I see that Yudkowsky accepts some use of higher-level abstractions as a pragmatic tool that is not erased by reductionism. But I still feel like you’re being a bit too charitable. I re-read the “it’s okay to use ‘emerge’” parts several times, and as I understand it, he’s not meaning to refer to a higher-level abstraction; he’s using it in the general sense of “whatever byproduct comes from this”, in which case it would be just as meaningful to say “heat emerges from the body”, which does not reflect any definition of emergence as a higher-level abstraction. I think the issue comes into focus with your final point:
The phrase “intelligence is emergent” as what intelligence is doesn’t predict anything and is a blank phrase, this is what he was opposed to.
But it is not correct to say that acknowledging intelligence as emergent doesn’t help us predict anything. If emergence can be described as a pattern that happens across different realms, then it can help to predict things through the use of analogy. If, for instance, we can see that neurones are selected and strengthened based on use, we can transfer some of our knowledge about natural selection in biological evolution to provide fruitful questions to ask, and research to do, on neural evolution. If we understand that an emergent system has reached equilibrium, it can help us to ask useful questions about what new systems might emerge on top of that system, questions we might not otherwise ask if we did not recognise the shared pattern.
A question I often ask myself is: “If the world itself is to become increasingly organised, at some point do we cease to be autonomous entities on a floating rock, and become instead like automatic cells within a new vector of autonomy (the planet as super-organism)?” This question only comes about if we acknowledge that the world itself is subject to the same sorts of emergent processes that humans and other animals are (although not exactly; a planet doesn’t have much of a social life, and that could be essential to autonomy). I find these predictions based on principles of emergence interesting and potentially consequential.
Sorry about my lack of clarity: By “complex” I mean “intricately ordered” rather than the simple disorder generally expected of an entropic process. To taboo both this and alignment as “following the same pattern as”:
I’d like to make the case that emergent complexity is where…
1. a whole system is more intricately ordered than the sum of its parts
2. a system follows more closely the pattern of a macroscopic phenomenon than it follows the pattern of any of its component parts.
By a macroscopic phenomenon, I mean any (or all) of the following:
1. Another physical feature of the world which it fits to, like roads aligning with a map and its terrain (and obstacles).
2. Another instance of what appears to fulfil a similar purpose despite entirely different paths to get there or materials (like with convergence)
3. A conceptual feature of the world, like a purpose or function.
So, we can more readily understand an emergent phenomenon in relation to some other macroscopic phenomenon than we can by merely inspecting its component parts in isolation. In other words, there is usefulness in identifying the 20+ varieties of eyes as “eyes” (2) even though they are not at all the same on a cellular level. It is also meaningful to understand that they perform a function or purpose (3), and that they fit the physical world (by reflecting it relatively accurately) (1).
This is an error I see people making over and over… That different theory may be a useful new development! But that is what it is, not a defence of the original theory.
I think this is the crux of our disagreement. Yudkowsky was denying the usefulness of a term entirely because some people use it vaguely. I am trying to provide a less vague and more useful definition of the term—not to say Yudkowsky is unjustified in criticising the use of the term, but that he is unjustified in writing it off completely because of some superficial flaws in presentation, or some unrefined aspects of the concept.
An error that I see happening often is throwing out the baby with the bathwater, and I’ve read people on Less Wrong (even Yudkowsky, I think, though I can’t remember where, sorry) write in support of ideas like “Error Correction” as a virtue, and of Bayesian updating, whereby we take criticisms as an opportunity to refine a concept rather than writing it off completely.
I am trying to take part in that process, and I think Yudkowsky would have been better served had he done the same and suggested a better, more useful definition.
Thanks for your comment, but I think it misses the mark somewhat.
While googling to find someone who expresses a straw-man position in the real world is a form of straw-manning itself, this comment goes further by misrepresenting a colloquial use of the word “magical” to mean literal (supernatural) “magic”.
While I haven’t read the book referenced, the quotes provided do not give enough context to claim that the author doesn’t mean what he obviously means (to me at least): that the development of an emergent phenomenon seems magical… does it not seem magical? Seeming magical is not a claim that something is not reducible to its component parts; it just means it’s not immediately reducible without some thorough investigation into the mechanisms at work. Part and parcel of the definition of emergence is that it is a non-magical (bottom-up) way of understanding phenomena that seem remarkable (magical), which is why he uses a clearly non-supernatural system like an anthill to illustrate it.
Despite all this, the purpose of the post was to give a clear definition of emergence that doesn’t fall into Yudkowsky’s strawman, not to claim that no one has ever used the word loosely in the past. As conceded in the preamble (paraphrasing), I don’t expect something written 18 years ago to perfectly reflect the conceptual landscape of today.
Thanks, and yes, I did scan over the comments when I first read the article, and noted many good points, but when I decided to write I wanted to focus on this particular angle and not get lost in an encyclopaedia of defences. I’m very much in the same camp as the first comment you quote.
I appreciate your take on Yudkowsky’s overreach, and the historical context. That helps me understand his position better.
The semantic stop-sign is interesting, I do appreciate Yudkowsky coming up with these handy handles for ideas that often crop up in discussion. Your two examples make me think of the fallacy of composition, in that emergence seems to be a key feature of reality that, at least in part, makes the fallacy of composition a fallacy.
Thanks for your well-considered comment.
Could you explain what exactly you mean by “complex” here?
So, here I’m just stating the requirement that the system adds complexity, and that it is not merely categorically different. Heat, for instance, could be seen as categorically different to the process it “emerged” from, but it would not qualify as “emergent” because it is clearly entropic, reducing complexity. An immune system, by contrast, is built on top of an organism’s complexity; it is a more complex system because it includes all the complexity of the system it emerged from plus its own complexity (or, to use your code example, all the base code plus the new branch).
The second part is more important to my particular way of understanding emergence.
What does “aligned” mean in this context?
I think I could potentially make this clearer, as it seems “alignment” comes with a lot of baggage and has been worn out in general (vague) usage, making its correct usage seem obscure and difficult to place. By “aligned with” I mean not merely “related to” but “following the same pattern as”; that pattern might be a function it plays, or a physical or conceptual shape that is similar. So, the slime mold and the Tokyo rail system share a similar shape; they have converged on a similar outcome because they are aligned with a similar pattern (efficiency of transport given a particular map).
Cells that a toe consists of are different than cells that a testicle or an eye consist of.
I think we’re in agreement here. My point is that the eye or testicle performs a (macroscopic) function, and the cells they are made of are less important than that function. Of the 20+ different varieties of eyes, none are made of the same cells, but it still makes sense to call them eyes because they align with the function; eyes are essentially cell-agnostic, as long as they converge on a function.
Again, thanks for the response, I’ll try to think of some edits that help make these aspects clearer in the text.
Thanks Jonas, that’s really nice of you to say, and a great suggestion. I’ve had a look at doing sequences here. Now that I have more content, I’ll take your request and run with it.
For now, over on the site I have the posts broken up into curated categories that work as rudimentary sequences, if you’d like to check them out. Appreciate your feedback!
Thanks? (Does that mean it’s well structured?) You’re the second person to have said this. The illustrations are original, as is all the writing.
As I mentioned to the other person who raised this concern, the blog I write (the source) is an outlet for my own ideas, using chat would sort of defeat the purpose.
I can assure you that the words and images are all original; I’m quite capable of vagueifying something myself. I don’t have a content quota to meet, I’m just trying to present ideas I’ve had, so it would be quite antithetical to the project to get chat to write it.
By “aligned” I’m not meaning “related to”, I mean “maps to the same conceptual shape”, “correlated with” or “analogous to”. So the nutrient pathways of slime molds are aligned with the Tokyo rail system, but they are not related (other than by sharing an alignment with a pattern). Whereas peanut butter is related to toast, but it’s not aligned with it.
But I appreciate the feedback; if you’re able to point to something specifically that’s vague, I’ll definitely get in there and tighten it up.
The “Soldier Mindset” flag is a fair enough call; I guess this could be seen as persuasion (a no-no). Perhaps I would rather frame it as bypassing emotions (that are acting as barriers to understanding) in order to connect. To correctly understand the other person’s position, or core beliefs, you actually have to let go of your own biases, and in the process you might actually become more open to their position.
An idea I’m workshopping that occurred to me while developing the Contagious Beliefs Simulation.
Cognitive Bias Is a Feature, Not a Bug:
Understanding that cognitive bias is a feature, not a bug, is key to negotiation and changing minds. I find that in arguments I only really convince someone by relating my case to the values they find important. Sometimes those are the same as mine, which makes it easy; if they are clearly different, I try to understand their core values. Sometimes people will reject this approach, posturing as an objective rational agent, at which point I treat “rationality” as their cognitive bias, because we are not rational agents. We are irrational agents driven by desires over which we have no control, and for which the goal is not truth but the reduction of mental tension and uncertainty, social acceptance, and cognitive coherence: a measure of how well new information aligns with our current knowledge and views (the opposite of cognitive dissonance).
In this way, bias is deleterious, so why has it survived natural selection? Because it is highly adaptive and cognitively efficient (cheap). And when we think about it, if we discount the existence of a designer, it’s also logically impossible for it to be otherwise: unless knowledge was hardwired (like imprinting instincts in animals), how else would we get it? Hardwired knowledge would be entirely inflexible, making us capable but not intelligent, like a dog’s supremely powerful nose that it uses to sniff other dogs’ butts. It is our ability to use previous knowledge to assess and adopt or reject incoming information that is the core mechanism of intelligence.
So, when you’re trying to convince someone of something and they don’t already agree with you, they might have some important previous knowledge that you need to help them square with the new info.
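Here is a toy sketch of the acceptance rule I’m describing (my own illustration, not the actual Contagious Beliefs Simulation): an agent adopts incoming claims with a probability that rises with cognitive coherence, i.e. how well the claim fits what it already believes, rather than with the claim’s truth. All names, numbers, and the `openness` parameter are arbitrary assumptions for illustration.

```python
# Toy model only: acceptance of new claims driven by coherence with existing
# beliefs, not by truth. Stances are numbers in [-1, 1] per position.
import random

def coherence(claim, beliefs):
    """Average agreement between a claim and current beliefs on shared positions."""
    shared = [p for p in claim if p in beliefs]
    if not shared:
        return 0.0
    return sum(1 - abs(claim[p] - beliefs[p]) / 2 for p in shared) / len(shared)

def maybe_adopt(claim, beliefs, openness=0.1, rng=random):
    """Adopt the claim with probability rising with coherence, then shift beliefs."""
    p_adopt = openness + (1 - openness) * coherence(claim, beliefs)
    if rng.random() < p_adopt:
        for position, stance in claim.items():
            # move partway toward the claim rather than overwriting outright
            beliefs[position] = beliefs.get(position, 0.0) * 0.5 + stance * 0.5
        return True
    return False

agent = {"markets_are_fair": 0.8, "tax_is_theft": 0.6}
print(maybe_adopt({"tax_is_theft": 0.7, "welfare_is_waste": 0.5}, agent))  # coherent: very likely adopted
print(maybe_adopt({"tax_is_theft": -0.9}, agent))                          # dissonant: usually rejected
print(agent)
```

The point of the sketch is just that an agent like this can be perfectly consistent and still systematically reject true information, which is why relating your case to the other person’s existing values works better than appealing to “rationality”.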
You’re right, but the term teme covers much more than that; for instance, it’s also relevant to the development of AI agents and AI self-editing / self-improvement. That said, identifying these systems as virus-like (because of the replication mechanism) might be instructive (as a red flag).