The Cluster Structure of Thingspace
The notion of a “configuration space” is a way of translating object descriptions into object positions. It may seem like blue is “closer” to blue-green than to red, but how much closer? It’s hard to answer that question by just staring at the colors. But it helps to know that the (proportional) color coordinates in RGB are 0:0:5, 0:3:2 and 5:0:0. It would be even clearer if plotted on a 3D graph.
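To make "closer" literal, here is a minimal Python sketch using the proportional coordinates above; the Euclidean metric is just one simple choice of distance:

```python
import math

# Proportional RGB coordinates from the text, on a 0-5 scale.
colors = {
    "blue":       (0, 0, 5),
    "blue-green": (0, 3, 2),
    "red":        (5, 0, 0),
}

def distance(a, b):
    """Straight-line (Euclidean) distance between two color points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

d_blue_bluegreen = distance(colors["blue"], colors["blue-green"])  # ~4.24
d_blue_red = distance(colors["blue"], colors["red"])               # ~7.07
```

Once the colors are positions, "blue is closer to blue-green than to red" stops being a vague intuition and becomes an arithmetic fact, by about a factor of 1.7 here.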
In the same way, you can see a robin as a robin—brown tail, red breast, standard robin shape, maximum flying speed when unladen, its species-typical DNA and individual alleles. Or you could see a robin as a single point in a configuration space whose dimensions described everything we knew, or could know, about the robin.
A robin is bigger than a virus, and smaller than an aircraft carrier—that might be the “volume” dimension. Likewise a robin weighs more than a hydrogen atom, and less than a galaxy; that might be the “mass” dimension. Different robins will have strong correlations between “volume” and “mass”, so the robin-points will be lined up in a fairly linear string, in those two dimensions—but the correlation won’t be exact, so we do need two separate dimensions.
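The fuzzy linear string of robin-points can be simulated with made-up numbers (the density and noise figures below are purely illustrative):

```python
import random

random.seed(0)

# Invented robin measurements: volume (cm^3) and mass (g) are strongly but
# not perfectly correlated, so the points line up in a fuzzy linear string.
volumes = [random.gauss(90.0, 10.0) for _ in range(200)]
masses = [1.05 * v + random.gauss(0.0, 3.0) for v in volumes]

# Pearson correlation, computed by hand to stay dependency-free.
mean_v = sum(volumes) / len(volumes)
mean_m = sum(masses) / len(masses)
cov = sum((v - mean_v) * (m - mean_m) for v, m in zip(volumes, masses))
r = cov / (sum((v - mean_v) ** 2 for v in volumes)
           * sum((m - mean_m) ** 2 for m in masses)) ** 0.5
```

The correlation comes out high but below 1, which is exactly the point: you can mostly predict mass from volume, but not perfectly, so the two dimensions are both needed.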
This is the benefit of viewing robins as points in space: You couldn’t see the linear lineup as easily if you were just imagining the robins as cute little wing-flapping creatures.
A robin’s DNA is a highly multidimensional variable, but you can still think of it as part of a robin’s location in thingspace—millions of quaternary coordinates, one coordinate for each DNA base—or perhaps some more sophisticated encoding. The shape of the robin, and its color (surface reflectance), you can likewise think of as part of the robin’s position in thingspace, even though they aren’t single dimensions.
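One crude sketch of the quaternary-coordinate view (the sequences below are invented, and real genome comparison needs far subtler similarity measures):

```python
# Each DNA base becomes a quaternary digit; similarity between two
# same-length sequences is just the fraction of positions that agree.
ENCODING = {"A": 0, "C": 1, "G": 2, "T": 3}

def coordinates(seq):
    """Turn a DNA string into a list of quaternary coordinates."""
    return [ENCODING[base] for base in seq]

def matching_fraction(a, b):
    """Fraction of coordinates on which two equal-length points agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

robin1 = coordinates("ACGTACGTAC")
robin2 = coordinates("ACGTACGTTC")  # differs at one base

similarity = matching_fraction(robin1, robin2)  # 0.9
```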
Just like the coordinate point 0:0:5 contains the same information as the actual HTML color blue, we shouldn’t actually lose information when we see robins as points in space. We believe the same statement about the robin’s mass whether we visualize a robin balancing the scales opposite a 0.07-kilogram weight, or a robin-point with a mass-coordinate of +70.
We can even imagine a configuration space with one or more dimensions for every distinct characteristic of an object, so that the position of an object’s point in this space corresponds to all the information in the real object itself. Rather redundantly represented, too—dimensions would include the mass, the volume, and the density.
If you think that’s extravagant, quantum physicists use an infinite-dimensional configuration space, and a single point in that space describes the location of every particle in the universe. So we’re actually being comparatively conservative in our visualization of thingspace—a point in thingspace describes just one object, not the entire universe.
If we’re not sure of the robin’s exact mass and volume, then we can think of a little cloud in thingspace, a volume of uncertainty, within which the robin might be. The density of the cloud is the density of our belief that the robin has that particular mass and volume. If you’re more sure of the robin’s density than of its mass and volume, your probability-cloud will be highly concentrated in the density dimension, and concentrated around a slanting line in the subspace of mass/volume. (Indeed, the cloud here is actually a surface, because of the relation VD = M.)
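A quick simulation of that slanting cloud, with invented numbers: if density is pinned down tightly, every sampled (volume, mass) point hugs the surface M = V × D.

```python
import random

random.seed(1)

# Invented figures: density is known tightly (sigma = 0.01 g/cm^3),
# volume is quite uncertain, and mass then follows from M = V * D.
D_MEAN, D_SIGMA = 1.05, 0.01
samples = []
for _ in range(1000):
    v = random.gauss(90.0, 15.0)       # uncertain volume, cm^3
    d = random.gauss(D_MEAN, D_SIGMA)  # well-pinned-down density
    samples.append((v, v * d))         # mass in grams, via M = V * D

# Every sampled (volume, mass) point lies near the slanting line m = D_MEAN * v.
max_relative_off = max(abs(m - D_MEAN * v) / m for v, m in samples)
```

The cloud is wide along the volume axis but razor-thin perpendicular to the slanting line, which is what "concentrated around a slanting line in the subspace of mass/volume" cashes out to.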
“Radial categories” are how cognitive psychologists describe the non-Aristotelian boundaries of words. The central “mother” conceives her child, gives birth to it, and supports it. Is an egg donor who never sees her child a mother? She is the “genetic mother”. What about a woman who is implanted with a foreign embryo and bears it to term? She is a “surrogate mother”. And the woman who raises a child that isn’t hers genetically? Why, she’s an “adoptive mother”. The Aristotelian syllogism would run, “Humans have ten fingers, Fred has nine fingers, therefore Fred is not a human” but the way we actually think is “Humans have ten fingers, Fred is a human, therefore Fred is a ‘nine-fingered human’.”
We can think about the radial-ness of categories in intensional terms, as described above—properties that are usually present, but optionally absent. If we thought about the intension of the word “mother”, it might be like a distributed glow in thingspace, a glow whose intensity matches the degree to which that volume of thingspace matches the category “mother”. The glow is concentrated in the center of genetics and birth and child-raising; the volume of egg donors would also glow, but less brightly.
Or we can think about the radial-ness of categories extensionally. Suppose we mapped all the birds in the world into thingspace, using a distance metric that corresponds as well as possible to perceived similarity in humans: A robin is more similar to another robin, than either is similar to a pigeon, but robins and pigeons are all more similar to each other than either is to a penguin, etcetera.
Then the center of all birdness would be densely populated by many neighboring tight clusters, robins and sparrows and canaries and pigeons and many other species. Eagles and falcons and other large predatory birds would occupy a nearby cluster. Penguins would be in a more distant cluster, and likewise chickens and ostriches.
The result might look, indeed, something like an astronomical cluster: many galaxies orbiting the center, and a few outliers.
Or we could think simultaneously about both the intension of the cognitive category “bird”, and its extension in real-world birds: The central clusters of robins and sparrows glowing brightly with highly typical birdness; satellite clusters of ostriches and penguins glowing more dimly with atypical birdness, and Abraham Lincoln a few megaparsecs away and glowing not at all.
I prefer that last visualization—the glowing points—because as I see it, the structure of the cognitive intension followed from the extensional cluster structure. First came the structure-in-the-world, the empirical distribution of birds over thingspace; then, by observing it, we formed a category whose intensional glow roughly overlays this structure.
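The "glow" metaphor maps naturally onto kernel density estimation: put a little Gaussian bump on each observed point, and the summed brightness at any location is how typical that location is. A one-dimensional cartoon, with all coordinates invented:

```python
import math

# Extensional cluster: observed birds as points on a single made-up
# "birdiness-relevant" axis (real thingspace has many dimensions).
observed = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05,  # robins, sparrows, canaries...
            1.3, 1.4,                         # eagles, falcons
            2.5, 2.7]                         # penguins, ostriches

def glow(x, bandwidth=0.3):
    """Summed Gaussian-bump brightness: the intensional glow at point x."""
    return sum(math.exp(-((x - xi) / bandwidth) ** 2 / 2) for xi in observed)

glow_robin = glow(1.0)      # bright: dense central cluster
glow_penguin = glow(2.6)    # dimmer: small satellite cluster
glow_lincoln = glow(50.0)   # a few megaparsecs away: no glow at all
```

The intensional glow here is literally computed from the extensional distribution, which is the direction of explanation the paragraph above argues for.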
This gives us yet another view of why words are not Aristotelian classes: the empirical clustered structure of the real universe is not so crystalline. A natural cluster, a group of things highly similar to each other, may have no set of necessary and sufficient properties—no set of characteristics that all group members have, and no non-members have.
But even if a category is irrecoverably blurry and bumpy, there’s no need to panic. I would not object if someone said that birds are “feathered flying things”. But penguins don’t fly!—well, fine. The usual rule has an exception; it’s not the end of the world. Definitions can’t be expected to exactly match the empirical structure of thingspace in any event, because the map is smaller and much less complicated than the territory. The point of the definition “feathered flying things” is to lead the listener to the bird cluster, not to give a total description of every existing bird down to the molecular level.
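The "feathered flying things" definition and its penguin exception can be written out directly as a toy sketch; the rule misses one member of the cluster and still does its job of pointing there:

```python
# A simple intensional rule versus the empirical cluster it points at.
# Cluster membership ("bird") is taken as given; features are the intension.
animals = {
    "robin":   {"feathers": True,  "flies": True,  "bird": True},
    "pigeon":  {"feathers": True,  "flies": True,  "bird": True},
    "penguin": {"feathers": True,  "flies": False, "bird": True},   # the exception
    "bat":     {"feathers": False, "flies": True,  "bird": False},
    "cat":     {"feathers": False, "flies": False, "bird": False},
}

def rule(a):
    """'Birds are feathered flying things.'"""
    return a["feathers"] and a["flies"]

# Where does the simple rule disagree with the actual cluster?
exceptions = [name for name, a in animals.items() if rule(a) != a["bird"]]
```

One exception out of five cases: the map is smaller than the territory, yet still leads the listener to the right cluster.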
When you draw a boundary around a group of extensional points empirically clustered in thingspace, you may find at least one exception to every simple intensional rule you can invent.
But if a definition works well enough in practice to point out the intended empirical cluster, objecting to it may justly be called “nitpicking”.
“But if a definition works well enough in practice to point out the intended empirical cluster, objecting to it may justly be called ‘nitpicking’.”
You should probably put in a disclaimer excepting mathematics from this—assuming that you agree it should be excepted. (That is, assuming you agree that “Aristotelian” precision—what mathematicians call “rigor”—is appropriate in mathematics.)
“Definition” has a different definition in math.
Mathematics is largely already excepted from the above discussion—this post is talking about empirical clusters only (“When you draw a boundary around a group of extensional points empirically clustered in thingspace”), and mathematics largely operates on a priori truths derived from axioms. For example, no one needs to do a study of triangles to see whether their angles do, indeed, all add up to 180 degrees—while that’s not part of the definition of a triangle, it follows from the other definitions and axioms.
What’s interesting about “Thingspace” (I sometimes call it “orderspace”) is that it flattens out all the different combinations of properties into a mutually exclusive space of points. An observable “thing” in the universe can’t be classified at two different points in Thingspace. Yes, you can have a region in Thingspace representing your uncertainty about the classification (if you’re a mere mortal you always have this error bar), but the piece-of-universe-order you are trying to classify is, in ideal terms, only one point in the space.
IMO this could explain the way we deal with causality. Why do we say effects have only one cause? Where does the Principle of Sufficient Reason come from? The universe is not actually quantized in pieces that have isolated effects on each other. However, causes and effects are “things”: they are points in Thingspace, and as “things” they actually represent aggregates, bunches of variable values that, when recognized as a whole, have, by definition, unique cause-effect relationships with other “things”. I see causality as arrows from one area of Thingspace to another. Some have tried to account for causality with complex Bayesian networks based on graph theory that are hard to compute. But I think applying causality to labeled clusters in Thingspace, instead of trying to apply it to entangled real values, seems simpler and more accurate. And you can do it at different levels of granularity to account for uncertainty. The space is then most usefully classified hierarchically into an ontology. Uncertainty about classification is then represented by using bigger, vaguer, all-encompassing clusters or “categories” in the Thingspace, and a high level of certainty by a small, specific area.
I once tried (and pretty much failed) to create a novel machine learning algorithm based on a causality model between hierarchical EM clusters. I’m not sure why it failed. It was simple and beautiful, but I had to use greedy approaches to reduce complexity, which might have broken my EM algorithm. Well, at least it (just barely) got me a master’s degree. I still believe in my approach and I hope someone will figure it out some day. I’ve been reading and questioning the assumptions underlying all of this lately, especially pondering the link between the physical universe and probability theory, and I got stuck at the problem of the arrow of time, which seems to be the unifying principle but which also seems not that well understood. Ah well… maybe in another life.
Why would more uncertainty = bigger cluster? Wouldn’t uncertainty be expressed by using smaller clusters? I.e., if you’re uncertain about a cluster, you fall back on a smaller subset of things that you are more certain pertain to that classification?
If we find a category that has a very tight cluster, such that for that category it’s reasonably straightforward to define that cluster, and only a tiny handful of distant outliers that seem to only shakily fit with the rest of the category, then it may be wise in some cases to consciously redefine that category in terms of the explicit definition that represents the tight cluster, and maybe use a different category, or a broader one, to represent or include those outliers.
Psy-Kosh, dangerous heuristic. Isn’t that how the Nazis thought of the Jews? We should look first and foremost at ways things fit into clusters, not ways they don’t—otherwise nine-fingered Fred gets ruled out of being human at an early hurdle. I’m sure you’ll agree Fred fits better into ‘human’ than ‘broad general-human-type’, despite his missing digit.
Ostriches are a long way from that tight, feathery birdy cluster, but we leave them out of ‘general bird-ness’ at our peril. Mr Ostrich scores 84% on birdiness, not 16% on not-birdiness. (He also scores in the high 60s in dinosauriness, but that’s another matter.)
I sense these 6 essays on cognitive semantics are going to bring us back to transhumanism sooner or later. As of right now, whatever the radial distance from the prototype, and except on the Island of Dr Moreau, you are DEFINITELY human or definitely not, definitely a bird or definitely not. Pluto is DEFINITELY a pla...… whoops.
“or maybe a more sophisticated view that .”
?
What are the dimensions of thingspace?
Are “number of sides”, “IQ”, “age”, and “font” all dimensions?
And what are the points in thingspace? It sounds like they include anything that is somewhat “mother” and anything that is somewhat “robin”. (And I should think thingspace is a point in thingspace too.)
I think this post makes some good points, the main one, for me, being that words are centers of (indefinitely extending) clusters rather than boundaries of sets. But I think the notion of thingspace rests on shaky foundations: it assumes the world is broken down into things and those things have attributes.
We don’t all share the same thingspace do we?
I think thingspace is meant to be an abstraction. It’s just a map to help us think about categorisation of objects.
Thingspace seems rather like cladistics, in which you come up with groups of characteristics and then work out trees of evolutionary descent. Note that this originated in studying the evolution of life on Earth and piecing together the Tree of Life, but is applicable anywhere an evolutionary process can work, e.g. linguistic evolution. Without necessarily going as far as the actual sorting stuff into trees, cladistics may be useful in helping conceptualise thingspace and distance in thingspace.
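The tree-building step this comment gestures at can be sketched as naive agglomerative clustering: repeatedly merge the two closest clusters, which yields a nested structure much like a cladogram (coordinates invented for illustration):

```python
import math

# Invented 2-D coordinates; real cladistics uses many characters.
points = {
    "robin":   (0.0, 0.0),
    "sparrow": (0.2, 0.1),
    "eagle":   (2.0, 0.5),
    "penguin": (6.0, 5.0),
}

def centroid(cluster):
    """Mean position of a cluster's members."""
    coords = [points[name] for name in cluster]
    return tuple(sum(axis) / len(axis) for axis in zip(*coords))

clusters = [(name,) for name in points]
merges = []
while len(clusters) > 1:
    # Find the closest pair of clusters, by distance between centroids.
    i, j = min(
        ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
        key=lambda pair: math.dist(centroid(clusters[pair[0]]),
                                   centroid(clusters[pair[1]])),
    )
    merged = clusters[i] + clusters[j]
    merges.append(merged)
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
```

The sequence of merges is the tree: the most similar pair (robin and sparrow) joins first, and the distant penguin joins last.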
A thought I recently had: Shouldn’t we be interested in “anti-clusters” too? ie, regions of comparatively low density compared to the surroundings/Patterns of stuff that tends to conspicuously fail to happen compared to what would be otherwise expected.
This essay reminds me of Samuel Delany saying that the word “the” seems like a gray ellipse to him, and each adjective modifies the ellipse.
does thingspace remain static? that is; would definitional/structural changes within the space correspond to a folding or reorienting of the space where the clusters become reorganized?
You could give relatively simple verbal intensional definitions to try and lead someone to the bird cluster, yes. But if you had someone who wasn’t practically accessible through those verbal communications, how would you do it?
You’d have to show extensional examples, positives and negatives, and indicate the value of each example by some clear and consistent signal.
You couldn’t give all possible extensional examples, so you would have to select some. And you couldn’t give them all at once, so you’d have to present them in a particular order.
What is the theory for finding optimized selections and orderings of examples for leading the learner to the cluster? How does that theory extend to the more complicated case where you have to communicate the subtypes within the “bird” cluster?
This is one of the many things that the Theory of Direct Instruction that’s presented in Engelmann and Carnine’s text Theory of Instruction: Principles and Applications addresses. [They call it a “multi-dimensional non-comparative concept” (“non-comparative” meaning the value of any example is absolute rather than relative to the last), or “noun” for short.]
And of course, if you had to select and order the presentation of simple verbal definitions/descriptions as examples themselves, the theory would also have application.
Please see here for a clarification of what “someone who wasn’t practically accessible through those verbal communications” means, and a more concrete example of teaching the higher-order class ‘vehicles’ and sub-classes.
Hi there, fairly new here to LW. I’m reading through the sequences in order: went through Map and Territory and Mysterious Answers to Mysterious Questions. Now going through this 37 Ways That Words Can Be Wrong sequence, as it’s recommended before I delve into reductionism.
It’s been said several times that LW tries to cater to a broad audience, but I find myself lost here. I have not extensively studied physics, having done only 1 year of engineering so far, and the physics references here are pretty much unintelligible to me. I don’t know what configuration space is, or quaternary coordinates, or thingspace, or what strings are being referred to. I find myself struggling to grasp this post.
EDIT: I’ve read through this a few times. I still have almost no idea on most of the math, but I’m guessing the “moral” of this post is basically “don’t become overly obsessed with definitions”?
Reading Eliezer’s quantum physics sequence should help with configuration spaces and thingspaces, and probably with some of the other physics references as well.
It’s not important to your central claim, but this is the strawmanniest thing since Straw Man came to Straw Town.
No; most philosophers today do, I think, believe that the alleged humanity of 9-fingered instances of *Homo sapiens* is a serious philosophical problem. It comes up in many “intro to philosophy” or “philosophy of science” texts or courses. Post-modernist arguments rely heavily on the belief that any sort of categorization which has any exceptions is completely invalid.
Um. That...?
I guess there was a misformatted link in there or something?
One small (hopefully not too obvious) addition: the cluster-nature of thing-space is dependent on the distance function, and there is no single obviously corrent one. Is a penguin more like an eagle or a salmon? Depends on what you mean by “more like”. It’s perfectly reasonable to say “right now, the most useful concept of ‘more like’ is ‘last common ancestor’ so penguins are more like eagles and ‘birds’ is a cluster’ and then as your needs change to say “right now, the most useful concept of ‘more like’ is similarity of habitat so penguins are more like salmon and ‘sealife’ is a cluster.”
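That metric-dependence is easy to make concrete (the coordinates below are invented for illustration):

```python
import math

# Two feature sets for the same three animals; which pair counts as
# "more like" depends on which dimensions the metric attends to.
ancestry = {  # coordinates dominated by evolutionary lineage
    "penguin": (1.0, 0.0),
    "eagle":   (1.1, 0.1),
    "salmon":  (5.0, 4.0),
}
habitat = {   # coordinates dominated by where the animal lives
    "penguin": (0.0, 9.0),  # cold ocean
    "eagle":   (6.0, 1.0),  # mountain air
    "salmon":  (0.5, 8.5),  # cold ocean and rivers
}

def dist(space, a, b):
    """Euclidean distance between two animals within one feature space."""
    return math.dist(space[a], space[b])

# By ancestry, penguins cluster with eagles ("birds");
# by habitat, penguins cluster with salmon ("sealife").
```

Same territory, two maps: the clusters are real, but which clusters you see depends on the distance function you bring.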
why yes
clusters can overlap, and the word “more like” uses different clusters of clusters depending on context
Before reading this article, I had already been using this visualization technique to think of probability densities. I wonder how common that is? Probably happened because of exposure to statistics.
What I actually thought reading this was: “Frodo is a nine-fingered Hobbit”...
I’m glad to see Eliezer addressed this point. This post doesn’t get across how absolutely critical it is to understand that {categories always have exceptions, and that’s okay}. Understanding this demolishes nearly all Western philosophy since Socrates (who, along with Parmenides, Heraclitus, Pythagoras, and a few others, corrupted Greek “philosophy” from the natural science of Thales and Anaximander, who studied the world to understand it, into a kind of theology, in which one dictates to the world what it must be like).
Many philosophers have recognized that Aristotle’s conception of categories fails; but most still assumed that that’s how categories must work in order to be “real”, and so proving that categories don’t work that way proved that categorizations “aren’t real”. They then became monists, like the Hindus / Buddhists / Parmenides / post-modernists. The way to avoid this is to understand nominalism, which dissolves the philosophical understanding of that quoted word “real”, and which I hope Eliezer has also explained somewhere.
I found some criticism of this post on a RationalWiki talk page.
What do you guys think?
https://rationalwiki.org/wiki/Talk:LessWrong#EA_orgs_praising_AI_pseudoscience_charity._Is_it_useful.3F
I think there’s some validity to this critique. I read The Cluster Structure of Thingspace (TCSOTS) and was asking myself “isn’t this just talking about the problem of classification?” And classification definitely doesn’t require us to treat ‘birdness’ or ‘motherhood’ as a discrete property, as if a creature either has it or doesn’t. Classification can be on a spectrum, with a score for ‘birdness’ or ‘motherhood’ that’s a function of many properties.
I welcome (!!) making these concepts more accessible to those who are unfamiliar with them, and for that reason I really enjoyed TCSOTS. But it also seems like there’d be a lot of utility in then tying these concepts to the fields of math/CS/philosophy that are already addressing these exact questions. The ideas presented in The Cluster Structure of Thingspace are not new; not even a little—so why not use them as a jumping-off point for the broader literature on these subjects, to show how researchers in the field have approached these issues, and the solutions they’ve managed to come up with?
See: Fuzzy Math, Support Vector Machines, ANNs, Decision Trees, etc.
So: I think posts like this would have a stronger impact if tied into the broader literature that already covers the same subjects. The reader who started the article unfamiliar with the subject would, at the end, have a stronger idea of where the field stands, and they would also be better resourced for further exploring the subject on their own.
Note: this is probably also why most scientific papers start with a discussion of previous related work.
I do agree that a lot of sequences pages would benefit a lot from having discussion of previous work, or at least stating what these ideas are called in the mainstream, but I feel Yudkowsky’s neologisms are just… better. Among the examples of similar concepts you mentioned, I definitely felt Yudkowsky was hinting at them with the whole dimensions thing, but I think “thingspace” is still a useful word and not even that complicated; if it was said in a conversation with someone familiar with ANNs, I feel they would get what it meant. (Unlike a lot of other Yudkowskisms usually parroted around here, however...)
Most of this just seems to be nitpicking the lack of specificity of implicit assumptions which were self-evident (to me). The criticism regarding “blue” pretty much depends on whether the HTML blue also needs an interpreter (e.g., a human brain) to extract the information.
The lack of formality seems (to me as a new user) a repeated criticism of the sequences, but I thought that was also a self-evident assumption (maybe I’m just falling prey to the expecting-short-inferential-distances bias). I think Eliezer addressed it 16 years ago here:
“This blog is directed at a wider audience at least half the time, according to its policy. I’m not sure how else you think this post should have been written.”
I personally find the sequences to be a useful aggregator of various ideas I seem to find intriguing at the moment...
Should probably link to Extensions and Intensions; not everyone reads these posts in order.