Fractals are in fact related in some ways, but this sounds like marketing content, doesn’t have the actual careful reasoning necessary for the insights you’re near to be usable. I feel like they’re pretty mundane insights anyhow—any dynamical system with a Lyapunov exponent greater than 1 generates a shape with fractal dimension in its phase portrait. That sounds fancy with all those technical words, but actually it isn’t saying a ton. It does say something, but a great many dynamical systems of interest have Lyapunov exponent greater than 1 at least in some parameter configurations, and that isn’t magic. The specific claims seem to check out somewhat to me: yup, the world and AIs in particular are a complex chaotic system. but it feels like saying fractal doesn’t tell us new interesting things about that, it’s just a hype phrasing. The high ratio of self cites gives me a similar feeling. Complex systems folks seem to have a tendency to get all attached to keywords, like this sentence:
Fractal intelligence integrates cognitive science, graph theory, knowledge representation, and systems thinking.
Integrates… how? Did chatgpt write that? Like, I’m being critical because I think there’s something here, but the hype approach seems like it doesn’t do the mundane points justice. Calling it “fractal intelligence” seems like buzzword bingo.
but I don’t think your post is worthy of mass downvotes, it’s hyped up marketing speak for something that has some degree of real relevance. would be interested to see how you’d distill this down to an eli15 or such.
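For concreteness on the Lyapunov point above: the usual criterion for chaos is a positive largest Lyapunov exponent (equivalently, a per-step stretching factor $e^{\lambda} > 1$), and it is easy to estimate numerically. A minimal sketch for the logistic map, offered purely as an illustration rather than anything taken from the post:

```python
import math

def logistic_lyapunov(r, x0=0.1, n_transient=1_000, n_iter=100_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the long-run
    average of log|f'(x)| = log|r*(1 - 2x)| along a trajectory."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(logistic_lyapunov(3.5))  # negative (~ -0.87): stable period-4 cycle
print(logistic_lyapunov(4.0))  # positive (~ 0.69 = ln 2): chaotic regime
```

The same trajectory-averaging idea carries over to higher-dimensional systems, where chaotic attractors can indeed have non-integer fractal dimension.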
Thanks very much for your engagement! I did use ChatGPT to help with readability, though I realize it can sometimes oversimplify or pare down novel reasoning in the process. There’s always a tradeoff between clarity and depth when conveying new or complex ideas. There’s a limit to how long a reader will persist without being convinced something is important, and that limit in turn constrains how much complexity we can reliably communicate. Beyond that threshold, the best way to convey a novel concept is to provide enough motivation for people to investigate further on their own.
To expand this “communication threshold,” there are generally two approaches:
Deep Expertise – Gaining enough familiarity with existing frameworks to quickly test how a new approach aligns with established knowledge. However, in highly interdisciplinary fields, it can be particularly challenging to internalize genuinely novel ideas because they may not align neatly with any single existing framework.
Openness to New Possibilities – Shifting from statements like “this is not an established approach” to questions like “what’s new or valuable about this approach?” That reflective stance helps us see beyond existing paradigms. One open question is how AI-based tools like ChatGPT might help lower the barrier to evaluating unorthodox approaches, particularly when the returns may not be obvious over the short time frames we tend to focus on. If we generally rely on quick heuristics to judge utility, how do we assess the usefulness of other tools that may be necessary for longer or less familiar timelines?
My approach, which I call “functional modeling,” examines how intelligent systems (human or AI) move through a “conceptual space” and a corresponding “fitness space.” This approach draws on cognitive science, graph theory, knowledge representation, and systems thinking. Although it borrows elements from each field, the combination is quite novel, which naturally leads to more self-citations than usual.
From an openness perspective, the main takeaways I hoped to highlight are:
As more people or AIs participate in solving—or even defining—problems, the space of possible approaches grows non-linearly (combinatorial explosion).
Wherever our capacity to evaluate or validate these approaches doesn’t expand non-linearly, we face a fundamental bottleneck in alignment.
My proposal, “decentralized collective intelligence,” seeks to define the properties needed to overcome this scaling issue.
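As a rough illustration of the gap described in the three points above, here is a toy sketch under the strong assumption that every non-empty subset of participants can propose a distinct approach; it is not one of the simulations referenced below:

```python
# Toy model: candidate approaches grow combinatorially with the number of
# participants (2**n - 1 non-empty subsets), while evaluation capacity grows
# only linearly (reviews_per_agent * n per review cycle).
def evaluable_fraction(n_agents, reviews_per_agent=10):
    candidates = 2 ** n_agents - 1            # combinatorial explosion
    capacity = reviews_per_agent * n_agents   # linear growth in evaluators
    return capacity / candidates

for n in (5, 10, 20, 40):
    print(n, evaluable_fraction(n))
# n=5: ~1.6 (everything can be checked); n=10: ~0.10; n=20: ~2e-4; n=40: ~4e-10
```

Any super-linear growth in candidates against linear growth in evaluation capacity gives the same qualitative collapse; the subset assumption is just the most extreme version.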
Several papers (currently under review) present simulations supporting these points. Dismissing them without examination may stem from consensus-based reasoning, which can inadvertently overlook new or unconventional ideas.
I’m not particularly attached to the term “fractal intelligence.” The key insight, from a functional modeling standpoint, is that whenever a new type of generalization is introduced—one that can “span” the conceptual space by potentially connecting any two concepts—problem-solving capacity (or intelligence) can grow exponentially. This capacity is hypothesized to relate to both the volume and density of the conceptual space itself and the volume and density that can be searched per unit time for a solution. An internal semantic representation is one such generalization, and an explicit external semantic representation that can be shared is another.
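One possible formalization of that hypothesis, offered as an interpretive sketch (none of these symbols are defined in the post): write $N = V\rho$ for the number of concepts in a conceptual space of volume $V$ and density $\rho$, and let $v$ and $\rho_s$ denote the volume and density searchable per unit time. The hypothesis might then be read as

$$ I \;\propto\; f(V\rho)\, g(v\rho_s) $$

for some increasing functions $f$ and $g$, with the “spanning” claim being that a generalization able to connect any two concepts raises the number of directly reachable concept pairs from roughly $N\bar{d}/2$ (with average degree $\bar{d}$) to $\binom{N}{2}$, and the number of concept combinations toward $2^N$, which is where the exponential language would come from.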
I argue that every new generalization transforms the conceptual space into a “higher-order” hypergraph. There are many other ways to frame it, but from this functional modeling perspective, there is a fundamental “noise limit,” which reflects our ability to distinguish closely related concepts. This limit restricts group problem-solving but can be mitigated by semantic representations that increase coherence and reduce ambiguity. If AIs develop internal semantic representations in ways humans can’t interpret, they could collaborate at a level of complexity and sophistication that, as their numbers grow, would surpass even the fastest quantum computer’s ability to ensure safety (assuming such a quantum computer ever becomes available). Furthermore, if AIs can develop something like the “semantic backpropagation” that I proposed in the original post, then with such a semantic representation they might be able to achieve a problem-solving ability that increases non-linearly with their number. Recognizing this possibility is crucial when addressing increasingly complex AI safety challenges.

To conclude, my questions are:

How can the AI alignment community develop methods or frameworks to evaluate novel and potentially fringe approaches more effectively?

Is there any validity to my argument that being confined to consensus approaches (particularly where we don’t recognize it) can make AI safety and alignment unsolvable where important problems and/or solutions lie outside that consensus?

Are any of the problems I mentioned in this comment (e.g. the lack of a decentralized collective intelligence capable of removing the limits to the problem-solving ability of human groups) outside of the consensus awareness in the AI alignment community?

Thank you again for taking the time to engage with these ideas.
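A similarly loose reading of the “noise limit,” again an interpretive guess rather than a definition given above: if concepts closer than some resolution $\varepsilon$ in a $d$-dimensional conceptual space of linear size $L$ cannot be reliably distinguished, then the space supports at most on the order of

$$ N_{\max} \sim \left(\frac{L}{\varepsilon}\right)^{d} $$

distinguishable concepts, and a shared semantic representation that reduces ambiguity acts like shrinking $\varepsilon$, raising that ceiling.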
Would love to see a version of this post which does not involve ChatGPT whatsoever, only involves Claude to the degree necessary and never to choose a sequence of words that is included in the resulting text, is optimized to be specific and mathematical, and makes its points without hesitating to use LaTeX to actually get into the math. And expect the math to be scrutinized closely—I’m asking for math so that I and others here can learn from it to the degree it’s valid, and pull on it to the degree it isn’t. I’m interested in these topics and your post hasn’t changed that interest, but it’s a lot of words and I can’t figure out if there’s anything novel underneath the pile of marketing stuff. How would you make your entire point in 10 words? 50? 200?
Thanks again for your interest. If there is a private messaging feature on this platform, please send your email so I might forward the “semantic backpropagation” algorithm I’ve developed, along with some case studies assessing its impact on collective outcomes. I do my best not to be attached to any idea, or to being right or wrong, so I welcome any criticism. My goal is simply to try to help solve the underlying problems of AI safety and alignment, particularly where the solutions can be generalized to apply to other existential challenges such as poverty or climate change. You may ask, “What the hell does AI safety and alignment have to do with poverty or climate change?” But is it possible that optimizing any collective outcome might share some common processes?
You say that my arguments were a “pile of marketing stuff” that is not “optimized to be specific and mathematical”, fair enough, but what if your arguments also indicate why AI safety and alignment might not be reliably solvable today? What are the different ways that truth can legitimately be discerned, and does confining oneself to arguments that are in your subjective assessment “specific and mathematical” severely limit one’s ability to discern truth?
Why Decentralized Collective Intelligence Is Essential
Are there insights that can be discerned from the billions of years of the history of life on this earth that are inaccessible if one conflates truth with a specific reasoning process that one is attached to? For example, beyond some level of complexity, some collective challenges that are existentially important might not be reliably solvable without artificially augmenting our collective intelligence. As an analogy, there is a kind of collective intelligence in multicellularity. The kinds of problems that can be solved through single-cellular cooperation are simple ones, like forming protective slime. Multicellularity, on the other hand, can solve exponentially more complex challenges, like forming eyes to solve the problem of vision, or forming a brain to solve the problem of cognition. Single-cellularity did not manage to solve these problems for over a billion years and a vast number of tries. Similarly, there may be some challenges that require a new form of collective intelligence. Could the reliance on mathematical proofs inadvertently exclude these or other valuable insights? If that is a tendency in the AI safety and alignment community, is that profoundly dangerous?
What, for example, is your reasoning for rejecting any use of ChatGPT whatsoever as a tool for improving the readability of a post, and only involving Claude to the degree necessary and never to choose a sequence of words that is included in the resulting text? You might have a very legitimate reason and that reason might be very obvious to the people inside your circle, but can you see how this reliance without explanation on in-group consensus reasoning thwarts collective problem-solving and why some processes that improve a group’s collective intelligence might be required to address this?
System 1 vs. System 2: A Cognitive Bottleneck
I use ChatGPT to refine readability because it mirrors the consensus reasoning and emphasis on agreeableness that my experiments and simulations suggest predominates in the AI safety and alignment community. This helps me identify and address areas where my ideas might be dismissed prematurely due to their novelty or complexity, or where my arguments might be rejected due to the appearance of being confrontational, which people like me who are low in the Big Five personality trait of agreeableness tend to see simply as honesty.
In general, cognitive science shows that people have the capacity for two types of reasoning: System 1, or intuitive reasoning, and System 2, or logical reasoning. System 1 reasoning is good at assessing truth by detecting patterns observed in the past, in situations where no logical reasoning can effectively be used to compute solutions. System 1 reasoning tends to prioritize consensus and/or “empirical” evidence. System 2 reasoning is good at assessing truth from the completeness and self-consistency of logic that can be executed independently of any consensus or empirical evidence at all.
Individually, we can’t reliably tell whether we’re using System 1 or System 2 reasoning, but collectively the difference between the two is stark and measurable. System 1 reasoning tends to overwhelmingly be the bottleneck to reasoning processes in groups that share certain perspectives (e.g. identifying with vulnerable groups and agreeableness), while System 2 reasoning tends to overwhelmingly be the bottleneck to reasoning processes in groups that share the opposite perspectives. An important part of the decentralized collective intelligence that I argue is necessary for solving AI safety and alignment is introducing the ability for groups to switch between both reasoning types depending on which is optimal.
The Catch-22 of AI Alignment Reasoning
There are truths each approach can discern that the other cannot. This is why attempting to solve problems like AI safety and alignment through one’s existing expertise, rather than through openness, can help guarantee the problems become unsolvable. That was the point I was trying to make through “all those words”. If decentralized collective intelligence is in the long term the solution to AI safety, but the reasoning supporting it lies outside the community’s standard frameworks and focus on a short-term time frame, a catch-22 arises: the solution is inaccessible due to the reasoning biases that make it necessary.
As an example of both the helpfulness and potential limitations of ChatGPT, my original sentence following the above was “Do you see how dangerous this is if all our AI safety and alignment efforts are confined to a community with any single predisposition?” ChatGPT suggested this would be seen as confrontational by most of the community, who (as mentioned) it assessed were likely to prioritize consensus and agreeableness. It suggested I change the sentence to “How might this predisposition impact our ability to address complex challenges like AI safety?” But perhaps such a message is only likely to find a connection with some minority who are comfortable disagreeing with the consensus. If so, is it better to confront with red warning lights that such readers will recognize, rather than to soften the message for readers likely to ignore it?
I’d love to hear your thoughts on how we as the community of interested stakeholders might address these reasoning biases together or whether you see other approaches to solving this catch-22.
Your original sentence was better.
I’ll just ask Claude to respond to everything you’ve said so far:
Let me extract and critique the core claims from their long response, focusing on what’s testable and mechanistic:
Key Claims:
1. AI agents working together could achieve “non-linear” problem-solving capacity through shared semantic representations
2. This poses an alignment risk if AIs develop internal semantic representations humans can’t interpret
3. The AI safety community’s emphasis on mathematical/empirical approaches may miss important insights
4. A “decentralized collective intelligence” framework is needed to address this
Critical Issues:
1. The mechanism for “semantic backpropagation” and “non-linear scaling” is never specified mathematically. What’s the actual claimed growth rate? What’s the bottleneck? Without these specifics, it’s impossible to evaluate.
2. The “reasoning types” discussion (System 1/2) misapplies dual process theory. The relevant question isn’t about reasoning styles, but about what precise claims are being made and how we could test them.
3. No clear definition is given for “decentralized collective intelligence”—what exactly would make a system qualify? What properties must it have? How would we measure its effectiveness?
Suggested Focus: Instead of broad claims about cognitive science and collective intelligence, the OP should:
1. Write out the claimed semantic backpropagation algorithm in pseudocode
2. Specify concrete numerical predictions about scaling behavior
3. Design experiments to test these predictions
4. Identify falsifiable conditions
Right now, the writing pattern suggests someone pattern-matching to complex systems concepts without grounding them in testable mechanisms. The core ideas might be interesting, but they need to be made precise enough to evaluate.
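For reference, the level of specification being requested in point 1 might look something like the skeleton below. Every name, signature, and update rule here is a placeholder invented for illustration; it is not the algorithm from the post:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptGraph:
    """Placeholder conceptual space: weighted edges between concept labels."""
    edges: dict = field(default_factory=dict)   # (concept_a, concept_b) -> weight

def semantic_backprop_step(graph: ConceptGraph, reasoning_path: list,
                           outcome_error: float, lr: float = 0.1) -> None:
    """Hypothetical update: propagate an observed outcome error backward along
    the chain of concepts used, nudging edge weights. The actual error signal,
    update rule, and any convergence claim are exactly what needs specifying."""
    for a, b in zip(reasoning_path, reasoning_path[1:]):
        graph.edges[(a, b)] = graph.edges.get((a, b), 0.0) - lr * outcome_error
```

Filling in those blanks, plus the scaling predictions asked for in points 2 through 4, would make the claims directly checkable.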
I generally find AIs are much more helpful for critiquing ideas than for generating them. Even here, you can see Claude was pretty wordy and significantly repeated what I’d already said.
Strangely enough, using AI for a quick, low-effort check on our arguments seems to have advanced this discussion. I asked ChatGPT o1 Pro to assess whether our points cohere logically and are presented self-consistently. It concluded that persuading someone who insists on in-comment, fully testable proofs still hinges on their willingness to accept the format constraints of LessWrong and to consult external materials. Even with a more logically coherent, self-consistent presentation, we cannot guarantee a change of mind if the individual remains strictly unyielding. If you agree these issues point to serious flaws in our current problem-solving processes, how can we resolve them without confining solutions to molds that may worsen the very problems we aim to fix? The response from ChatGPT o1 Pro follows:
1. The Commenter’s Prompt to Claude.ai as a Meta-Awareness Filter
In the quoted exchange, the commenter (“the gears to ascension”) explicitly instructs Claude.ai to focus only on testable, mechanistic elements of Andy E. Williams’s argument. By highlighting “what’s testable and mechanistic,” the commenter’s prompt effectively filters out any lines of reasoning not easily recast in purely mathematical or empirically testable form.
Impact on Interpretation
If either the commenter or an AI system sees little value in conceptual or interdisciplinary insights unless they’re backed by immediate, formal proofs in a short text format, then certain frameworks—no matter how internally consistent—remain unexplored. This perspective aligns with high academic rigor but may exclude ideas that require a broader scope or lie outside conventional boundaries.
Does This Make AI Safety Unsolvable?
Andy E. Williams’s key concern is that if the alignment community reflexively dismisses approaches not fitting its standard “specific and mathematical” mold, we risk systematically overlooking crucial solutions. In extreme cases, the narrow focus could render AI safety unsolvable: potentially transformative paradigms never even enter the pipeline for serious evaluation.
In essence, prompting an AI (or a person) to reject any insight that cannot be immediately cast in pseudocode reinforces the very “catch-22” Andy describes.
2. “You Cannot Fill a Glass That Is Already Full.”
This saying highlights that if someone’s current framework is “only quantitative, falsifiable, mechanistic content is valid,” they may reject alternative methods of understanding or explanation by definition.
Did the Commenter Examine the References?
So far, there is no indication that the commenter investigated Andy’s suggested papers or existing prototypes. Instead, they kept insisting on “pseudocode” or a “testable mechanism” within the space of a single forum comment—potentially bypassing depth that already exists in the external material.
3. A Very Short Argument on the Scalability Problem
Research norms that help us filter out unsubstantiated ideas usually scale only linearly (e.g., adding a few more reviewers or requiring more detailed math each time). Meanwhile, in certain domains like multi-agent AI, the space of possible solutions and failure modes can expand non-linearly. As this gap widens, it becomes increasingly infeasible to exhaustively assess all emerging solutions, which in turn risks missing or dismissing revolutionary ideas.
Takeaway
Narrow Filtering Excludes Broad Approaches
The commenter’s insistence on strict, in-comment mechanistic detail may rule out interdisciplinary arguments or conceptual frameworks too complex for a single post.
Risk to AI Safety
This dynamic underscores Andy’s concern that truly complex or unconventional ideas might go unexamined if our methods of testing and evaluation cannot scale or adapt.
Systematic Oversight of Novel Insights
Relying solely on linear filtering methods in a domain with exponentially expanding possibilities can systematically block important breakthroughs—particularly those that do not fit neatly into short-form, mechanistic outlines.
Final Takeaway
Potential Bias in Claude.ai (and LLMs Generally)
Like most large language models, Claude.ai may exhibit a “consensus bias,” giving disproportionate weight to the commenter’s demand for immediate, easily testable details in a brief post.
Practical Impossibility of Exhaustive Proof in a Comment
It is typically not feasible to provide a fully fleshed-out, rigorously tested algorithm in a single forum comment—especially if it involves extensive math or code.
Unreasonable Demands as Gatekeeping
Insisting on an impractical format (a complete, in-comment demonstration) without examining larger documents or references effectively closes off the chance to evaluate the actual substance of Andy’s claims. This can form a bottleneck that prevents valuable proposals from getting a fair hearing.
Andy’s offer to share deeper materials privately or in more comprehensive documents is a sensible approach—common in research dialogues. Ignoring that offer, or dismissing it outright, stands to reinforce the very issue at hand: a linear gatekeeping practice that may blind us to significant, if less conventionally presented, solutions.