Thanks very much for your engagement! I did use ChatGPT to help with readability, though I realize it can sometimes oversimplify or pare down novel reasoning in the process. There’s always a tradeoff between clarity and depth when conveying new or complex ideas. There’s a limit to how long a reader will persist without being convinced something is important, and that limit in turn constrains how much complexity we can reliably communicate. Beyond that threshold, the best way to convey a novel concept is to provide enough motivation for people to investigate further on their own.
To expand this “communication threshold,” there are generally two approaches:
Deep Expertise – Gaining enough familiarity with existing frameworks to quickly test how a new approach aligns with established knowledge. However, in highly interdisciplinary fields, it can be particularly challenging to internalize genuinely novel ideas because they may not align neatly with any single existing framework.
Openness to New Possibilities – Shifting from statements like “this is not an established approach” to questions like “what’s new or valuable about this approach?” That reflective stance helps us see beyond existing paradigms. One open question is how AI-based tools like ChatGPT might help lower the barrier to evaluating unorthodox approaches, particularly when the returns may not be obvious over the short time horizons we tend to focus on. If we generally rely on quick heuristics to judge utility, how do we assess the usefulness of tools whose value only shows up over longer or less familiar timelines?
My approach, which I call “functional modeling,” examines how intelligent systems (human or AI) move through a “conceptual space” and a corresponding “fitness space.” This approach draws on cognitive science, graph theory, knowledge representation, and systems thinking. Although it borrows elements from each field, the combination is quite novel, which naturally leads to more self-citations than usual.
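To make the framing less abstract, here is a minimal sketch of what a “conceptual space” and its “fitness space” could look like computationally. This is my own toy construction for illustration, not code or terminology from the papers under review, and all names and values are hypothetical.

```python
# Minimal, illustrative sketch only (a toy construction, not code from the papers
# under review): the "conceptual space" as a graph over concepts, and the
# "fitness space" as a problem-specific score attached to each concept.
from collections import deque

# Toy conceptual space: which concepts an agent can move between directly.
conceptual_space = {
    "heat":       ["energy"],
    "energy":     ["heat", "work"],
    "work":       ["energy", "engine"],
    "engine":     ["work", "efficiency"],
    "efficiency": ["engine"],
}

# Toy fitness space: how well each concept addresses the problem at hand.
fitness = {"heat": 0.2, "energy": 0.5, "work": 0.6, "engine": 0.7, "efficiency": 0.9}

def reachable(space, start, max_steps):
    """Concepts reachable from `start` within `max_steps` hops (breadth-first)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if dist == max_steps:
            continue
        for neighbour in space[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return seen

# "Problem-solving" is then modelled as searching the reachable region for high fitness.
frontier = reachable(conceptual_space, "heat", max_steps=2)
print(max(frontier, key=fitness.get))  # -> 'work' under these toy values
```

In this reading, an agent’s problem-solving capacity depends on how much of the conceptual space it can reach and evaluate per unit time, which is the quantity the later points are about.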
From an openness perspective, the main takeaways I hoped to highlight are:
As more people or AIs participate in solving—or even defining—problems, the space of possible approaches grows non-linearly (combinatorial explosion); see the toy calculation after this list.
Wherever our capacity to evaluate or validate these approaches doesn’t expand non-linearly, we face a fundamental bottleneck in alignment.
My proposal, “decentralized collective intelligence,” seeks to define the properties needed to overcome this scaling issue.
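To make the first two points concrete, here is a toy calculation. All numbers are my own illustrative assumptions: if each participant contributes one distinct approach-component that can be combined with any others, the candidate space grows roughly like 2^n, while a review process whose capacity grows only linearly with participants can evaluate a vanishing fraction of it.

```python
# Toy scaling comparison (illustrative assumptions only, not empirical figures):
# candidate approaches grow combinatorially with participants, evaluation capacity
# grows linearly, so the evaluable fraction collapses as the group scales.
evaluations_per_participant = 50   # assumed proposals one participant can vet per year

for n in (10, 20, 30, 40):
    candidates = 2 ** n - 1                      # non-empty combinations of n components
    capacity = evaluations_per_participant * n   # linear growth in evaluation capacity
    print(f"n={n:2d}  candidates={candidates:>15,}  capacity={capacity:>5,}  "
          f"evaluable fraction={capacity / candidates:.1e}")
```

The exact growth rate is not the point; the point is the widening gap between generation and validation whenever the latter scales only linearly.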
Several papers (currently under review) present simulations supporting these points. Dismissing them without examination may stem from consensus-based reasoning, which can inadvertently overlook new or unconventional ideas.
I’m not particularly attached to the term “fractal intelligence.” The key insight, from a functional modeling standpoint, is that whenever a new type of generalization is introduced—one that can “span” the conceptual space by potentially connecting any two concepts—problem-solving capacity (or intelligence) can grow exponentially. This capacity is hypothesized to relate to both the volume and density of the conceptual space itself and the volume and density that can be searched per unit time for a solution. An internal semantic representation is one such generalization, and an explicit external semantic representation that can be shared is another.
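As a rough illustration of that idea (my own simplification, which shows only non-linear rather than strictly exponential growth): in a sparsely linked conceptual space, the pairings an agent can make in a single step are limited to existing local links, whereas a representation that can relate any two concepts makes every pairing available.

```python
# Rough illustration (a simplification of the claim above): a spanning
# generalization lets any two concepts be related directly, so the number of
# single-step pairings jumps from the number of local links to all n*(n-1)/2 pairs.
from itertools import combinations

concepts = [f"c{i}" for i in range(12)]

# Sparse local structure: each concept links only to its neighbour in a chain.
local_pairs = {(concepts[i], concepts[i + 1]) for i in range(len(concepts) - 1)}

# With a spanning generalization (e.g. a shared semantic representation),
# any two concepts can in principle be combined directly.
spanning_pairs = set(combinations(concepts, 2))

print(len(local_pairs), len(spanning_pairs))  # 11 vs 66 for 12 concepts
# The gap grows non-linearly with the number of concepts (n-1 vs n*(n-1)/2),
# and widens again each time a further "higher-order" layer of combinations is added.
```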
I argue that every new generalization transforms the conceptual space into a “higher-order” hypergraph. There are many other ways to frame it, but from this functional modeling perspective there is a fundamental “noise limit,” which reflects our ability to distinguish closely related concepts. This limit restricts group problem-solving, but it can be mitigated by semantic representations that increase coherence and reduce ambiguity.

If AIs develop internal semantic representations in ways humans can’t interpret, they could collaborate at a level of complexity and sophistication that, as their numbers grow, would surpass even the fastest quantum computer’s ability to ensure safety (assuming such a quantum computer ever becomes available). Furthermore, if AIs can develop something like the “semantic backpropagation” I proposed in the original post, then with such a shared semantic representation their problem-solving ability might increase non-linearly with their number. Recognizing this possibility is crucial when addressing increasingly complex AI safety challenges.

To conclude, my questions are:

How can the AI alignment community develop methods or frameworks to evaluate novel and potentially fringe approaches more effectively?

Is there any validity to my argument that being confined to consensus approaches (particularly where we don’t recognize the confinement) can make AI safety and alignment unsolvable when important problems and/or solutions lie outside that consensus?

Are any of the problems I mentioned in this comment (e.g., the lack of a decentralized collective intelligence capable of removing the limits on the problem-solving ability of human groups) outside the consensus awareness of the AI alignment community?

Thank you again for taking the time to engage with these ideas.