Henlo.
milanrosko
You’re not the kind of hack/rube this post is about because you are demonstrating hallmarks of critical thinking and epistemic humility.
You admit a knowledge gap, you reflect...
See the perfect example below.
While Gödel is indeed notorious in this sense, note that Rice does not count as “abused” outside the LessWrong community; rather, it is more that people here cannot deal with it.
An unwillingness to accept limitative notions and to engage with formal logic when it is relevant keeps this community isolated.
Also, once you reason constructively, it is hard to induce an “abuse” of a limitative result, because such results are already implicit in “falsehood as proof of absurdity.”
Under LEM, Gödel then becomes the “source” of “false”, simple as that.
It’s smart to think in terms of abused theorems; generally, be obsessed with what people abuse. Add the second law to the list as well. Take a topos literally… oh wait, semantic equivocation is implicitly on the list already.
Okay, I wrote you a reply and apparently LessWrong thinks it’s LLM-written (which is insane). Bottom line: the standard reference is “Constructivism in Mathematics, Vol. 1 (Studies in Logic and the Foundations of Mathematics, Volume 121)” by A.S. Troelstra and D. van Dalen. I also explained how assuming less is not an additional constraint but naturally corresponds to a harder system, and gave the irrationality of √2 as an example of a typical law-of-the-excluded-middle proof.
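For reference, the proof in question is the standard one, sketched below; whether its contradiction step genuinely needs LEM or is merely negation introduction is exactly the kind of distinction Troelstra and van Dalen analyze.

```latex
% Sketch: the classical irrationality proof for $\sqrt{2}$.
Assume $\sqrt{2} = p/q$ with $p, q \in \mathbb{Z}$, $q \neq 0$, and $\gcd(p, q) = 1$.
Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even; write $p = 2k$.
Substituting, $4k^2 = 2q^2$, so $q^2 = 2k^2$ and $q$ is even as well,
contradicting $\gcd(p, q) = 1$. Hence $\sqrt{2}$ is not rational.
```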
The Essentialism of Lesswrong
Typical lesswrong word salad, e.g. “detailed metacognitive thoughts”, “handle the consequences of being accurately seen”… or the best: “self-empowerment strategy. But it’s more like, noticing you already have power.”
I have had it with this pseudo-intellectual community.
Ah, of course, it’s because of a workaround in the MathJax. Thank you for noticing; I will fix it (it is fixable).
The Undervalued Kleene Hierarchy
This is a logical consequence.
Rice’s theorem says: no uniform decision procedure can take an arbitrary program and decide a nontrivial semantic property. The reply (“we only need it for the systems we build”) side-steps that quantifier. So let’s tighten it: How do you prove that a single specific agent is aligned across one possible state of the world? By a criterion. Across “some” possible states of the world? By multiple criteria. Across “all relevant” states of the world? By all “relevant” criteria. Problematic if one has no model and no criteria.
Might want to resort to a HOL, probabilistic, game-theoretic, or philosophical Ansatz instead… What will fail is extracting semantics from places semantics were never previously extracted (circuits, features, etc.). You can restrict the program class (good), but unless you also operationalize the state space and the key properties, you haven’t escaped diagonalization, you’ve just moved it. “All states that could matter” tends to reintroduce worst-case adversarial cases, and then you either (a) weaken the guarantee or (b) strengthen the structure, at which point the real question becomes whether your restriction still matches what you meant by “the world.”
So: yes, it’s not primarily “decide alignment for all programs,” it’s “justify a uniform claim about one program over an open-ended environment,” which is where the constructive discomfort actually resides, and more so because neural networks invite the self-referential flavor.
Note: interestingly, this obstructs AI too.
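A toy sketch of the diagonal move being gestured at (my own construction, not from the thread): any *total* checker that judges a program without running it can be turned into its own counterexample.

```python
# Hypothetical illustration of the diagonalization behind Rice's theorem:
# given any total "alignment checker", we build a program whose behaviour
# refutes the checker's verdict on that very program.

def diagonalize(checker):
    """Build a program that does the opposite of whatever `checker` predicts."""
    def gadget():
        if checker(gadget):
            raise RuntimeError("defect")  # judged safe, so it misbehaves
        return "ok"                       # judged unsafe, so it behaves
    return gadget
```

For example, `diagonalize(lambda p: True)` misbehaves even though the checker approved it, while `diagonalize(lambda p: False)` behaves even though it was rejected. Restricting the program class so that `gadget` cannot be written is exactly the “strengthen the structure” option, which is where the quantifier trouble reappears.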
Fibonacci Holds Information
On semantics:
Actually, semantics “can” be conserved; an example:
- Change in semantics: “This is my second car” vs “this is my second skateboard”.
- Change in syntactics: “This is my second car” vs “this is my third car”.
- No formal change in the strict sense: “This is my second car” vs “This is—actually—my second car!”
Change during prompting: “Can be both, either, neither.”
Considering the prompting itself:
I copy/pasted a rough draft (2–4 paragraphs?) and wrote something like: “Improve.” And then I edited again.
On the opaqueness:
The problem could be “limitative” thinking. Most of weak logic is eliminative (e.g., implication A → B is the elimination of A, etc.). We can only look for contradictions and check for soundness.
On the post:
We can reduce it to a premise:
- “Every physically lawful protein has a well-defined ground-state structure determined by a finite reduction.” ≈ “Given unbounded time/energy, we can in principle compute the fold of any amino-acid sequence.”
and show that from:
- “Biological evolution, using only local variation and selection, has discovered many stable, functional proteins.”
...it does NOT follow that:
- “Therefore the space of evolution-accessible proteins has simple, exploitable structure (shallow energy landscapes), so we should expect to be able to build a practical ML system that reliably predicts their structures (or designs new ones) given enough data and compute.”
More context:
- Under a finite reduction (a model of folding with a well-ordering property on a given lattice), protein folding is expressive enough to simulate arbitrary computation; hence the general folding problem is Turing-complete.
So: Every computable function must impose a final blur over that discrete lattice: a smoothing, regularization, or bounded-precision step that ensures computability. Without this blur, the system would risk confronting an unsolvable folding instance that would take forever. Excluding such cases implicitly constitutes a decision about an undecidable set. (Interestingly, the “blur” itself also has a “blur”; call it sigmoid, division, etc.)
We can never determine exactly what potential structure is lost in this process, which regions of the formal folding landscape are suppressed or merged by the regularization that keeps the model finite and tractable, unless we train a better model that is sharper, but the problem repeats.
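A minimal toy (my own, purely illustrative; the names and numbers are invented) of how a bound plays the role of the “blur”: it makes the search total, but the cut-off itself silently decides which instances get a trustworthy answer.

```python
# Toy "folding" search: a step budget guarantees termination, but the
# truncated region is exactly the part of the landscape we merged away.

def fold(energy, states, max_steps):
    """Greedy minimisation over a finite state list, cut off at max_steps."""
    best = None
    for i, s in enumerate(states):
        if i >= max_steps:
            return best, "truncated"  # budget exhausted: undecided region
        if best is None or energy(s) < energy(best):
            best = s
    return best, "exact"
```

With a generous budget the minimum is found exactly; with a tight one the procedure still halts, but its answer quietly depends on which states the cut-off excluded.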
And yes, that means: We don’t know if ML will produce diamond bacteria. Could be, could be not.
On Bayesianism:
Since it is popular here, the failure could be explained as: No Bayesian system can coherently answer the meta-question:
- “What is the probability that I can correctly assign probabilities?”
The Adequacy of Class Separation
This sounds like advice for someone who is interested in writing posts rather than in developing formal arguments… In logic we are searching for transitivity and rigor, not some aesthetic preference; perhaps you overlooked that the content above is “somewhat” independent of its prosaic form.
“Most posts have flaws”
Where to find flaws in the argument presented… There are several places where my reasoning can be challenged; the key is to do so without endorsing Yudkowsky’s views.
The game is a bit unfair, as it is inherently easier to “destroy” positivistic arguments with first-order inference: entropy, smoothness, etc. are basically where all the limitative theorems reside; arguments relying on these often collapse under scrutiny, and most of the time the failure is baked into the system.
The whole “less wrong” shtick forgets that sometimes there is only “wrong”. As if it were so simple to convert smartness into casino winnings… Different people would be rich.
Note: I will slowly start to delete posts like this due to irrelevancy and redundancy.
(not enhanced)
Let’s look at the most interesting of your arguments first:
“There are plenty of expansions you could make to the “evolutionary subset” (some of them trivial, some of them probably interesting) for which no theorem from complexity theory guarantees that the problem of predicting how any particular instance in the superset folds is intractable.”
This response treats the argument as if it were an appeal to worst-case complexity theory—i.e., “protein folding is NP-hard, therefore superintelligence can’t solve it outside the evolutionary subset.” My point rests on entropy and domain restriction, not on NP-hardness per se; it was just convenient to frame it in these terms. And so the existence of trivial or nontrivial supersets where hardness theorems don’t apply is irrelevant. But even so: what goes into a computer is already framed in an abstraction sufficient to determine NP-ness.
In a weird way, you are actually agreeing with what was written.
My argument is not: “the moment we enlarge the domain, NP-hardness dooms us.”
My argument is: “the moment we enlarge the domain, the entropy of the instance class can increase without bound, and the learned model’s non-uniform ‘advice string’ (its parameters) no longer encodes the necessary constraints.”
One point is demonstrably meh:
In general, hardness results from complexity theory say very little about the practical limits on problem-solving ability for AI (or humans, or evolution) in the real world, precisely because the “standard abstraction schemes” do not fully capture interesting aspects of the real-world problem domain, and because the results are mainly about classes and limiting behavior rather than any particular instance we care about.
Computational hardness does retain some relevance for AI, since AI systems exhibit the same broad pattern of struggling with problems whose structure reflects NP-type combinatorial explosion. Verifying a candidate solution can be easy, while discovering one can be difficult. Impressing your crush is “NP-like”: it is straightforward to determine whether a remark is effective, but difficult to determine in advance what remark will succeed, with or without AI.
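A small illustration of the verify-easy/find-hard asymmetry (my own example, not from the discussion), using subset sum:

```python
# Subset sum: checking a proposed certificate is linear-time,
# while the naive search sweeps all 2^n subsets.
from itertools import combinations

def verify(nums, subset, target):
    """Cheap check of a candidate certificate."""
    return all(x in nums for x in subset) and sum(subset) == target

def find(nums, target):
    """Brute-force search over every subset, smallest first."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

The asymmetry is structural: `verify` touches each element once, while `find` has no better generic recourse than enumerating the exponential space.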
Now again, I am not discussing complexity; this is more like a predicate argument for why something is a non sequitur.
An analogy: Consider the task of factoring every number between 1 and finite n using resources limited by some parameter m. We know in the abstract that factoring is computable, and we have strong reasons to believe that a polynomial-time algorithm exists. Yet none of this tells us whether such an algorithm can be carried out within the bound m. The existence of a procedure in principle does not reveal the resource profile of any procedure in practice. Even if the number system over [1,n] has clean analytic regularities—predictable prime density, well-behaved smoothness properties, low descriptive complexity—these features offer no constructive bridge from “a factoring algorithm must exist” to “a factoring algorithm exists under this specific resource limit.” The arithmetic regularities describe the domain; they do not generate the algorithm or its bounds.
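To make the analogy concrete, here is a hedged toy (the function name and bounds are mine): trial division with an explicit budget m, where “computable in principle” and “settled within the bound” visibly come apart.

```python
# A factoring procedure exists in principle for every n, but the
# resource bound m decides which instances it actually settles.

def factor_within_budget(n, m):
    """Trial division, aborting after m division attempts."""
    steps = 0
    d = 2
    while d * d <= n:
        steps += 1
        if steps > m:
            return None  # budget exhausted: no verdict either way
        if n % d == 0:
            return d     # a nontrivial factor of n
        d += 1
    return n             # no divisor up to sqrt(n): n is prime (or trivial)
```

Nothing about the clean arithmetic structure of the interval tells you, in advance, which inputs fall on the `None` side of a given m.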
Sure, thank you for thanking.
I will add a few observations/ideas on the topic.
1. It is often claimed that large language models overuse em dashes, but the matter is more nuanced (I read an article explicitly on this topic). Effective prose can employ em dashes for expressivity, and the choice is ultimately stylistic. I, for example, make frequent use of both em and en dashes and have come to prefer them in certain contexts.
2. There is a core epistemic concern: when we detect LLM-like features, we infer “this is produced by an LLM,” yet it does not follow inductively that every text exhibiting similar features must originate from one. Moreover, there is a form of survivorship bias in the texts we fail to identify as machine-generated. Additional complications arise when attempting to delineate where “slop” begins. Does it include grammar correction, stylistic adjustment, paraphrasing, revision, or prompt-driven rewriting?
3. The emphasis should rest primarily on content rather than on presentation. The central question is logical: could an LLM generate this material at scale? For instance, external references—such as video links, illustrations, or other artifacts—may alter that assessment.
For the content above, it stands on its own ground as a formal argument: entropy (both energetic and algorithmic) behaves, in computational reasoning, exactly like a classical logical fixed point—capable of certifying a stabilized structure but never capable of generating or justifying that structure. The latter is always outside the system. It is somewhat related to Löb’s Theorem, which many people take as a strengthening, some as a logical fixed point. Whatever it is, the classic theory of computation (1930–1970) is overlooked in 2025.
I am not a “native writer” (originally German/Hungarian), so naturally I get assistance with grammar but not with formal content. You can already extrapolate the latter fact from my (hand-drawn) TSP illustration on the iPad, so this question is a bit annoying.
A Critique of Yudkowsky’s Protein Folding Heuristic
Come on, this is like… :D Please.
I realized that your formulating the Turing problem in this way helped me a great deal in expressing the main idea.
What I did
Logic → Modular Logic → Modular Logic Thought Experiment → Human
Logic → Lambda Form → Language → Turing Form → Application → Human
This route is a one way street… But if you have it in logic, you can express it also as
Logic → Propositional Logic → Natural Language → Step-by-step propositions where you can say either yea or nay.
If you are logical you must arrive at the conclusion.
Thank you for this.
Update: After −45 Karma, I really want to make this point.
Adopt a label. Be rational, but do not be rigorous. Be TLDR. Congratulations, you are less wrong without more rigor. You never think outside the box. You will never be a proud owner of negative Karma, you are too afraid to think outside the box. You are the box.