I’m an independent researcher currently working on a sequence of posts about consciousness. You can send me anonymous feedback here: https://www.admonymous.co/rafaelharth. If it’s about a post, you can add [q] or [nq] at the end to indicate whether or not you want me to quote it in the comment section.
Rafael Harth
Yeah, valid correction.
If people downvoted because they thought the argument wasn’t useful, fine—but then why did no one say that? Why not critique the focus or offer a counter? What actually happened was silence, followed by downvotes. That’s not rational filtering. That’s emotional rejection.
Yeah, I do not endorse the reaction. The situation pattern-matches to other cases where someone new writes things that are so confusing and all over the place that making them ditch the community (which is often the result of excessive downvoting) is arguably a good thing. But I don’t think this was the case here. Your essays look to me to be coherent (and also probably correct). I hadn’t seen any of them before this post but I wouldn’t have downvoted. My model is that most people are not super strategic about this kind of thing and just go “talking politics → bad” without really thinking through whether demotivating the author is good in this case.
So if I understand you correctly: you didn’t read the essay, and you’re explaining that other people who also didn’t read the essay dismissed it as “political” because they didn’t read it.
Yes—from looking at it, it seems like it’s something I agree with (or, if not, disagree with for reasons that I’m almost certain won’t be addressed in the text), so I didn’t see a reason to read it. I mean, reading is a time investment; you have to give me a reason to invest that time. That’s how it works. But I thought the (lack of) reaction was unjustified, so I wanted to give you a better model of what happened, which also doesn’t take too much time.
Most people say capitalism makes alignment harder. I’m saying it makes alignment structurally impossible.
The point isn’t to attack capitalism. It’s to explain how a system optimised for competition inevitably builds the thing that kills us.
I mean that’s all fine, but those are nuances which only become relevant after people read, so it doesn’t really change the dynamic I’ve outlined. You have to give people a reason to read first, and then put more nuances into the text. Idk if this helps but I’ve learned this lesson the hard way by spending a ridiculous amount of time on a huge post that was almost entirely ignored (this was several years ago).
(It seems like you got some reactions now fwiw, hope this may make you reconsider leaving.)
I think you probably don’t have the right model of what motivated the reception. “AGI will lead to human extinction and will be built because of capitalism” seems to me like a pretty mainstream position on LessWrong. In fact I strongly suspect this is exactly what Eliezer Yudkowsky believes. The extinction part has been well-articulated, and the capitalism part is what I would have assumed is the unspoken background assumption. Like, yeah, if we didn’t have a capitalist system, then the entire point about profit motives, pride, and race dynamics wouldn’t apply. So… yeah, I don’t think this idea is very controversial on LW (reddit is a different story).
I think the reason that your posts got rejected is that the focus doesn’t seem useful. Getting rid of capitalism isn’t tractable, so what is gained by focusing on this part of the causal chain? I think that’s the part you’re missing. And because this site is very anti-[political content], you need a very good reason to focus on politics. So I’d guess that what happened is that people saw the argument, thought it was political and not-useful, and consequently downvoted.
Sorry, but isn’t this written by an LLM? Especially since milan’s other comments ([1], [2], [3]) are clearly in a different style, the emotional component goes from 9/10 to 0/10 with no middle ground.
I find this extremely offensive (and I’m kinda hard to offend, I think), especially since I’ve ‘cooperated’ with milan’s wish to point to specific sections in the other comment. Using LLMs in posts is one thing, but in comments, yuck. It’s like saying: you’re not worthy of me even taking the time to respond to you.
The guidelines don’t differentiate between posts and comments, but this violates them regardless (and actually the post does as well), since it very much does have the stereotypical writing style of an AI assistant, and the comment also seems copy-pasted, without a human element at all.
A rough guideline is that if you are using AI for writing assistance, you should spend a minimum of 1 minute per 50 words (enough to read the content several times and perform significant edits), you should not include any information that you can’t verify, haven’t verified, or don’t understand, and you should not use the stereotypical writing style of an AI assistant.
The sentence you quoted contains a typo; it’s meant to say that formal languages are extremely impractical.
Here’s one section that strikes me as very bad:
At its heart, we face a dilemma that captures the paradox of a universe so intricately composed, so profoundly mesmerizing, that the very medium on which its poem is written—matter itself—appears to have absorbed the essence of the verse it bears. And that poem, unmistakably, is you—or more precisely, every version of you that has ever been, or ever will be.
I know what this is trying to do but invoking mythical language when discussing consciousness is very bad practice since it appeals to an emotional response. Also it’s hard to read.
Similar things are true for lots of other sections here: very unnecessarily poetic language. I guess you can say that this is policing tone, but I think it’s valid to police tone if the tone is manipulative (on top of just making the text harder and more time-intensive to read).
Since you asked for a section that’s explicitly nonsense rather than just bad, I think this one deserves the label:
We can encode mathematical truths into natural language, yet we cannot fully encode human concepts—such as irony, ambiguity, or emotional nuance—into formal language. Therefore: Natural language is at least as expressive as formal language.
First of all, if you can’t encode something, it could just be that the thing is not well-defined, rather than that the system is insufficiently powerful.
Second, the way this is written (unless the claim is further justified elsewhere) implies that the inability to encode human concepts in formal languages is self-evident, presumably because no one has managed it so far. This is completely untrue; formal[1] languages are extremely impractical, which is why mathematicians don’t write any real proofs in them. If a human concept like irony could be encoded, it would be extremely long and way, way beyond the ability of any human to write down. So even if it were theoretically possible, we almost certainly wouldn’t have done it yet, which means that it not having been done yet is negligible evidence of it being impossible.
[1] Typo corrected from “natural”.
I agree that this doesn’t sound very valuable; it sounds like a repackaging of illusionism without adding anything. I’m surprised about the votes (didn’t vote myself).
The One True Form of Moral Progress (according to me) is using careful philosophical reasoning to figure out what our values should be, what morality consists of, where our current moral beliefs are wrong, or generally, the contents of normativity (what we should and shouldn’t do)
Are you interested in hearing other people’s answers to these questions (if they think they have them)?
I agree with various comments that the post doesn’t represent all the tradeoffs, but I strong-upvoted this because I think the question is legit interesting. It may be that the answer is no for almost everyone, but it’s not obvious.
For those who work on Windows, a nice little quality-of-life improvement for me was just to hide desktop icons and do everything by searching in the taskbar. (Would be even better if the search function wasn’t so odd.) Been doing this for about two years and like it much more.
Maybe for others, using the desktop is actually worth it, but for me, it was always cluttering up over time, and the annoyance over it not looking the way I want always outweighed the benefits. It really takes barely longer to go CTRL+ESC+”firef”+ENTER than to double-click an icon.
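(In case anyone wants to script the toggle rather than click through the desktop context menu: here’s a minimal Python sketch. That it works via the HideIcons registry value under Explorer\Advanced, plus an Explorer restart, is my understanding of the standard mechanism, not something from the tip above, so treat it as an assumption.)

```python
# Minimal sketch: toggle Windows desktop icons for the current user.
# Assumes the standard HideIcons DWORD under Explorer\Advanced and
# that restarting Explorer is enough to apply the change.
import subprocess
import winreg

KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

def set_desktop_icons(hidden: bool) -> None:
    """Write HideIcons (1 = hidden, 0 = shown) and restart Explorer."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "HideIcons", 0, winreg.REG_DWORD,
                          1 if hidden else 0)
    # Explorer only reads the value at startup, so restart it.
    subprocess.run(["taskkill", "/f", "/im", "explorer.exe"], check=False)
    subprocess.Popen("explorer.exe")

if __name__ == "__main__":
    set_desktop_icons(True)  # pass False to bring the icons back
```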
I don’t think I get it. If I read this graph correctly, it seems to say that if you let a human play chess against an engine and want it to achieve equal performance, then the amount of time the human needs to think grows exponentially (as the engine gets stronger). This doesn’t make sense if extrapolated downward, but upward it’s about what I would expect. You can compensate for skill by applying more brute force, but it becomes exponentially costly, which fits the exponential graph.
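To spell out the shape I mean (my own gloss on the graph, with $t_0$ and $k$ as hypothetical constants, not values from the post): if $\Delta$ is the engine’s rating advantage and $t(\Delta)$ the thinking time the human needs to match it, the graph seems to claim roughly

$$t(\Delta) \approx t_0 \, e^{k\Delta}$$

i.e., each fixed increment of engine strength multiplies the required thinking time by a constant factor, which is what a straight line on a log-time axis would show.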
It’s probably not perfect—I’d worry a lot about strategic mistakes in the opening—but it seems pretty good. So I don’t get how this is an argument against the metric.
Not answerable because METR is a flawed measure, imho.
Should I not have begun by talking about background information & explaining my beliefs? Should I have assumed the audience had contextual awareness and gone right into talking about solutions? Or was the problem more along the lines of writing quality, tone, or style?
What type of post do you like reading?
Would it be alright if I asked for an example so that I could read it?
This is a completely wrong way to think about it, imo. A post isn’t this thing with inherent terminal value that you can optimize for regardless of content.
If you think you have an insight that the remaining LW community doesn’t have, then and only then[1] should you consider writing a post. Then the questions become: is the insight actually valid, and did I communicate it properly? And yes, the second one is a huge topic—so if in fact you have something valuable to say, then sure, you can spend a lot of time trying to figure out how to do that, and what e.g. Lsuser said is fine advice. But first you need to actually have something valuable to say. If you don’t, then the only good action is to not write a post. Starting off by just wanting to write something is bound to be unfruitful.
[1] Yes, technically there can be other goals of a post (like if it’s fiction), but this is the central case.
I really don’t think this is a reasonable measure for ability to do long term tasks, but I don’t have the time or energy to fight this battle, so I’ll just register my prediction that this paper is not going to age well.
To offer another data point, I guess: I’ve had an obsessive nail-removing[1] habit for about 20 years. I concur that it can happen unconsciously; however, noticing it seems to me like 10-20% of the problem; the remaining 80-90% is resisting the urge to follow the habit when you do notice. (As for enjoying it, I think technically yeah, but it’s for such a short amount of time that it’s never worth it. Maybe if you just gave in and were constantly biting instead of trying to resist for as long as possible, it’d be different.) I also think I’ve solved the noticing part without really applying any specific technique.
But I don’t think this means the post can’t still be valuable for cases where noticing is the primary obstacle.
[1] I’m not calling it nail-biting because it’s not about the biting itself; I can equally remove them with my other fingernails.
Oh, nice! The fact that you didn’t make the time explicit in the post made me suspect that it was probably much shorter. But yeah, six months is long enough, imo.
I would highly caution declaring victory too early. I don’t know for how long you think you’ve overcome the habit, but unless it’s at least three months, I think you’re being premature.
A larger number of people, I think, desperately desperately want LLMs to be a smaller deal than what they are.
Can confirm that I’m one of these people (and yes, I worry a lot about this clouding my judgment).
Again, those are theories of consciousness, not definitions of consciousness.
I would agree that people who use consciousness to denote the computational process vs. the fundamental aspect generally have different theories of consciousness, but they’re also using the term to denote two different things.
(I think this is because consciousness is notably different from other phenomena—e.g., fiber decreasing risk of heart disease—where the phenomenon is relatively uncontroversial and only the theory about how the phenomenon is explained is up for debate. With consciousness, there are a bunch of “problems” about which people debate whether they’re even real problems at all (e.g., the binding problem, the hard problem). Those kinds of disagreements are likely causally upstream of inconsistent terminology.)
I might be misunderstanding how this works, but I don’t think I’m gonna win the virtue of The Void anytime soon. Or at all.