I think you’re massively overestimating Eliezer Yudkowsky’s intelligence. I would guess it’s somewhere between +2 and +3 SD (roughly IQ 130–145 on a 15-point-SD scale).
Who are some other examples?
What did you think of?
Even if it wasn’t meant to be an allegory for race science, I’m pretty sure it was meant to be an allegory for similarly-taboo topics rather than religion. Religious belief just isn’t that taboo.
Hmm, it seems like you might be treating this post as an allegory for religion because of the word “agnostic”, but I’m almost certain that it’s not. I think it’s about “race science”/“human biodiversity”/etc., i.e. the claim “[ethnicity] are genetically predisposed to [negative psychological trait]”.
Before I do that, though, it’s clear that horrible acts have been committed in the name of dragons. Many dragon-believers publicly or privately endorse this reprehensible history. Regardless of whether dragons do in fact exist, repercussions continue to have serious and unfair downstream effects on our society.
While this could work as a statement about religious people, it seems a lot more true for modern racists than modern religious people.
Given that history, the easy thing to do would be to loudly and publicly assert that dragons don’t exist. But while a world in which dragons don’t exist would be preferable, that a claim has inconvenient or harmful consequences isn’t evidence of its truth or falsehood.
This is the type of thing I often see LessWrongers say about race science.
But if I decided to look into it I might instead find myself convinced that dragons do exist. In addition to this being bad news about the world, I would be in an awkward position personally. If I wrote up what I found I would be in some highly unsavory company. Instead of being known as someone who writes about a range of things of varying levels of seriousness and applicability, I would quickly become primarily known as one of those dragon advocates. Given the taboos around dragon-belief, I could face strong professional and social consequences.
Religious belief is not nearly as taboo as what this paragraph describes, but the claim “[ethnicity] are genetically predisposed to [negative psychological trait]” is.
There are more rich people that choose to give up the grind than poor people.
Did you mean to say “There are more poor people that choose to give up the grind than rich people”?
So, according to this estimate, if we could freeze-frame a single moment of our working memory and then explain all of the contents in natural language, it would take about a minute to accomplish.
This seems like a potentially misleading description of the situation. It seems to say that the contents of working memory could always be described in one minute of natural language, but this is not implied (as I’m sure you know based on your reasoning in this post). A 607-digit number (the decimal equivalent of 2016 bits) cannot be read aloud in one minute. 2016 bits of memory and about 2016 bits of natural language per minute really means that if our working memory were perfectly optimized for storing natural language, and only natural language, it could store about one minute of it.
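To make the conversion explicit (a rough check, using the post’s 2016-bit figure):

$$2016 \text{ bits} \times \log_{10} 2 \approx 2016 \times 0.301 \approx 607 \text{ decimal digits}$$

At a speaking rate of roughly two digits per second, that is about five minutes of speech, not one.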
(And on that note, how much natural language can the best memory athletes store in their working memory? One minute seems low to me. If they can actually store more, it would show that your bit estimate is too low.)
Even assuming perfect selfishness, sometimes the best way to get what you want (X) is to coordinate to change the world in a way that makes X plentiful, rather than fighting over the rare Xs that exist now, and in that way, your goals align with other people who want X.
E.g. learning when you’re rationalizing, when you’re avoiding something, when you’re deluded, [...] when you’re really thinking about something else, etc.
It seems extremely unlikely that these things could be seen in fMRI data.
I think I got it. Right after the person buys X for $1, you offer to buy it off them for $2, but with a delay, so they keep X for another month before the sale goes through. After the month passes, they now value X at $3 so they are willing to pay $3 to buy it back from you, and you end up with +$1.
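To make the flows explicit, here’s a minimal ledger sketch of the trade as I read it (the dollar amounts and timing are from the scenario above; everything else is just illustration):

```python
# Ledger for the money pump described above, from the pumper's
# point of view (negative = cash paid out, positive = cash received).
pumper_cash = 0

# Month 0: they buy X for $1. You offer them $2 for X,
# with the sale settling one month later. No cash moves yet.

# Month 1: the delayed sale settles; you pay $2 and receive X.
pumper_cash -= 2

# Month 1: having owned X for a month, they now value it at $3,
# so they buy it back from you for $3.
pumper_cash += 3

print(pumper_cash)  # +1: profit extracted from their shifted valuation
```

Meanwhile they’ve paid $1 + $3 − $2 = $2 net and still just own X, which originally cost them $1, and the cycle can repeat.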
What happens if the parrots have their own ideas about who to breed with? Or the rejected parrots don’t want to be sterilised?
It’s worth noting that both of these things are basically already true, and don’t require great intelligence.
Autonomous lethal weapons (ALWs; we need a more eerie, memetic name)
There’s already a more eerie, memetic name. Slaughterbots.
Maybe something like “mundane-ist” would be better. The “realists” are people who think that AI is fundamentally “mundane” and that the safety concerns with AI are basically the same as safety concerns with any new technology (increases inequality by making the powerful more powerful, etc.) But of course “mundane-ist” isn’t a real word, which is a bit of a problem.
Can’t tell if sarcastic
Wild speculation ahead: Perhaps the aversion to this sort of rationalization is not wholly caused by the suboptimality of rationalization, but also by certain individualistic attitudes prevalent here. Maybe I, or Eliezer Yudkowsky, or others, just don’t want to be the sort of person whose preferences the world can bend to its will.
Yes, and another meaning of “rationalization” that people often talk about is inventing fake reasons for your own beliefs. This may also be practically rational in certain situations (certain false beliefs could be helpful to you), but it is obviously a major crime against epistemic rationality.
I’m also not sure that rationalizing your past personal decisions isn’t an instance of this. The phrase “I made the right choice” could be interpreted as meaning you believe you would have been less satisfied now if you had chosen differently; if that isn’t true, but you are trying to convince yourself it is in order to be happier, then that is also a major crime against epistemic rationality.
I wish this post had gone more into the specific money pump you would be vulnerable to if you rationalize your past choices. I can’t picture what money pump would be possible in this situation (but I believe you that one exists). Also, your not describing the specific money pump reduces the salience of the concern (improperly, in my opinion). It’s one thing to talk abstractly about money pumps, and another to see right in front of you how your decision procedure endorses obviously absurd actions.
Like, as far as I’m concerned, I’m trans because I chose to be, because being the way I am seemed like a better and happier life to have than the alternative. Now sure, you could ask, “yeah but why did I think that? Why was I the kind of agent that would make that kind of choice? Why did I decide to believe that?”
Yes, this is a non-confused question with a real answer.
Well, because I decided to be the kind of agent that could decide what kind of agent I was. “Alright octavia but come on this can’t just recurse forever, there has to be an actual cause in biology” does there really?
In a literal/trivial sense, all human actions have a direct cause in the biology of the human brain and body. But you are probably using “biology” in a way that refers to “coarse” biological causes like hormone levels in utero, rather than individual connections between neurons, as well as excluding social causes. In that case, it’s at least logically possible that the answer to this question is no. It seems extremely unlikely that coarse biological factors play no role in determining whether someone is trans (I expect coarse biological factors to be at least somewhat involved in determining the variance in every relevant high-level trait of a person), but it’s very plausible that there is not one discrete cause to point to, or that most of the variance in gender identity is explained by social factors.
If a brain scan said I “wasn’t really trans” I would just say it was wrong, because I choose what I am, not some external force.
This seems like a red herring to me. As far as I know, no transgender brain research is attempting to diagnose trans people by brain scan in a way that overrides their verbal reports and behavior; rather, it aims to find correlates of those verbal reports and behavior in the brain. If we find a characteristic set of features in the brains of most trans people, but not all, it will then be a separate debate whether we should consider this newly discovered thing the true meaning of the word “transgender”, or keep using the word the way we used it before, to refer to a pattern of self-identity and behavior. The “keep using it the same way” side seems quite reasonable. Even now, many people understand the word “transgender” as an “umbrella term” that encompasses people who may not have the same underlying motivations.
Morphological freedom without metaphysical freedom of will is pointless.
If by “metaphysical freedom of will” you mean libertarian free will, then I have to disagree. Even if libertarian free will doesn’t exist (it doesn’t), it is still beneficial to me for society to allow me the option of changing my body. If you are confused about how the concept of “options” can exist without libertarian free will, that problem has already been solved in Possibility and Could-ness.
I’ve noticed people using formal logic/mathematical notation unnecessarily to make their arguments seem more “formal”: ∀x∈X(∃y∈Y|Q(x,y)), f:S→T, etc. Eliezer Yudkowsky even does this at some points in the original sequences. These symbols were pretty intimidating to me before I learned what they mean, and I imagine they would be confusing/intimidating to anyone without a mathematical background.
Though I’m a bit conflicted on this one, because if the formal logic notation of a statement is shown alongside the English description, it could actually help people who wouldn’t otherwise have learned logic notation. But it shouldn’t be used as a replacement for the English description, especially for simple statements that can easily be expressed in natural language. It often feels like people are trying to signal intellectualism at the expense of accessibility.
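For example, pairing the first formula above with its English reading:

$$\forall x \in X \;(\exists y \in Y \mid Q(x, y))$$

“For every x in X, there exists a y in Y such that Q(x, y) holds.”

Shown together like this, the notation can teach itself; shown alone, it just gates the argument behind prior training.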
What are you talking about, then? It seems like you’re talking about probabilities as the objective proportion of worlds in which something happens, under some sort of multiverse theory, even if it’s not the Everett multiverse. And when you said “There won’t be any iff there is a 100.0000% probability of annihilation”, you were replying to a comment about whether there will be any Everett branches where humans survive, so it was reasonable for me to think you were talking about Everett branches.
But are you sure the way in which he is unique among people you’ve met is mostly about intelligence rather than intelligence along with other traits?